View Full Version : interleaved arrays are supposed to be faster, aren't they ?

07-09-2002, 01:53 AM
Hi !

I'm running on a geforce3 ti200, athlon xp, DDR memory, windows 98.

I've replaced all my ordinary vertex/normal/texcoord arrays with interleaved arrays. I was expecting this to improve the rendering speed.

In some cases I've gotten exactly the same speed; in others, 1 or 2% lower speed (I can't be totally sure of this, since the exact viewpoint was not easy to reproduce). The formats were GL_T2F_N3F_V3F and GL_C4UB_V3F; the modes were GL_TRIANGLE_STRIP, GL_TRIANGLES, and GL_POINTS.

I'd like to know whether this could be fixed in a future version of the nvidia drivers, or whether it's inherent to the geforce3 architecture. In the latter case, is there other hardware on which interleaved arrays are faster? Or are interleaved arrays a wholly useless thing?

Thanks for any advice!

[This message has been edited by Morglum (edited 07-09-2002).]

07-09-2002, 02:35 AM
I'm not sure how cache friendly interleaved arrays are. Did you try them using VAR, CVAs or just plain ol' arrays?

I've changed over to using them recently, but only because it's helpful to have all the vertex information in a single structure rather than spread out over several arrays. I haven't noticed a difference in speed at all - and I wouldn't mind a 1-2% difference anyway.

07-10-2002, 12:06 AM
I don't know about CVAs or VARs. Are they vendor-dependent? If not, do you have a link to a tutorial, or some sample code?
Thanks!