Vertex Array Range Performance Problem

I just converted the code in the game I’m working on to use NVidia’s VAR (vertex array range) extension. The thing is, in scenes with a lot of detail, I was actually getting better performance with plain vertex arrays. But if I turn to face a wall or corner in a room (so that relatively few polygons are drawn), the frame rate is better with VAR.

After experimenting, I noticed that if I disable the texture coordinate arrays (the scene is multi-textured), performance more than doubles in some places! The polys are still being textured (although with seemingly random texture coordinates now that the texcoord array is disabled), so I don’t think it’s directly related to the card’s texturing or fill-rate capabilities. (It’s a GeForce2, btw.)

So, why would enabling texture coordinate arrays slow it down so much? Right now I’m using a single interleaved array that holds the position, color, and texture coordinates together. Everything is defined as a float, so it should all be 4-byte aligned.

Thanks,
Marc

Have you made sure that the vertex array range is valid for all buffer calls?

Note that you can only use GL_SHORT and GL_FLOAT for your texture coordinates (and most other attributes); the exception is colors, which should be GL_UNSIGNED_BYTE. At least, that’s what I’ve gleaned from reading the various nVIDIA specs.

If the VAR is not valid (alignment, data type, whatever), the driver has to unpack your array and re-pack it into some other format, which is slow. You can find out whether this is happening by running VTune: if you see a lot of samples in a routine that appears to read and write data in small tuples (triplets, quadruplets), you’re very likely looking at a driver read-back.