Okay, I’ve written my bada**, all-purpose game engine. Who hasn’t these days? Currently, I’ve just written a completely generic texture shader class that supports an arbitrary number of blend modes and stages, fully utilizing the max texture units supported by the hardware and doing ceil(num stages / max texture units) passes if necessary.
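For the pass math, something like this is what I mean (a minimal sketch; PassesNeeded is just an illustrative name, and the unit count comes from the standard ARB_multitexture query):

```cpp
#include <GL/gl.h>
#include <GL/glext.h>

// Query how many texture units the hardware exposes and derive the
// number of passes a shader with nStages stages needs.
int PassesNeeded(int nStages)
{
    GLint nMaxUnits = 1;
    glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &nMaxUnits);

    // Integer ceiling: e.g. 5 stages on 2 units -> 3 passes.
    return (nStages + nMaxUnits - 1) / nMaxUnits;
}
```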
My only problem is that my data was (and still is) stored as interleaved arrays, whose UVs/STs only affect GL_TEXTURE0_ARB. Is there a way to either specify texcoords per stage, or have each stage recognize my texcoords, without rearranging my data into independent vector/UV/color/normal arrays?
That’s what I was preparing to do; I was blanking on the fact that I could specify a stride value, and was considering reorganization instead.
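A minimal sketch of the stride approach, assuming a T2F/C4F/N3F/V3F-style interleave (the struct layout and function name are illustrative, and glClientActiveTextureARB is assumed to be loaded by your extension loader):

```cpp
#include <GL/gl.h>
#include <GL/glext.h>

// Hypothetical interleaved layout; adjust to match your actual data.
struct SInterleavedVert
{
    float uv[2];
    float color[4];
    float normal[3];
    float pos[3];
};

// Point every client texture unit at the same UVs inside the
// interleaved array, instead of splitting the data apart.
void PointAllUnitsAtSharedUVs(const SInterleavedVert* pVerts, GLint nUnits)
{
    for (GLint n = 0; n < nUnits; ++n)
    {
        glClientActiveTextureARB(GL_TEXTURE0_ARB + n);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        // The stride steps over the whole vertex, so each unit reads
        // the same UVs straight out of the interleaved array.
        glTexCoordPointer(2, GL_FLOAT, sizeof(SInterleavedVert), pVerts->uv);
    }
}
```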
Now, is a stride of zero any more efficient than a stride of, say, 20? That is, does the driver use different code internally when there is no stride, to save per-element arithmetic?
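For reference, here’s what I mean; for tightly packed data the two calls below describe the exact same layout, since per the spec a stride of zero just means “derive the stride from size and type.” Whether drivers have a separate fast path for zero is the part this sketch can’t answer (the array contents are made up):

```cpp
#include <GL/gl.h>

// A tightly packed array of 3 floats per vertex.
static const float afPositions[] = {
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
};

void DescribePositions()
{
    glVertexPointer(3, GL_FLOAT, 0, afPositions);                 // stride 0: tightly packed
    glVertexPointer(3, GL_FLOAT, 3 * sizeof(float), afPositions); // explicit 12-byte stride
}
```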
Using glBlahPointer(…) for the different elements, with the stride set to the size of my CVertex class (which contains a single texcoord, color, vector, and normal), was much faster than the interleaved arrays for some reason. The only thing I don’t like is that I currently set the client active texture and call glTexCoordPointer for every existing texture unit, GL_TEXTURE0_ARB + nTexUnit; nTexUnit on my GeForce2 runs from 0 to 1. That’s only temporary, though.
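Here’s roughly that setup as a sketch; the field order in CVertex and the BindVertexArrays name are illustrative, and glClientActiveTextureARB is again assumed to be loaded already:

```cpp
#include <GL/gl.h>
#include <GL/glext.h>

// Hypothetical CVertex layout: one texcoord, color, normal, position.
class CVertex
{
public:
    float uv[2];
    float color[4];
    float normal[3];
    float pos[3];
};

void BindVertexArrays(const CVertex* pVerts, GLint nUnitsInUse)
{
    // Each pointer uses sizeof(CVertex) as the stride, so all the
    // elements stay in one array of CVertex objects.
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(CVertex), pVerts->pos);

    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_FLOAT, sizeof(CVertex), pVerts->color);

    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, sizeof(CVertex), pVerts->normal);

    // Loop only over the units the current pass actually uses, rather
    // than every unit the hardware reports (the "temporary" part).
    for (GLint nTexUnit = 0; nTexUnit < nUnitsInUse; ++nTexUnit)
    {
        glClientActiveTextureARB(GL_TEXTURE0_ARB + nTexUnit);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, sizeof(CVertex), pVerts->uv);
    }
}
```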