Array separation...

Hmm, I’ve put a lot of thought into this one…

Currently OpenGL’s arrays work in a way that consumes more memory and processing than necessary. For example…

Say we have a texture-mapped cube (8 vertices, 6 normals and 4 texcoords)…

Without arrays we have these calculations:
24 vertex transformations (8 vertices)
6 vector transformations (6 normals)
24 texcoord transformations (4 texcoords)
and consumes 0 array elements

Using OpenGL’s current arrays for vertices only, we have these calculations:
8 vertex transformations (8 vertices)
6 vector transformations (6 normals)
24 texcoord transformations (4 texcoords)
and consumes 24 (8*3) array elements

Using OpenGL’s current arrays for vertices, normals and texcoords, we have these calculations:
8 vertex transformations (8 vertices)
8 vector transformations (6 normals)
8 texcoord transformations (4 texcoords)
and consumes 64 (8*3 + 8*3 + 8*2) array elements

If we separated the arrays, and instead of a single function like glArrayElement had functions like glArrayVertex, glArrayNormal and glArrayTexCoord, we would reduce it to the bare minimum of calculations:
8 vertex transformations (8 vertices)
6 vector transformations (6 normals)
4 texcoord transformations (4 texcoords)
and consumes 50 (8*3 + 6*3 + 4*2) array elements

It may not seem like much, but let’s say we had a highly detailed object; it would make a much bigger difference. Take a model of a character with 1024 vertices and a texture mirrored on both sides of him. With the current arrays it would be like this:
1024 vertex transformations (1024 vertices)
1024 vector transformations (1024 normals)
1024 texcoord transformations (512 texcoords)
consumes 8192 (1024*3 + 1024*3 + 1024*2) array elements

And with separate arrays:
1024 vertex transformations (1024 vertices)
1024 vector transformations (1024 normals)
512 texcoord transformations (512 texcoords)
consumes 7168 (1024*3 + 1024*3 + 512*2) array elements

You can see, then, that whenever the arrays differ in size, the current scheme causes over-calculation and excess memory usage. This happens with flat shading (one normal per polygon instead of one per vertex) and with shared texture coordinates. In practice, depending on the scene graph architecture, this rigid array system can also add overhead. My current scene graph conserves memory and is flexible, allowing flat-shaded and smooth-shaded polygons to coexist on the same object; with OpenGL’s arrays this forces me to either use a vertex array alone, or risk duplication and over-calculation.

This is a suggested feature, although I am curious why it was not implemented in the first place. I recall a time when people were debating how much function calls affect performance, and glArrayElement reduces function calls by a lot; I’m curious to know if that was the reason?

I’d also like to talk about DrawArrays, but that is for another time.