glVertexAttribPointer

Using glVertexAttribPointer gives you more options for vertex formatting, but is there any reliable mapping to the conventional attributes? For example, can I safely use attrib n to feed data into the vertex color? I know you can bind the indices for GLSL (which works, of course, but seems inconvenient to me), but if I’m using fixed function rendering, am I just SOL if I want to use this function? The spec seems to state that generic vertex attrib pointers alias the ‘classic’ vertex attributes, but doesn’t detail any specific mapping (other than maybe 0 to position)…

if I’m using fixed function rendering, am I just SOL if I want to use this function?
The purpose of glVertexAttribPointer is to pass data to shaders. If you’re using fixed-function, AttribPointer is of no value to you.
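For example, here’s a minimal sketch of the intended use; the attribute name "myColor" and index 3 are arbitrary choices of mine, not anything mandated by the spec:

// Bind generic attribute 3 to the shader input "myColor"
// (this must happen before linking the program).
glBindAttribLocation(program, 3, "myColor");
glLinkProgram(program);

// Feed per-vertex colors through generic attribute 3.
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, colorData);

In the vertex shader this arrives as "attribute vec4 myColor;", already normalized to [0…1].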

nVidia hardware uses an explicit numerical mapping, but ATi doesn’t specify one, and neither does the specification.
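For reference, the aliasing NVIDIA documents in the NV_vertex_program spec (and which their GLSL implementation has historically honored) looks like this; relying on it is non-portable:

0       position         (glVertex)
1       vertex weights   (glWeightARB)
2       normal           (glNormal)
3       primary color    (glColor)
4       secondary color  (glSecondaryColor)
5       fog coordinate   (glFogCoord)
6, 7    unused
8..15   texture coords 0..7 (glMultiTexCoord)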

Using glVertexAttribPointer gives you more options for vertex formatting
Those extra formatting options don’t matter if you’re using fixed-function: the pipeline still expects the conventional attributes, and they have to be provided by something, if not by your application.

The extra formatting options do matter. Suppose I want to buffer my vertex normals as unsigned bytes to reduce my data size: unless I can use VertexAttribPointer with fixed function rendering, this isn’t possible, as glTexCoordPointer and glNormalPointer don’t accept that format (and signed bytes don’t seem to be natively supported by my hardware). So from the sounds of it, either I use shaders everywhere or I pick a different format. Thank you for your help!

suppose I want to buffer my vertex normals as unsigned bytes to reduce my data size
For normals, there’s no reason to use unsigned bytes rather than regular signed bytes: normals are fundamentally signed quantities. And glNormalPointer does indeed support signed bytes.
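A quick sketch of what signed-byte normals look like with the fixed-function pointer; the quantization helper and array are my own illustration:

// Quantize a normal component from [-1.0, 1.0] to a signed byte.
// Any slight denormalization from rounding is corrected if
// GL_NORMALIZE or GL_RESCALE_NORMAL is enabled.
GLbyte quantize(float c) { return (GLbyte)(c * 127.0f); }

GLbyte normals[3 * NUM_VERTS];  /* filled with quantized x, y, z */

glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_BYTE, 0, normals);  /* GL_BYTE is legal here; GL_UNSIGNED_BYTE is not */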

As for texture coordinates, you only gain one bit of precision by using unsigned values rather than signed ones.

For the record, signed bytes aren’t natively supported on my hardware (GeForce 7900); using them makes things quite slow.

I’ve had some strange results with signed shorts in glTexCoordPointer: they seem fast enough, but the values I’m getting are huge, and it’s almost as if they aren’t being normalized.
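One workaround I might try, sketched here but untested, and assuming my texcoords were quantized so that 32767 equals 1.0: fold the rescale into the texture matrix, which keeps fixed function working even with unnormalized shorts.

/* Texcoords stored as signed shorts, quantized so 32767 == 1.0. */
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_SHORT, 0, texcoords);

/* Compensate for the missing normalization via the texture matrix. */
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(1.0f / 32767.0f, 1.0f / 32767.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);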

But anyway, it’s all academic, since it seems the answer to my question is ‘no’.

Related thread of mine: http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic;f=3;t=015252

Signed shorts are also fast! Signed shorts and unsigned bytes on nVidia: that’s what I mentioned some years ago. But bytes must be normalized (they get a [0…1] range), while shorts must be non-normalized (giving a [-32768.0…+32767.0] range), so you have to map them to your range in the shader.
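A sketch of that shader-side remapping; the attribute locations, names, and the 32767.0 quantization factor are my assumptions:

/* Unsigned bytes: let the GL normalize them to [0, 1]. */
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, byteColors);

/* Signed shorts: pass through non-normalized, rescale in the shader. */
glVertexAttribPointer(2, 2, GL_SHORT, GL_FALSE, 0, shortTexcoords);

And the matching legacy-GLSL vertex shader:

attribute vec4 byteColor;    // already in [0, 1] thanks to normalization
attribute vec2 rawTexcoord;  // raw short values in [-32768, 32767]

void main()
{
    gl_TexCoord[0].st = rawTexcoord / 32767.0;  // map back to [-1, 1]
    gl_FrontColor    = byteColor;
    gl_Position      = ftransform();
}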