Vertex Attributes

Hi

I have always been using old-style vertex arrays, that is, I always used glVertexPointer, glNormalPointer, etc. to set up my vertex arrays.

Now I wanted to switch to generic vertex attributes, but ran into a few problems.

  1. In another thread someone mentioned that one cannot interleave arrays when using generic vertex attributes. So far I don’t see why that should be the case. Did I miss something?

  2. Regarding the stride parameter: I just noticed that my vertex arrays always work fine, although I pass different stride values. For example, for position I always passed 12 as the stride, but 0 works too. The specification is a bit unclear. Does the stride mean the offset I need to add to the address of vertex n to get to vertex n+1, or does it mean an additional delta that needs to be added? For example, if I use 3-component data but add a fourth component as padding for better alignment, would the stride be the size of that fourth component?

I assume it is the first case, and that for position a stride of 0 is treated by the driver as 12 (the size of the position).
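That is, I assume these two calls would be equivalent for a tightly packed array of 3-float positions (positions being a placeholder):

glVertexPointer(3, GL_FLOAT, 0, positions);                    /* stride 0: tightly packed */
glVertexPointer(3, GL_FLOAT, 3 * sizeof(GLfloat), positions);  /* explicit stride of 12 */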

  3. I use vertex attribute array 3 (color), but in my shader gl_Color does not seem to be set.
    Also, the GLSL specification says nothing about generic vertex attributes, so I assume I am forced to read gl_Color in the vertex shader instead of gl_Attrib[3] or something? A bit inconsistent, I think. Especially if the values are actually not passed into the variables. Or am I doing something wrong?

This is on an ATI X1600.

Thanks,
Jan.

  1. There is no generic variant of InterleavedArrays. Just store your data in the same layout you did before, and call glVertexAttribPointer multiple times (once per attribute):
struct my_vertex {
   float position[4];
   float color[3];
   float pad;
};
struct my_vertex *vertices = ...;

/* offsetof() comes from <stddef.h> */
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(struct my_vertex), vertices);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(struct my_vertex),
                      (const char *)vertices + offsetof(struct my_vertex, color));
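Then enable both arrays before drawing, e.g. (a sketch; vertex_count is a placeholder):

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(3);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);
glDisableVertexAttribArray(3);
glDisableVertexAttribArray(0);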
  2. The stride is whatever you want it to be. If you want to upload 3-float values but want to leave storage for 4 for alignment reasons, you can do so. In that case the stride would be 16, assuming you didn’t have any other vertex attributes interleaved in.

As you guessed, 0 is shorthand for ‘This attribute is tightly packed’.
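As a sketch of the padded case (positions is a hypothetical client array, one element per vertex):

struct padded_position {
   float xyz[3];
   float pad;   /* unused; keeps each element 16 bytes */
};
struct padded_position *positions = ...;

/* size = 3 components, stride = 16 bytes per vertex */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(struct padded_position), positions);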

  3. Unlike ARB programs, there is no aliasing between generic and fixed-function attributes. If you want your color to be in vertex attribute 3, you should use glBindAttribLocation() to bind index 3 to a specific attribute name in the GLSL shader.

i.e., your shader would include, at the top:

attribute vec3 my_color;

And in the CPU-side program:

glBindAttribLocation(program_name, 3, "my_color");
glLinkProgram(...);

One more thing to be aware of: when using DrawArrays/DrawElements, you must have either the fixed-function VertexPointer attribute enabled or generic attribute 0 enabled. If you let OpenGL assign all of your attribute bindings automatically (by not calling BindAttribLocation with index 0 yourself), it is allowed to leave index 0 unassigned (it seems silly, but there are some good reasons).
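So to be safe, bind the position attribute to index 0 yourself before linking. A minimal sketch (my_position is a hypothetical attribute name, alongside the my_color example above):

/* make sure generic attribute 0 carries the position */
glBindAttribLocation(program_name, 0, "my_position");
glBindAttribLocation(program_name, 3, "my_color");
glLinkProgram(program_name);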

Great, that answers most of my questions.

Although I dislike the idea that I need to bind my arrays to some variable.

As I have heard, ATI in particular had many problems with generic vertex attributes in their drivers. Is that still true today, or can I use them without hesitation?

Thanks,
Jan.

Although I dislike the idea that I need to bind my arrays to some variable.
Well, perhaps adding gl_VertexAttrib[n] to GLSL would be OK, but I wouldn’t use it. I prefer to have my attributes named (tangent, normal, etc.) - it makes both shader and application code look a lot cleaner.

As I have heard, ATI in particular had many problems with generic vertex attributes in their drivers. Is that still true today, or can I use them without hesitation?
The best option is not to mix built-in attributes with generic attributes. That also means you bind vertex position to attribute 0.
To be honest, I do mix these and it works on both GeForce and Radeon GPUs.

Although I dislike the idea that I need to bind my arrays to some variable.
You can also do it the other way round: you let GL do the binding and query the attribute number given the name. You still have to bind attribute 0 yourself (or just keep using gl_Vertex).
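In code, that might look like this (a sketch; program_name and my_color are from the earlier example, color_array is a placeholder):

glLinkProgram(program_name);
GLint loc = glGetAttribLocation(program_name, "my_color");
if (loc >= 0) {
   /* the linker chose the index; use whatever it picked */
   glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, color_array);
   glEnableVertexAttribArray(loc);
}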

I’ve never noticed any problems with generic attributes on ATI hardware (and I’m currently working on Linux/IA64, which is an even worse combination than Linux/x86 when it comes to ATI drivers :P).

Hm, right now I am playing with generic vertex attributes and it doesn’t work at all.

When I use glVertexPointer, everything works fine. My test data only contains position information.
However, when I replace it with glVertexAttribPointer and bind it to index 0, my app crashes at glDrawElements because of a NULL-pointer access.

Jan.

The moment I use ANY generic vertex attribute, my app crashes at the next draw call.

Is there any example of how to use it that works on ATI? I have searched everywhere, but this seems to be a rarely used feature.

I added an attribute to a shader, checked its location (which was 1), and set my color array to it. I set the position array the old way, since binding to location 0 does not work at all. However, merely specifying an array at location 1 already yields a crash.

Any hints, examples, whatever, are much appreciated.

Thanks,
Jan.

This is how I do it in my game. I used to have version 1, but after encountering some driver bugs I got a suggestion to use glMapBuffer (version 2) - that didn’t solve the problem (it was something else). Anyway, both approaches work on GeForce and Radeon.

Init (version 1):

  glGenBuffersARB(1, &vertexBufferHandle);
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertexBufferHandle);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertexCount * sizeof(GLfloat) * 3, (const void*)vertexArray, GL_STATIC_DRAW_ARB);

Init (version 2):

  glGenBuffersARB(1, &vertexBufferHandle);
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertexBufferHandle);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertexCount * sizeof(GLfloat) * 3, NULL, GL_STATIC_DRAW_ARB);
  void* pointer = glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
  memcpy(pointer, (const void*)vertexArray, vertexCount * sizeof(GLfloat) * 3);
  glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);

Use:

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertexBufferHandle);
  glVertexPointer(3, GL_FLOAT, 0, (const void*)NULL);
  glEnableClientState(GL_VERTEX_ARRAY);
  glDrawElements(...);
  glDisableClientState(GL_VERTEX_ARRAY);
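For comparison, a generic-attribute version of the “Use” part might look like this (a sketch; it assumes the shader’s position attribute was bound to index 0 at link time):

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertexBufferHandle);
  /* attribute 0 = position, 3 floats per vertex, tightly packed, offset 0 into the VBO */
  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void*)NULL);
  glEnableVertexAttribArray(0);
  glDrawElements(...);
  glDisableVertexAttribArray(0);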

Jan:
The moment I use ANY generic vertex attribute, my app crashes at the next draw call.
I don’t know if this helps, but I experienced something similar when I first used them. You need to keep your attribute pointer calls and enable calls consistent.

That is, you should either use:

  - gl{Vertex,Color,etc.}Pointer()
  - glEnable/DisableClientState()

OR you should use:

  - glVertexAttribPointer()
  - gl{Enable,Disable}VertexAttribArray()

Do not mix these up within an attribute! If you do, you’ll get a GL error and the draw will fail (on NVidia).
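For example (a sketch; positions is a hypothetical client-side array):

/* fixed-function pairing */
glVertexPointer(3, GL_FLOAT, 0, positions);
glEnableClientState(GL_VERTEX_ARRAY);

/* generic pairing -- enabled/disabled only via gl{Enable,Disable}VertexAttribArray */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, positions);
glEnableVertexAttribArray(0);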

Yes, I heard about that. Therefore I limited my tests to only using generic vertex attribute 0, for position, and nothing else. However, it just crashes.

If I mix (old position array, but color as a generic attribute), it also crashes.

It just crashes at the first glDrawElements, the moment I touch generic vertex attributes in any way.

The thing I wonder about is that I had one array of generic vertex attributes enabled a few days ago and it didn’t crash (I didn’t know I had to call a function to bind it to an attribute in GLSL, so it didn’t fully work either). But I cannot get it to work again; I don’t know what I am doing differently now. It seems there is some other state that produces a conflict, I just don’t know what else I changed.

Thanks for all your suggestions, I’ll keep trying to find out what the problem is.

Jan.

OK, I’ve been an idiot, but now I got it to work.

In glVertexAttribPointer I passed as “size” (the second parameter) the size of the array in bytes.

However, it expects the number of components of the attribute (1, 2, 3 or 4). And due to some tests, the code that contained the glGetError check was commented out, which meant that I didn’t see the “invalid value” error that GL threw at me.
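For anyone who hits the same thing, here is the wrong call next to the right one (num_vertices and color_array are placeholders):

/* wrong: "size" is not a byte count -- GL rejects this with GL_INVALID_VALUE
   and the attribute array is never set up */
glVertexAttribPointer(3, num_vertices * 3 * sizeof(GLfloat), GL_FLOAT, GL_FALSE, 0, color_array);

/* right: "size" is the number of components per vertex (1, 2, 3 or 4) */
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, color_array);

/* and keep the error check in -- this is what I had commented out */
if (glGetError() != GL_NO_ERROR)
   printf("GL error after glVertexAttribPointer\n");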

Thanks for all your help and clarifications!

Back to speed optimizations…

Jan.

Is it possible that generic vertex attributes are not well supported on NVIDIA? I got my code working just fine on ATI now, but on a friend’s PC with a GeForce 7 I get corrupted output (missing polygons) and program crashes.

Any other ideas what could cause such behaviour? My rendering code is pretty much straightforward.

I have worked a lot with these vertex attributes - no problems on NVidia till now.

Same for me - no problems with vertex attributes on both ATI and Nvidia.