Okay, I have custom vertex attributes I want to store for the bone index and weight. Actually, in this case I don't even need the weight, since I'm only using one bone per vertex.
I have an array of bone indices in system memory that goes with the vertex array. How do I send that data to the GPU as a vertex attribute? I have stored them as bytes, and I just have a rule that no mesh can have more than 256 bones.
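A sketch of the client-side setup (assuming `prog` is your already-linked program object and `boneIndices` is the byte array; the names are illustrative, not from your code):

```c
/* One unsigned byte per vertex, sent as a generic vertex attribute. */
GLint loc = glGetAttribLocationARB(prog, "BoneIndice");
if (loc != -1) {   /* -1 means the attribute is not active, see below */
    glEnableVertexAttribArrayARB(loc);
    /* normalized = GL_TRUE: the shader sees index / 255.0, so multiply
       by 255.0 in GLSL to recover the integer bone index.  If you want
       to pin the attribute to a particular index (>= 1), call
       glBindAttribLocationARB before linking instead of querying here. */
    glVertexAttribPointerARB(loc, 1, GL_UNSIGNED_BYTE, GL_TRUE, 0, boneIndices);
}
```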
Attribute variable name-to-generic attribute index bindings can be specified at any time by calling glBindAttribLocationARB. Attribute bindings do not go into effect until glLinkProgramARB is called. Once a program object has been linked successfully, the index values for attribute variables remain fixed (and their values can be queried) until the next link command occurs.
Generic vertex attribute 0 is unique in that it has no current state; it is used to indicate the completion of a vertex, just like a call to glVertex. In short, don't use the attribute at index 0 for this.
glGetAttribLocationARB queries the previously linked program object specified by program for the attribute variable specified by name, and returns the index of the generic vertex attribute that is bound to that attribute variable. If name is a matrix attribute variable, the index of the first column of the matrix is returned. If the named attribute variable is not an active attribute in the specified program object, or if name starts with the reserved GLSL prefix gl_, a value of -1 is returned.
An attribute variable (either built-in or user-defined) is considered active if it is determined during the link operation that it may be accessed during program execution.
Like I said, at link time the compiler determined that BoneIndice is not an active attribute: you're assigning it to a temporary variable whose value never reaches the shader's output, so the attribute gets optimized away. Assign it instead to a varying variable that you actually read in your fragment shader. Here's an example where it returns the correct value:
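A minimal sketch of that fix (vBoneIndice is an illustrative varying name; the attribute keeps your BoneIndice name):

```glsl
/* --- vertex shader --- */
attribute float BoneIndice;   /* bind this to a generic index >= 1 */
varying float vBoneIndice;    /* forwarded to the fragment shader */

void main()
{
    vBoneIndice = BoneIndice;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

/* --- fragment shader --- */
varying float vBoneIndice;

void main()
{
    /* Reading the varying here keeps BoneIndice "active" after linking,
       so glGetAttribLocationARB no longer returns -1. */
    gl_FragColor = vec4(vBoneIndice / 255.0, 0.0, 0.0, 1.0);
}
```

In a real skinning shader you would of course use the index to look up a bone matrix in the vertex stage rather than output it as a color; this is just the smallest shader pair in which the attribute survives linking.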
Okay, let’s say I wanted to allow up to 4 bones per vertex.
Is there any way to pack four byte values into a 32-bit integer and read it in the shader? Obviously I could do the packing on the CPU, but is unpacking a good idea on the GPU? Or should I just use a vec4 float attribute and upload four times as much data?
Sure, you can send them the same way you send a color value: a 32 bpp value where four 8-bit channels are packed into one 32-bit word. gl_Color is a built-in attribute defined as a vec4, so when you access it in the shader the channels have been converted to floating point in the range [0.0, 1.0]. You just have to multiply by 255 to get the integer values back.
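If you go the gl_Color route, the client-side setup could look like this (again, `boneIndices` is an assumed CPU-side array with 4 bytes per vertex):

```c
/* Sketch: four bone indices per vertex, packed as 4 unsigned bytes and
   sent through the fixed-function color array. */
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, boneIndices);
/* In the vertex shader, gl_Color.xyzw now holds the four indices,
   each normalized to [0.0, 1.0]. */
```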
Indeed, with glVertexAttribPointer you can set the size to 4 and the type to GL_UNSIGNED_BYTE. Within the shader, however, they are interpreted as floats, just as fetching a GL_RGBA8 pixel whose original unsigned bytes were 255, 255, 127, 0 yields 1.0, 1.0, 0.5, 0.0 when accessing gl_Color. So int(gl_Color.z * 255.0 + 0.5) gets you back the original value of 127 (the + 0.5 guards against the product landing just below 127 due to floating-point rounding).