[C++] Passing array to opengl

I don’t have much experience with GLSL yet and I’ve run into a problem I can’t figure out.
I have an array that contains 4 integer values for each vertex of my mesh.
Since these values never change, I figured the best way of doing it would be by using a buffer.
Below isn’t the actual code, just a representation.
Generating the buffer (C++):


int numValues = vertexCount * 4;
int *values = new int[numValues];
for(int i = 0; i < numValues; i++)
{
	values[i] = 0; // For testing purposes
}
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, numValues * sizeof(int), &values[0], GL_STATIC_DRAW);
delete[] values; // array form of delete; plain `delete` on a new[] allocation is undefined behavior

When I retrieve the values using glGetBufferSubData they’re correct, so I’m assuming so far it’s fine.

Inside the shader (GLSL):


layout(location = 3) in int values[4];

Passing the buffer (C++):


glEnableVertexAttribArray(3);
glBindBuffer(GL_ARRAY_BUFFER,buf);
glVertexAttribPointer(
	3, // location inside the shader
	4, // 4 integer values per vertex
	GL_INT,
	GL_FALSE,
	0,
	(void*)0
);
glDrawArrays(GL_TRIANGLES,0,vertexCount);
glDisableVertexAttribArray(3);

However, as far as I can tell, the values inside the ‘values’ array in the shader are incorrect.
I wouldn’t be surprised if I did this entirely wrong, although it looks fine to me.
Any suggestions?

4, // 4 integer values per vertex

No, that’s 4 floating-point values per vertex.

glVertexAttribPointer can [i]only[/i] feed floating-point attributes. It can feed them integers, but they will be converted to floats (either with normalization or without). If you want GLSL to receive a real, true integer, then you must use glVertexAttribIPointer.

Thanks, I’ve changed that.

glEnableVertexAttribArray(3);
glBindBuffer(GL_ARRAY_BUFFER,buf);
glVertexAttribIPointer(
	3,
	4,
	GL_INT,
	0,
	(void*)0
);

And in the shader:

layout(location = 3) in int values[4];

However, the problem persists.
I’ve checked the buffer data again and it’s definitely correct, so the issue must be in either the shader code or the glVertexAttribIPointer call.

Also, the first integer inside ‘values’ appears to be correct for all vertices, it’s just the other three that are wrong.
What could be the cause of that?
I’ve checked the size of GLint and int, and they’re the same, so that should be fine. (I’ve tested it with GLint to be sure and got the same result.)

Each input variable (attribute) is essentially a 4-element vector. Attributes declared as scalars only use the first element of the vector. Matrices use one attribute per column. For attributes declared as arrays, each array element is a separate attribute (or even multiple attributes for arrays of matrices).

So the above code actually declares 4 attributes, where values[0] has index 3, values[1] has index 4, and so on. The interface would be identical had you used:

layout(location = 3) in ivec4 values[4];

and replaced values[0] with values[0].x etc.

You should probably declare values as a single ivec4 rather than as an array of int. Vectors can be accessed using subscript notation (like arrays) as well as member notation (like structures).
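A minimal sketch of that declaration (assuming the same location 3 as in your code):

```glsl
// One 4-component integer attribute instead of four scalar ones;
// components can be read as values.x/.y/.z/.w or values[0]..values[3].
layout(location = 3) in ivec4 values;
```

With this declaration, the single glVertexAttribIPointer(3, 4, GL_INT, 0, (void*)0) call from your earlier post already matches the interface.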

If you absolutely must use an array (e.g. because the size might be increased beyond 4 in future), you’ll need a separate glVertexAttribIPointer() call for each element, and each of the four attributes has to be enabled individually, e.g.:


for (int i = 0; i < 4; i++)
{
    glEnableVertexAttribArray(3 + i);
    glVertexAttribIPointer(3 + i, 1, GL_INT, 4 * sizeof(GLint), (void*)(i * sizeof(GLint)));
}