Single Values in Shaders

So I imagine I have sent my data to the GPU properly:
int numbers[] = {1, 1, 1};
unsigned int vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(numbers), numbers, GL_STATIC_DRAW);
glEnableVertexAttribArray(6);
glVertexAttribPointer(6,        // attribute location
                      1,        // one component per vertex
                      GL_INT,   // source type
                      GL_FALSE, // not normalised
                      0,        // tightly packed
                      0);       // offset into the bound GL_ARRAY_BUFFER

Now, this is my shader:

layout(location = 0) in vec3 pos;
layout(location = 1) in vec3 color;
layout(location = 2) in mat4 transformation; // a mat4 input occupies locations 2-5
layout(location = 6) in int multiple;

out vec3 out_color;

void main() {
    vec4 scaled = transformation * vec4(pos * scale * multiple, 1);
    gl_Position = scaled;
    out_color = color;
}

Am I accessing “multiple” incorrectly? I expect multiple to be 1, but when I remove “multiple” and replace it with a literal 1 I get a different result. I would expect them to be the same.

You need to use glVertexAttribIPointer (note the “I”) for integer attributes.
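For reference, the corrected setup would look roughly like this (a sketch keeping your location 6 and tightly packed layout; note the I variant has no “normalized” parameter):

glEnableVertexAttribArray(6);
glVertexAttribIPointer(6,       // attribute location
                       1,       // one component per vertex
                       GL_INT,  // source type
                       0,       // tightly packed
                       0);      // offset into the bound GL_ARRAY_BUFFER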

Then WHY, ON EARTH, would GL_INT be accepted there at all?

Thanks so much, btw. I ask because I noticed it worked with floats.

Ints are valid source data. They’re either normalised (signed values mapped to the range -1.0 to +1.0, unsigned to 0.0 to 1.0) or unnormalised (converted directly to float).

But if you want to pass ints to an int attribute, you need the “I” version.
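To make the three behaviours concrete, here’s a sketch of the same GL_INT data set up each way (attribute 6, tightly packed, as above):

// Normalised: converted to float, INT_MAX arrives as ~1.0; shader input is float/vec.
glVertexAttribPointer(6, 1, GL_INT, GL_TRUE, 0, 0);
// Unnormalised: converted to float, 3 arrives as 3.0; shader input is float/vec.
glVertexAttribPointer(6, 1, GL_INT, GL_FALSE, 0, 0);
// Integer: passed through unchanged, 3 arrives as 3; shader input is int/ivec.
glVertexAttribIPointer(6, 1, GL_INT, 0, 0);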

Septima, you’re right. This is confusing the first time you see it, and I remember tripping over it myself years back.

The source type is a parameter. Whether to normalise during the conversion is a parameter. But the destination type class is a whole different function (?) Ehh, whatever. It works.

I don’t know for sure, but I suspect this developed because OpenGL was extended over so many years and had to preserve backward compatibility with existing APIs (names and arguments).

Indeed. glVertexAttribIPointer() was added in OpenGL 3.0, which corresponds to GLSL 1.30, which was the first version to allow attributes (vertex shader inputs) to be integers.

Given that the program (if any) which is bound at the time of a call to glVertexAttribPointer() isn’t necessarily the same one which will be bound at the time of the next draw call, an implementation can’t simply query the type of the corresponding attribute (vertex shader input) in the current program to determine whether values should be converted to floats or left as integers.

In theory, the decision could be deferred until the next draw call, but that may have consequences for both efficiency and driver complexity.

It would also have complicated the glGetVertexAttrib() interface. While the existing attribute properties are static (i.e. there’s a fairly direct correspondence between functions which set the attribute array state and the state returned by queries), the “unconverted integer” flag returned by GL_VERTEX_ATTRIB_ARRAY_INTEGER would change according to which program is bound.
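For context, that per-attribute flag is queried like this (a sketch for attribute 6):

GLint isInteger = 0;
// 1 if attribute 6 was set up with glVertexAttribIPointer, 0 if with glVertexAttribPointer.
glGetVertexAttribiv(6, GL_VERTEX_ATTRIB_ARRAY_INTEGER, &isInteger);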

If integer attributes had been supported from the start, glVertexAttribPointer() would either have an extra flag or the “normalized” parameter would have three options (normalized, unnormalized, integer) rather than two. Similar issues exist for double-precision (i.e. the need for a separate glVertexAttribLPointer() function).

To be fair, if integer vertex formats had been done like that, you simply wouldn’t have that query. That information wouldn’t be stored in the VAO; it would come from the program. So you’d query the program to see what the type of the vertex attribute is; if it’s an integer type, then clearly the corresponding attribute in the VAO provides a proper integer.
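That program-side query already exists, roughly like this (a sketch; “program” and “attribIndex” are placeholders, and the index here is the active-attribute index rather than the location):

GLint  size;
GLenum type;
GLchar name[64];
// 'type' comes back as the declared GLSL type, e.g. GL_INT or GL_FLOAT_VEC3.
glGetActiveAttrib(program, attribIndex, sizeof(name), NULL, &size, &type, name);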

I’d say that the double-precision issues are worse. At least with integer formats, there is a well-defined and useful purpose in passing integers that get cast directly to floats (you get to use smaller source data while still treating them as floats). So if they changed the behavior of this code, they’d be removing legitimate behavior.

Any use of GL_DOUBLE with glVertexAttribPointer is basically a one-way trip to No Performanceville. So if they changed the meaning of it, they would only be breaking people who had been doing stupid things. While that would be technically a backwards-incompatible change, it would have affected pretty much nobody. And anyone who was broken by it would probably be grateful to find out why their code was so slow.
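To make the contrast concrete, a sketch of the two double-precision paths (the L variant needs GL 4.1 or ARB_vertex_attrib_64bit; location 3 and size 4 are just examples):

// Legacy path: GL_DOUBLE source data is converted to single-precision float on transfer.
glVertexAttribPointer(3, 4, GL_DOUBLE, GL_FALSE, 0, 0);   // shader input: vec4
// L variant: values stay double precision; only GL_DOUBLE is accepted.
glVertexAttribLPointer(3, 4, GL_DOUBLE, 0, 0);            // shader input: dvec4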