glVertexAttribIPointer and vectors

I can’t get glVertexAttribIPointer to work with shader inputs of type ivec3.

If I do glVertexAttribIPointer(0, 1, GL_BYTE, stride, offset), it works fine with a shader input variable declared as “int”. But when I do glVertexAttribIPointer(0, 3, GL_BYTE, stride, offset), I can’t get it to work with shader type “ivec3”.
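For reference, this is roughly the setup I mean (how the location is assigned, and the vbo/stride/offset values, are just placeholders for my actual code):

// GLSL: layout(location = 0) in int value;    // works with size = 1
// GLSL: layout(location = 0) in ivec3 values; // what I can’t get to work with size = 3
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
// Integer variant: no “normalized” parameter, the data reaches the shader as integers.
// offset is the byte offset of the attribute inside the buffer.
glVertexAttribIPointer(0, 3, GL_BYTE, stride, (const GLvoid*)offset);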

What am I missing?

A related question: I use the following calls with shader inputs of type “vec3” and “float”, respectively:

glVertexAttribPointer(0, 3, GL_BYTE, GL_FALSE, stride, offs);
glVertexAttribPointer(3, 1, GL_BYTE, GL_FALSE, stride, offs);

The second line will generate a warning, but not the first.

glDrawArrays uses input attribute 'VERTEX_ATTRIB[3]' which is specified as 'type = GL_BYTE size = 1'; this combination is not a natively supported input attribute type.

That is, if I change GL_BYTE in the second call to GL_FLOAT, there are no warnings. The problem can’t be GL_BYTE itself, can it, since the first call accepts it? The problematic byte is located on a 4-byte boundary, and if I move it I also get a warning about non-optimal alignment. I am using an AMD card with OpenGL 4.2.
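For context, the vertex data is laid out roughly like this (the struct and field names are only illustrative, not my real code):

struct Vertex {
    GLbyte pos[3];   // attribute 0: “vec3” in the shader, size = 3 -> no warning
    GLbyte pad;      // padding so the next field sits on a 4-byte boundary
    GLbyte light;    // attribute 3: “float” in the shader, size = 1 -> warning
    GLbyte pad2[3];  // pad the vertex to a multiple of 4 bytes
};

GLsizei stride = sizeof(Vertex);  // 8 bytes per vertex
glVertexAttribPointer(0, 3, GL_BYTE, GL_FALSE, stride, (const GLvoid*)0);  // pos at offset 0
glVertexAttribPointer(3, 1, GL_BYTE, GL_FALSE, stride, (const GLvoid*)4);  // light at offset 4, a 4-byte boundary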

I’m guessing that your GPU doesn’t like byte vectors. I know some GPUs technically support double precision, but it will be slow because they convert them to floats on the fly.

Any particular reason you need 8-bit vector attributes?

[QUOTE=Lazy Foo’;1241778]I’m guessing that your GPU doesn’t like byte vectors. I know some GPUs technically support double precision, but it will be slow because they convert them to floats on the fly.

Any particular reason you need 8-bit vector attributes?[/QUOTE]
I have a very large amount of data, so I need to pack it as efficiently as possible.

By the way, I found that defining a vector of 3 bytes is OK, but sizes of 1 or 2 bytes give the warning. I can accept that, but I find it hard to believe it’s a bug in the AMD driver that ivec3 isn’t allowed. Or am I using something unspecified?
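In other words, with the same stride and offset as before:

glVertexAttribPointer(3, 3, GL_BYTE, GL_FALSE, stride, offs); // no warning
glVertexAttribPointer(3, 2, GL_BYTE, GL_FALSE, stride, offs); // warning
glVertexAttribPointer(3, 1, GL_BYTE, GL_FALSE, stride, offs); // warning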

Define “does not work.”

[QUOTE]Or am I using something unspecified?[/QUOTE]

As far as the spec goes, it’s correct. Does anything change if you switch to glVertexAttribIPointer()?

How do you calculate stride and offset?
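For example, if the vertices live in a struct, I’d expect something along these lines (Vertex and its fields are just an assumed layout, not necessarily yours):

#include <cstddef>   // offsetof

struct Vertex {
    GLbyte pos[3];
    GLbyte pad;
    GLbyte light;
    GLbyte pad2[3];
};

const GLsizei stride = sizeof(Vertex);                         // bytes from one vertex to the next
const GLvoid* offs   = (const GLvoid*)offsetof(Vertex, light); // byte offset of the attribute within a vertex
glVertexAttribPointer(3, 1, GL_BYTE, GL_FALSE, stride, offs);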