GL_BYTE vertex attribute giving wrong data

I have a debug text rendering setup which feeds char data to a vertex/geometry/fragment shader.

On my NVIDIA GTX 460, feeding the char* data using this as the attribute description:

glVertexAttribIPointer(TEXT_LOCATION, 1, GL_BYTE, 1, 0);

and having the shader input as

layout(location = TEXT_LOCATION) in int inCharacter;

works as expected, but on my ATI card the contents of inCharacter seem to be constantly 0.
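For reference, the rest of the setup looks roughly like this (simplified; textVAO, textVBO, textString and textLength are placeholder names, and GL_POINTS is just illustrative of drawing one vertex per character for the geometry shader to expand):

//rough sketch of the per-frame update and draw
glBindVertexArray(textVAO);
glBindBuffer(GL_ARRAY_BUFFER, textVBO);
glBufferData(GL_ARRAY_BUFFER, textLength, textString, GL_DYNAMIC_DRAW);
glVertexAttribIPointer(TEXT_LOCATION, 1, GL_BYTE, 1, 0);
glEnableVertexAttribArray(TEXT_LOCATION);
glDrawArrays(GL_POINTS, 0, textLength);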

As a test I changed the vertex attribute to

glVertexAttribIPointer(TEXT_LOCATION, 1, GL_INT, 4, 0);

and manually converted the const char* data to const int* data each time I update the VBO, and that works as expected.
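The conversion is nothing clever, roughly this (intScratch, charData, charCount, MAX_TEXT_CHARS and textVBO are placeholder names):

//widen each char to a full int before uploading
int intScratch[MAX_TEXT_CHARS];
for(unsigned int i = 0; i < charCount; ++i)
    intScratch[i] = (int)charData[i];
glBindBuffer(GL_ARRAY_BUFFER, textVBO);
glBufferData(GL_ARRAY_BUFFER, charCount * sizeof(int), intScratch, GL_DYNAMIC_DRAW);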

Is it OK to feed a vertex shader data in the way I originally intended?
Is there some small detail I have missed that would make it not give the correct data on an ATI card?
This is with GL + GLSL 4.2 on both machines; the machine with the problem has an AMD A10 APU.

Thank you for any help, James

I'm not sure, but according to the spec, GL_BYTE/GL_UNSIGNED_BYTE are valid types for functions like glVertexAttribPointer and glDrawElements. But lately I've run into a very similar issue: they don't seem to work with AMD cards. I was trying to fill an ELEMENT_ARRAY buffer (for a simple quad of two triangles) with GL_UNSIGNED_BYTE indices and draw it with glDrawElements, and it failed with no error. Maybe there's a special trick to make it work, or maybe AMD actually follows the spec here and I was doing something wrong. I didn't care much because I didn't NEED to use bytes.
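What I was doing was roughly this (simplified; quadIBO is a placeholder name):

//ubyte indices for a 2-triangle quad
const GLubyte indices[6] = { 0, 1, 2, 2, 3, 0 };
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadIBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0);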

So I am not alone. Thanks, Nowhere-01.

I had a look through the spec and couldn’t see any problems. Maybe this is a driver issue?

I think I got closer to the cause of the problem.
I wrote this rather silly vertex shader to test things out:


//in the shader
#version 420 core
layout(location = 5) in uint inByte;
out vec3 colour;
void main()
{
    if(inByte==0)
    {
        colour = vec3(1,0,0);
        gl_Position = vec4(-0.5, -0.5, 0 ,1.0);
    }
    else if(inByte==1)
    {
        colour = vec3(0,1,0);
        gl_Position = vec4(0.5, -0.5, 0 ,1.0);
    }
    else
    {
        colour = vec3(0,0,1);
        gl_Position = vec4(0.0, 0.5, 0 ,1.0);
    }
}
...
//in the C program
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, ab_byte);
glVertexAttribIPointer(5, 1, GL_UNSIGNED_BYTE, sizeof(unsigned char), BUFFER_OFFSET(0));
glEnableVertexAttribArray(5);

I call glDrawArrays(GL_TRIANGLES, 0, 3); with a single VBO filled with const unsigned char[] = {0, 1, 2}; and it renders as I expect on NV, but I get nothing on ATI.
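For completeness, ab_byte is just filled once with glBufferData, roughly:

//upload the three byte values
const unsigned char bytes[3] = { 0, 1, 2 };
glBindBuffer(GL_ARRAY_BUFFER, ab_byte);
glBufferData(GL_ARRAY_BUFFER, sizeof(bytes), bytes, GL_STATIC_DRAW);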

However, if I create a second dummy VBO filled with const float[] = {0, 0, 0} and set up the VAO like this:

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, ab_byte);
glVertexAttribIPointer(5, 1, GL_UNSIGNED_BYTE, sizeof(unsigned char), BUFFER_OFFSET(0));
//dummy buffer
glBindBuffer(GL_ARRAY_BUFFER, ab);
glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, sizeof(float), BUFFER_OFFSET(0));
glEnableVertexAttribArray(5);
glEnableVertexAttribArray(0);

it then also renders on the ATI card, even though I am not using the dummy VBO data at all in the shader.
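In case it matters, the dummy buffer is created like this (simplified):

//dummy float data, never read by the shader
const float dummy[3] = { 0.0f, 0.0f, 0.0f };
glGenBuffers(1, &ab);
glBindBuffer(GL_ARRAY_BUFFER, ab);
glBufferData(GL_ARRAY_BUFFER, sizeof(dummy), dummy, GL_STATIC_DRAW);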

So it seems like my ATI card doesn’t like a VAO that contains nothing but byte attribute data.
