Dragon

12-28-2014, 01:43 PM

Banged my head against the wall for hours trying to figure out why a shader didn't work, only to discover that GLSL apparently cannot handle integer vertex input attributes, although the specs clearly state that it should. Take this code:

glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 16, pointer );
glVertexAttribPointer( 1, 1, GL_INT, GL_FALSE, 16, pointer+12 );

This defines a vec3 input attribute and an int input attribute. The shader looks like this:

#version 140

uniform samplerBuffer texWeightMatrices;

in vec3 inPosition;
in int inWeight;

out vec3 outPosition;

void main( void ){
    vec4 row1 = texelFetch( texWeightMatrices, inWeight*3 );
    vec4 row2 = texelFetch( texWeightMatrices, inWeight*3 + 1 );
    vec4 row3 = texelFetch( texWeightMatrices, inWeight*3 + 2 );
    outPosition = vec4( inPosition, 1.0 ) * mat3x4( row1, row2, row3 );
    gl_Position = vec4( 0.0, 0.0, 0.0, 1.0 );
}

This fails completely, resulting in wrong values being written to the result VBO. Doing this instead, on the other hand:

// same as above
in float inWeight;
// same as above
int weight = int( inWeight ) * 3; // and now using weight instead of inWeight*3
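
Pieced together, the working float-attribute variant of the shader reads as follows (this is just the two listings above combined, nothing new; only the attribute type and the added cast differ from the failing version):

#version 140

uniform samplerBuffer texWeightMatrices;

in vec3 inPosition;
in float inWeight; // float instead of int

out vec3 outPosition;

void main( void ){
    int weight = int( inWeight ) * 3; // cast back to int in the shader
    vec4 row1 = texelFetch( texWeightMatrices, weight );
    vec4 row2 = texelFetch( texWeightMatrices, weight + 1 );
    vec4 row3 = texelFetch( texWeightMatrices, weight + 2 );
    outPosition = vec4( inPosition, 1.0 ) * mat3x4( row1, row2, row3 );
    gl_Position = vec4( 0.0, 0.0, 0.0, 1.0 );
}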

This works correctly. According to the GLSL spec, section "4.3.4 Inputs", though, the first version should have been correct as well:

Vertex shader inputs can only be float, floating-point vectors, matrices, signed and unsigned integers and integer vectors. They cannot be arrays or structures.

What's going on here? Why can't I use "in int" even though the spec explicitly allows it?
