GLSL skinning - weird bug with casting



blankoda
08-02-2010, 09:10 AM
Hi, I'm here to get help with a strange behaviour of my GLSL code when I cast a float to an int; I have never seen a bug like this since I started using GLSL.

Actually I'm trying to do mesh skinning on the GPU with GLSL.

I use an ATI Radeon HD 4850 (Gainward) and I work with OpenGL 2.1 on Windows.

So on the CPU side I gather the bone indices and weights and send them to the shader as vertex attributes;
then I blend the bone matrices with the weights and use the result to compute the normal and gl_Position.

Nothing unusual so far.
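
A minimal sketch of that kind of CPU-side setup (not the original code; the helper name, struct layout, and buffer handles are assumptions, and GLEW is assumed for function loading):


#include <GL/glew.h>
#include <stddef.h>

/* Per-vertex skinning data gathered on the CPU (layout is an assumption). */
typedef struct {
    GLuint  boneIndices[4];  /* indices into the bones[] uniform array */
    GLfloat boneWeights[4];  /* corresponding blend weights */
} SkinVertex;

void upload_skinning_data(GLuint program, GLuint vbo,
                          const SkinVertex *verts, GLsizei vertexCount,
                          const GLfloat boneMatrices[4 * 16])
{
    /* Upload the per-vertex indices and weights into one VBO. */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(SkinVertex),
                 verts, GL_STATIC_DRAW);

    /* Bind the attributes used by the vertex shader (assumes they are active). */
    GLuint idxLoc = (GLuint)glGetAttribLocation(program, "v_boneIndices0");
    GLuint wgtLoc = (GLuint)glGetAttribLocation(program, "v_boneWeights0");

    glEnableVertexAttribArray(idxLoc);
    glVertexAttribPointer(idxLoc, 4, GL_UNSIGNED_INT, GL_FALSE,
                          sizeof(SkinVertex),
                          (const void *)offsetof(SkinVertex, boneIndices));

    glEnableVertexAttribArray(wgtLoc);
    glVertexAttribPointer(wgtLoc, 4, GL_FLOAT, GL_FALSE,
                          sizeof(SkinVertex),
                          (const void *)offsetof(SkinVertex, boneWeights));

    /* The four bone matrices go in through the "bones" uniform array. */
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "bones"),
                       4, GL_FALSE, boneMatrices);
}


Note that the sketch feeds the indices through glVertexAttribPointer with GL_UNSIGNED_INT as the data type; whether that is the right way to supply integer indices is exactly what the rest of the thread is about.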

The problem comes when I have to look up the bone matrices from the bone indices.

I work with a simple cylinder mesh skinned with 4 bones. I send the bones from the CPU via a uniform variable, and the problem doesn't seem to come from these bone matrices: I hardcoded them in the shader and the bug remains.
By the way, the shape is correct when I compare it with the Maya rendering.

Moreover, the bone indices are the same for every vertex: 0, 1, 2, 3. They are sent with the GL_UNSIGNED_INT type via VBOs, but in the shader a uvec4 attribute doesn't seem to work (neither does ivec4), so I have to use a float vec4 and cast with int().

So here is my vertex shader code:



uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;
uniform mat3 NormalMatrix;

uniform mat4 bones[4];

varying vec3 normal;

attribute vec3 v_pos;
attribute vec3 v_normal;

// the first 4 bone indices of the vertex
attribute vec4 v_boneIndices0;
// the first 4 bone weights of the vertex
attribute vec4 v_boneWeights0;

void main()
{
    /* To compare with the bones sent via the uniform, I hardcoded the bone
       matrices in this array; it doesn't seem to make a difference.

    mat4 mattab[4];

    mattab[0] = mat4(1.0f, 0.0f, 0.0f, 0.0f,
                     0.0f, 1.0f, 0.0f, 0.0f,
                     0.0f, 0.0f, 1.0f, 0.0f,
                     0.0f, 0.0f, 0.0f, 1.0f);

    mattab[1] = mat4(1.0f,  0.0f,       0.0f,      0.0f,
                     0.0f,  0.816489f,  0.57736f,  0.0f,
                     0.0f, -0.57736f,   0.816489f, 0.0f,
                     0.0f, -0.341108f,  1.07319f,  1.0f);

    mattab[2] = mat4( 0.999949f,   -0.00863831f,  0.00528692f, 0.0f,
                     -0.00400147f,  0.142574f,    0.989776f,   0.0f,
                     -0.00930377f, -0.989746f,    0.142532f,   0.0f,
                      0.00201152f, -0.00111439f,  0.865123f,   1.0f);

    mattab[3] = mat4( 0.799844f, -0.0233416f, -0.599754f, 0.0f,
                      0.448905f,  0.686556f,   0.571949f, 0.0f,
                      0.398414f, -0.726702f,   0.559616f, 0.0f,
                     -1.23581f,  -1.4976f,     2.0412f,   1.0f);
    */

    // the weighted average of the bone matrices
    mat4 mat = mat4(0.0f);

    for (int i = 0; i < 4; i++)
    {
        // normally, the two lines below are equivalent
        // (v_boneIndices0 is always 0, 1, 2, 3)

        mat += v_boneWeights0[i] * bones[int(v_boneIndices0[i])];
        //mat += v_boneWeights0[i] * bones[i];
    }

    // Here, I multiply the blended matrix with v_normal and v_pos

    normal = NormalMatrix * (mat3(mat[0].xyz, mat[1].xyz, mat[2].xyz) * v_normal);

    gl_Position = ProjectionMatrix * ModelViewMatrix * (mat * vec4(v_pos, 1.0f));
}



The bug shows up when I look at the FPS rates; without skinning, I get 2000 FPS.


When I use the line


mat += v_boneWeights0[i] * bones[i];

I get the right shape, with some artefacts, and the FPS stays at 2000.

But when I use the line


mat += v_boneWeights0[i] * bones[int(v_boneIndices0[i])];

(which should be equivalent to the other one), the shape is perfect but the FPS collapses to 1500...

I thought it was the line itself that slowed down the loop; however, if I keep this line but remove mat from the normal and gl_Position computations, the FPS goes back up to 2000.

Another weird behaviour: if I compute


vec4 test = ProjectionMatrix*ModelViewMatrix*(mat*vec4(v_pos, 1.0f));

just after the loop, and compute the normal and gl_Position without the mat multiplication:



normal = NormalMatrix * v_normal;
gl_Position = ProjectionMatrix*ModelViewMatrix* vec4(v_pos, 1.0f);

the FPS also goes back up to 2000.

It's as if the bug only appears when I combine


mat += v_boneWeights0[i] * bones[int(v_boneIndices0[i])];

and

normal = NormalMatrix * (mat3(mat[0].xyz,mat[1].xyz,mat[2].xyz) * v_normal);
gl_Position = ProjectionMatrix*ModelViewMatrix*(mat*vec4(v_pos, 1.0f));


I think it's a hardware bug, or maybe I'm doing something wrong; I don't know.
But I'm sure the problem is related to the int() cast.
And by the way, why don't uvec4 and ivec4 work?

Has anyone encountered this kind of behaviour with Nvidia? I found nothing on Google.

I hope you'll be able to help me.

Thanks :)

blankoda
08-02-2010, 02:13 PM
I found a solution, though not the best one:

I declared the bone indices VBO with the GL_FLOAT type instead of
GL_UNSIGNED_INT.

But ivec4 and uvec4 are still an issue.
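
A minimal sketch of that workaround (not the original code; the helper name, VBO handle, and attribute location are assumptions):


#include <GL/glew.h>
#include <stdlib.h>

/* Workaround sketch: convert the bone indices to floats on the CPU so the
   data type of the VBO matches the float vec4 attribute in the shader. */
void upload_bone_indices_as_float(GLuint vbo, GLuint idxLoc,
                                  const GLuint *boneIndices, GLsizei vertexCount)
{
    GLfloat *indicesF = (GLfloat *)malloc(sizeof(GLfloat) * 4 * vertexCount);
    for (GLsizei i = 0; i < 4 * vertexCount; ++i)
        indicesF[i] = (GLfloat)boneIndices[i];

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 4 * vertexCount,
                 indicesF, GL_STATIC_DRAW);
    free(indicesF);

    /* GL_FLOAT now matches what the shader declares, so no conversion or
       reinterpretation happens on the driver side. */
    glEnableVertexAttribArray(idxLoc);
    glVertexAttribPointer(idxLoc, 4, GL_FLOAT, GL_FALSE, 0, (const void *)0);
}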

Dark Photon
08-02-2010, 04:02 PM
attribute vec4 v_boneIndices0;
...
Moreover, the bones indices for each vertex ... they're sent with a GL_UNSIGNED_INT type via VBO's; but in the shader the uvec4 doesn't seem to work (neither ivec4)
...
why uvec4 and ivec4 don't work ? ... has anyone encountered this kind of behaviour with Nvidia ?
I haven't used them, but what makes you think so?

Do you realize that there are separate attribute pointer set calls (glVertexAttribIPointer) for integer attributes?

* glVertexAttribPointer* (http://www.opengl.org/sdk/docs/man3/xhtml/glVertexAttribPointer.xml) (GL3.3)
* glVertexAttribPointer* (http://www.opengl.org/sdk/docs/man4/xhtml/glVertexAttribPointer.xml) (GL4.1)
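
For completeness, a minimal sketch of the integer path those man pages describe (not code from the thread; it needs OpenGL 3.0 or EXT_gpu_shader4, so it is not available in a plain 2.1 context, and the helper name and handles are assumptions):


#include <GL/glew.h>

/* Integer attribute setup: the indices stay integers all the way to the
   shader, which can then declare e.g. "in ivec4 v_boneIndices0;" (with
   #version 130 or later) and index bones[] without any float-to-int cast. */
void bind_integer_bone_indices(GLuint vbo, GLuint idxLoc)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(idxLoc);

    /* Note the "I" variant: the plain glVertexAttribPointer always hands the
       values to the shader as floats, which would explain why the ivec4/uvec4
       attribute appeared not to work. */
    glVertexAttribIPointer(idxLoc, 4, GL_UNSIGNED_INT, 0, (const void *)0);
}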