Yezu666

05-12-2011, 08:00 AM

Hello!

I'm trying to write a fairly simple per-fragment lighting shader with a single moving light source. I'm using OpenGL 2.1 and GLSL 1.20. I'm having some weird problems and starting to get really lost.

I'm manually using a separate view matrix and model matrix instead of the built-in gl_ModelViewMatrix. The idea was to easily calculate all the lighting effects independently of the camera position.

The geometry itself seems to render correctly, but the normal calculation gives me bogus results.

The current code is:

Vertex Shader:

varying vec3 v_V;
varying vec3 v_N;
varying vec3 v_V2;
varying vec3 v_N2;

uniform mat4 viewMat;
uniform mat4 modelMat;

void main()
{
    gl_Position = gl_ProjectionMatrix * viewMat * modelMat * gl_Vertex;

    /* Model space. */
    v_V = (modelMat * gl_Vertex).xyz;
    v_N = normalize(mat3(transpose(modelMat)) * gl_Normal);

    /* Eye space. */
    v_V2 = (viewMat * modelMat * gl_Vertex).xyz;
    v_N2 = normalize(vec3(transpose(viewMat * modelMat) * vec4(gl_Normal, 0.0)));

    gl_TexCoord[0] = gl_MultiTexCoord0;
}

Fragment Shader:

varying vec3 v_V;
varying vec3 v_N;
varying vec3 v_V2;
varying vec3 v_N2;

uniform sampler2D tex;

void main()
{
    vec4 texel = texture2D(tex, gl_TexCoord[0].st);
    if(texel.a == 0.0)
        discard;

    vec3 N = normalize(v_N);
    vec3 R = reflect(normalize(v_V2), normalize(v_N2));
    /* .xyz needed here: position is a vec4, v_V is a vec3. */
    vec3 L = normalize(gl_LightSource[0].position.xyz - v_V);

    vec4 ambient = gl_FrontMaterial.ambient * gl_LightModel.ambient
                 + gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
    vec4 diffuse = gl_FrontMaterial.diffuse * max(dot(L, N), 0.0) * gl_LightSource[0].diffuse;
    vec4 color = ambient + diffuse;

    /* I guess I should optimize this. */
    if(gl_FrontMaterial.shininess > 0.0)
    {
        vec4 specular = gl_FrontMaterial.specular
                      * pow(max(dot(R, L), 0.0), gl_FrontMaterial.shininess)
                      * gl_LightSource[0].specular;
        color += specular;
    }

    gl_FragColor = texel * color;
}

I know that the problematic part is

v_N = normalize(mat3(transpose(modelMat)) * gl_Normal);

because normally I'd use gl_NormalMatrix, but I'm trying to calculate the equivalent manually.
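For reference, what gl_NormalMatrix actually is: the transpose of the *inverse* of the upper-left 3x3 of the modelview matrix, not the bare transpose. A quick CPU-side sketch in plain Python (a stand-in for the GLSL math; the helper names are made up for illustration):

```python
def upper3x3(m4):
    # take the upper-left 3x3 of a 4x4 (list-of-rows) matrix
    return [row[:3] for row in m4[:3]]

def inverse3(m):
    # 3x3 inverse via the adjugate; assumes m is invertible
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def transpose3(m):
    return [list(col) for col in zip(*m)]

def normal_matrix(model_view4):
    # gl_NormalMatrix = transpose(inverse(upper-left 3x3 of modelview))
    return transpose3(inverse3(upper3x3(model_view4)))

# Non-uniform scale (2, 1, 1): the normal matrix is NOT the matrix itself.
scale = [[2.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]]
print(normal_matrix(scale))  # -> [[0.5, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```

The inverse-transpose compensates for scaling: the X components of normals get divided by 2 where positions get multiplied by 2, which keeps the normals perpendicular to the surface.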

The thing is, to my knowledge, if there were no scaling transformations, then

v_N = normalize(mat3(modelMat) * gl_Normal);

would be correct. However, in this case the end result looks as if the normals were not transformed at all.
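That understanding checks out numerically. A quick stand-in check in plain Python (using a hypothetical 90-degree Z rotation, no scaling): for a pure rotation R, inverse(R) equals transpose(R), so the proper normal matrix transpose(inverse(R)) is just R itself, while a bare transpose applies the *inverse* rotation:

```python
import math

def rot_z(a):
    # 3x3 rotation about the Z axis, as a list of rows
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def transpose(m):
    return [list(col) for col in zip(*m)]

def mul_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

R = rot_z(math.pi / 2)          # rotate +90 degrees about Z
n = [1.0, 0.0, 0.0]             # a normal pointing along +X

# Correct: transform the normal with R itself (rotation-only model matrix).
print([round(x, 6) for x in mul_vec(R, n)])             # [0.0, 1.0, 0.0]

# What a bare transpose(R) does: applies the INVERSE rotation.
print([round(x, 6) for x in mul_vec(transpose(R), n)])  # [0.0, -1.0, 0.0]
```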

Now using

v_N = normalize(mat3(transpose(modelMat)) * gl_Normal);

gives more reasonable results, but I still get errors:

As a test case, I rotate the model around the global Z axis by a variable angle and then by -90 degrees around the global X axis, so that its normals should point to (0.0, 1.0, 0.0) while the model appears to rotate around the global Y axis. However, instead of the normals staying constant at (0.0, 1.0, 0.0), they cycle from (0.0, -1.0, 0.0) to (-1.0, 0.0, 0.0) to (0.0, 1.0, 0.0) to (1.0, 0.0, 0.0).
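A numeric sketch in plain Python (standing in for the GLSL, and *assuming* the model matrix is composed as Rx(-90) * Rz(a), i.e. the Z rotation applied first, then the global X rotation) reproduces exactly that cycle when a bare transpose is used in place of the matrix itself:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0],
            [0.0,   c,  -s],
            [0.0,   s,   c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[  c,  -s, 0.0],
            [  s,   c, 0.0],
            [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(m):
    return [list(col) for col in zip(*m)]

n = [0.0, 0.0, 1.0]  # object-space normal pointing along +Z
for deg in (0, 90, 180, 270):
    # global-axis composition: Z rotation first, then -90 degrees about X
    M = mat_mul(rot_x(math.radians(-90.0)), rot_z(math.radians(deg)))
    good = [round(x, 6) for x in mat_vec(M, n)]            # stays (0, 1, 0)
    bad  = [round(x, 6) for x in mat_vec(transpose(M), n)] # cycles backwards
    print(deg, good, bad)
```

Under that assumption, the "good" column is constant at (0, 1, 0) while the "bad" column steps through (0, -1, 0), (-1, 0, 0), (0, 1, 0), (1, 0, 0) as the angle increases, matching the cycle described above: the transpose of a rotation is its inverse, so the normals rotate the opposite way.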

Does anyone know what I am doing wrong? Any hints on the kind of shader I am trying to achieve?

Thanks a lot :)
