Per-fragment lighting shader problems

Hello!
I’m trying to write a rather simple per-fragment lighting shader with a single moving light source. I’m using OpenGL 2.1 and GLSL 1.20. I’m having some weird problems and I’m starting to get really lost.

I’m manually using a separate view matrix and model matrix instead of the built-in gl_ModelViewMatrix. The idea was to easily calculate all lighting effects while ignoring the camera position.

The geometry itself seems to render correctly, but the normal calculation gives me bogus results.

The current code is:

Vertex Shader:

varying vec3 v_V;
varying vec3 v_N;
varying vec3 v_V2;
varying vec3 v_N2;
uniform mat4 viewMat;
uniform mat4 modelMat;

void main()
{
	gl_Position = gl_ProjectionMatrix * viewMat * modelMat * gl_Vertex;
	
	/* Model space. */
	v_V = (modelMat * gl_Vertex).xyz;	
	v_N = normalize(mat3(transpose(modelMat)) * gl_Normal);
	
	/* Eye space. */
	v_V2 = (viewMat * modelMat * gl_Vertex).xyz;
	v_N2 = normalize(vec3(transpose(viewMat * modelMat) * vec4(gl_Normal,0.0)));
	
	gl_TexCoord[0] = gl_MultiTexCoord0;
}

Fragment Shader:

varying vec3 v_V;
varying vec3 v_N;
varying vec3 v_V2;
varying vec3 v_N2;
uniform sampler2D tex;

void main()
{
	vec4 texel = texture2D(tex,gl_TexCoord[0].st);
	
	if(texel.a == 0.0)
		discard;
	{
		vec3 N = normalize(v_N);
		vec3 R = reflect(normalize(v_V2), normalize(v_N2));
	vec3 L = normalize(gl_LightSource[0].position.xyz - v_V);
	
		vec4 ambient = gl_FrontMaterial.ambient * gl_LightModel.ambient + gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
		vec4 diffuse = gl_FrontMaterial.diffuse * max(dot(L, N), 0.0) * gl_LightSource[0].diffuse;

		vec4 color = ambient + diffuse;
		
	/* I guess I should optimize this. */
		if(gl_FrontMaterial.shininess > 0.0)
		{
			vec4 specular = gl_FrontMaterial.specular * pow(max(dot(R, L), 0.0), gl_FrontMaterial.shininess) * gl_LightSource[0].specular;
			color += specular;
		}
		
		gl_FragColor = texel * color;		
	}
}

I know that the problematic part is

v_N = normalize(mat3(transpose(modelMat)) * gl_Normal);

Because normally I’d use gl_NormalMatrix, but I’m trying to calculate the equivalent manually.

The thing is that, to my knowledge, if there were no scaling transformations

v_N = normalize(mat3(modelMat) * gl_Normal);

would be correct. However in this case the end result looks as if the normals are not transformed at all.
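
(For reference: the mathematically correct transform for normals is the inverse transpose of the upper-left 3x3 of the model matrix, i.e. transpose(inverse(mat3(modelMat))). For a pure rotation R, inverse(R) equals transpose(R), so transpose(inverse(R)) is just R again - which is why plain mat3(modelMat) should be enough when there is no scaling.)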

Now using

v_N = normalize(mat3(transpose(modelMat)) * gl_Normal);

gives more reasonable results, but I still get errors:
the model matrix rotates the mesh along the global Z axis by a variable angle and then by -90 degrees along the global X axis, so that its normals should constantly point to (0.0, 1.0, 0.0) while the mesh rotates along the global Y axis. However, the effect is that instead of the normals staying at (0.0, 1.0, 0.0), they cycle from (0.0, -1.0, 0.0) to (-1.0, 0.0, 0.0) to (0.0, 1.0, 0.0) to (1.0, 0.0, 0.0).

Does anyone know what I am doing wrong? Does anyone have any hints on the kind of shader I am trying to achieve?

Thanks a lot! :)

Firstly, you are doing the right thing (future-proofing) by supplying your own model and view matrices - but then you start to use the built-in gl_ProjectionMatrix!
Although there’s nothing wrong with this, you may as well pass in your own projection matrix too, IMHO.

gl_Position = gl_ProjectionMatrix * viewMat * modelMat * gl_Vertex;
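
For example, a minimal sketch of a fully explicit transform chain (the uniform name projMat is just a placeholder for a projection matrix you would upload yourself):

uniform mat4 projMat;
uniform mat4 viewMat;
uniform mat4 modelMat;

void main()
{
	gl_Position = projMat * viewMat * modelMat * gl_Vertex;
}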

To do what you need to do in these shaders, you need to declare #version 120 at the top of both shaders, e.g.:

#version 120


v_N = normalize(mat3(transpose(modelMat)) * gl_Normal);

I think you can simplify with the following in your vertex shader:


varying vec3 _Normal;

uniform mat4 viewMat;
uniform mat4 modelMat;

void main()
{
	mat4 _modelView = viewMat * modelMat;
	mat4 _modelViewProj = gl_ProjectionMatrix * _modelView;

	mat3 _modelView33 = mat3(_modelView);		// casting mat4 to mat3 requires version 120
	mat3 _normalMatrix = transpose(_modelView33);

	gl_Position = _modelViewProj * gl_Vertex;
	_Normal = normalize(_normalMatrix * gl_Normal);
}

you won’t then need:
varying vec3 v_V;
varying vec3 v_N;
varying vec3 v_V2;
varying vec3 v_N2;

I know that the problematic part is

There’s a lot more wrong with this than that.

First, why are you computing the world-space position and normal? You only need the position and normal in view/eye/camera space.

Second, your normal transform is wrong. If you’re expecting non-uniform scales in your model-view matrix, then you need to transform the normal by the inverse transpose of the model-view matrix. And you shouldn’t be doing inverses or transposes in GLSL to begin with; compute it on the CPU and pass it up as a separate 3x3 matrix.

Third, your fragment shader logic is confused. For example, the direction to the light is probably wrong. v_V is the world-space position, while the light position from the OpenGL structure is intended to be in view/eye/camera space. Everything should be in the same space for the fragment shader.
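
As an illustration of that third point, here is a minimal fragment-shader sketch with everything in one space (eye space). The varyings v_P/v_N and the uniform lightPosEye are placeholders for a position, normal and light position that have already been transformed into eye space:

uniform sampler2D tex;
uniform vec3 lightPosEye;	/* light position, already transformed to eye space */

varying vec3 v_P;		/* fragment position in eye space */
varying vec3 v_N;		/* normal in eye space */

void main()
{
	vec4 texel = texture2D(tex, gl_TexCoord[0].st);
	if (texel.a == 0.0)
		discard;

	vec3 N = normalize(v_N);
	vec3 L = normalize(lightPosEye - v_P);	/* both operands are in eye space */
	vec3 V = normalize(-v_P);		/* in eye space the camera sits at the origin */
	vec3 R = reflect(-L, N);

	vec4 color = gl_FrontMaterial.ambient * (gl_LightModel.ambient + gl_LightSource[0].ambient)
	           + gl_FrontMaterial.diffuse * max(dot(N, L), 0.0) * gl_LightSource[0].diffuse;

	if (gl_FrontMaterial.shininess > 0.0)
	{
		color += gl_FrontMaterial.specular
		       * pow(max(dot(R, V), 0.0), gl_FrontMaterial.shininess)
		       * gl_LightSource[0].specular;
	}

	gl_FragColor = texel * color;
}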

Thanks for the answers! :)
I think I figured it out, and it would seem the theory was OK after all. It was my assets.
Apparently the meshes I got had the x and z axes swapped. When I modified the assets, everything started to work as intended.

Thank You for Your responses. However, they fuelled some more questions on my side:

Why do I need that? I didn’t think it was necessary, and… the shader fails to compile with that included…

Ok, but how do I calculate diffuse and specular colour values without these variables?

Because I want the light to be moving independently from the camera. For that I need to compute everything in world space, so that the eye/view/camera transform doesn’t affect lighting. Is that wrong?

I know it’s wrong; that’s the whole point! I was wondering why using just the transpose gives more reasonable results than using the inverse transpose (which I tried) or the unmodified matrix.

Well, as I said, I want the lighting calculations to be done in world space so that they are not affected by the camera. Is that wrong? And if so, why?

Because I want the light to be moving independently from the camera. For that I need to compute everything in world space, so that the eye/view/camera transform doesn’t affect lighting. Is that wrong?

You have a light position in world space. Transform it to camera space with the camera’s matrix. Then you have the light’s position in camera space. See? Problem solved, and you don’t need a useless and potentially dangerous world-space matrix. Just go straight from model space to camera space.

I was wondering why using just the transpose gives more reasonable results than using the inverse transpose (which I tried) or the unmodified matrix.

The answer is in your question. And it probably explains why you think the mesh data is wrong. If you have to transpose a matrix to make the math work, that can only be because the matrix was already transposed.

So your matrix conventions are probably not correct. In C/C++, you have some object or array or something to store your matrix data in. You then pass an array of floats to glUniformMatrix to upload it. Two questions:

1: There is a boolean parameter to glUniformMatrix. Do you pass GL_TRUE or GL_FALSE?

2: The array of floats you pass in. What are the indices of the translation components of this array? Are the translations in index 12, 13, and 14, or 3, 7, and 11?
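
For reference: with the transpose parameter set to GL_FALSE, OpenGL reads the sixteen floats in column-major order, i.e. an array float m[16] maps to the matrix as

	m[0]  m[4]  m[8]   m[12]
	m[1]  m[5]  m[9]   m[13]
	m[2]  m[6]  m[10]  m[14]
	m[3]  m[7]  m[11]  m[15]

so the translation belongs in m[12], m[13] and m[14]. If the matrices are stored row-major (translation in 3, 7 and 11) but uploaded with GL_FALSE, every matrix arrives transposed - which is exactly the situation where an extra transpose in the shader appears to "fix" things.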

Sorry, I don’t fully get how this should work :( Could You provide a code snippet?

Actually, now I’m sure the mesh data was wrong. I verified it in another app. That is to be expected, because the code:

v_N = normalize(mat3(modelMat) * gl_Normal);

now gives proper results.

Could You provide a code snippet?

I can provide this.

Ok. I’ve read the page a couple of times now and I’m still not 100% sure what Your point is :P

(BTW, the whole book seems like a great resource, thanks! :) )

However, I noticed that I can get rid of the model matrix by using gl_ModelViewMatrix instead (loading only the model transform into GL_MODELVIEW).

Vertex Shader:

varying vec3 v_V;
varying vec3 v_N;
varying vec3 v_V2;
varying vec3 v_N2;
uniform mat4 viewMat;

void main()
{
        gl_Position = gl_ProjectionMatrix * viewMat *  gl_ModelViewMatrix * gl_Vertex;
        
        /* Model space. */
        v_V = (gl_ModelViewMatrix * gl_Vertex).xyz;       
        v_N = gl_NormalMatrix * gl_Normal;
        
        /* Eye space. */
        v_V2 = (viewMat * gl_ModelViewMatrix * gl_Vertex).xyz;
        v_N2 = mat3(viewMat) * gl_NormalMatrix * gl_Normal;
        
        gl_TexCoord[0] = gl_MultiTexCoord0;
}

However, the only gain I can see here is being able to use gl_NormalMatrix and not having to worry about scaling.

Obviously I need a view matrix plus a second matrix. What’s so wrong with having a separate model matrix? Won’t I need one in OpenGL 3+ anyway?

Ok. I’ve read the page a couple of times now and I’m still not 100% sure what Your point is

You asked how to do lighting in camera space. I linked you to a detailed explanation of how to do lighting, complete with source code, which has an entire section on lighting and spaces.

Obviously I need a view matrix plus a second matrix. What’s so wrong with having a separate model matrix?

This. It’s generally best to avoid having an explicit world space in shaders.

All you “need” is a matrix that goes from model-space to camera-space, and a matrix that goes from camera-space to clip-space. And the only reason you need two instead of one is because you need a space in which to do lighting. And clip-space is a 4D homogeneous space that isn’t linear, so lighting in it isn’t a good idea.
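
As a rough sketch of that setup (the names modelToCameraMat, cameraToClipMat and normalMatrix are just placeholders; normalMatrix would be the inverse transpose of the upper 3x3 of modelToCameraMat, computed on the CPU, and the light position would likewise be transformed into camera space before being uploaded):

uniform mat4 modelToCameraMat;	/* model space -> camera space */
uniform mat4 cameraToClipMat;	/* camera space -> clip space */
uniform mat3 normalMatrix;	/* inverse transpose of mat3(modelToCameraMat), computed on the CPU */

varying vec3 v_P;		/* position in camera space */
varying vec3 v_N;		/* normal in camera space */

void main()
{
	vec4 cameraPos = modelToCameraMat * gl_Vertex;
	gl_Position = cameraToClipMat * cameraPos;

	v_P = cameraPos.xyz;
	v_N = normalize(normalMatrix * gl_Normal);

	gl_TexCoord[0] = gl_MultiTexCoord0;
}

The fragment shader then does all of its lighting with v_P, v_N and a camera-space light position, and no explicit world-space matrix is ever needed.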

I think I get it now. I’ll try to reorganize my shaders accordingly. Thanks very much for Your help! :)
