I am trying to implement simple directional lighting. While simple diffuse shading is an easy thing to do, the problems come with the specular component.
When the viewer (camera) moves, the specular highlight is supposed to move with it. The problem is that I can't achieve that effect.
I have read that the correct thing to do is just to multiply the normal by the normal matrix, and that the normal matrix is just the inverse transpose of the ModelView matrix.
Since I'm using the GLM library, I get the view matrix by calling glm::lookAt with the specified parameters, and multiply it with my model's matrix to get the modelview matrix. That matrix, named NMTX in the shaders, is sent as a uniform and is multiplied with the normal in the vertex shader like this:
fragmentNormal = normalize(NMTX * vertexNormal);
Then fragmentNormal is sent to the fragment shader:
This code generates the specular highlights fine, but they only change when the camera rotates. I am aware this may be a coordinate-space issue, as the light direction and halfway vector are in model coordinates, while the normals are in eye coordinates (after multiplying by the matrix in the vertex shader). I tried to overcome that by multiplying these two vectors by the normal matrix too, but then the specular highlight just became completely static.
Ignoring performance issues in the above shaders (as I just want to understand the issue I described), what am I doing wrong?
Make sure all the relevant vectors (i.e. light direction, half vector and normal) are in the same coordinate space, preferably eye/camera space, and that they are normalized prior to any calculation. The code you posted is incomplete and it's hard to tell what could be wrong. Post the complete shader code.
This looks like it should work fine if everything is in camera space. Let’s see how you calculate the normal matrix and the half vector.
Note that if you want fixed light direction in world space, you shouldn’t multiply the light vector with the normal matrix because it will rotate the light together with the model, which, if I correctly interpreted your description, might be precisely what’s happening.
I must say I posted a modified fragment shader in the previous post - that version makes the reflection completely static - no change with rotation, and none with movement either.
Multiplying the halfvector and light direction with the normal matrix might be the problem. To bring these vectors to eye space you should multiply them with the inverse transpose of the view matrix, not the modelview matrix. To avoid confusion, it's best to do this prior to sending them to the shader.
So on cpu side:
do the camera (view) transformations
the current matrix now represents the view matrix (world-to-camera matrix)
calculate the inverse transpose of the view matrix (world-to-camera normal matrix)
do the model transformations on top of camera transformations
the current matrix now represents the modelview matrix (model-to-camera matrix, i.e. your regular modelview matrix)
calculate the inverse transpose of the modelview matrix (model-to-camera normal matrix, i.e. your regular normal matrix)
send model-to-camera matrix as modelview matrix to the shader (as you're already doing)
send model-to-camera normal matrix as normal matrix to the shader (as you're already doing)
calculate light direction and halfvector in world space (as you’re already doing)
multiply halfvector and light direction with the world-to-camera normal matrix to bring them to camera space
send camera-space halfvector and light direction to the shader
In vertex shader:
multiply the input normal with normal matrix and send it as varying to fragment shader (as you’re already doing)
In fragment shader:
calculate the specular term using normal (which was transformed to camera space in vertex shader), light direction and halfvector (which were transformed to camera space on the cpu side)
So, just to clarify. Correct me if I’m wrong, and I’m sure I am somewhere
Doing the camera transformations, which you mentioned, is equivalent to calling glm::LookAt function, right?
After I get that matrix I calculate the model-view matrix and send its inverse transpose to the shader, using it only for the normals.
Here comes my next question - do I calculate the model-view matrix with the regular view matrix, or with the inverse transposed one?
Then I calculate the half vector and light direction by multiplying them by the inverse transposed view matrix. The half vector needs to be calculated from the camera's direction and the light's direction, so do we calculate it with the original light direction, or with the multiplied one?
When I do all of these things, I still get a static reflection. Here is the code. I have moved the matrix calculations to the shaders just for now, to make the code simpler for me to understand.
Calculating the matrices:
glm::mat4 v, p; // v - View matrix, p - Projection matrix
game->defaultCamera->GetMtx(&p, &v);
MVP = p * v * modelMatrix; // Model-View-Projection
MV = v * modelMatrix; // Model-View
NMTX = glm::mat3(glm::inverse(glm::transpose(MV))); // Normal matrix (inverse transpose of Model-View)
[QUOTE=overTaker;1257648]So, just to clarify. Correct me if I’m wrong, and I’m sure I am somewhere
Doing the camera transformations, which you mentioned, is equivalent to calling the glm::lookAt function, right?[/quote]
Yes.
Yes
[QUOTE=overTaker;1257648]
Here comes my next question - do I calculate the model-view matrix with the regular view matrix, or with the inverse transposed one?[/quote]
Regular.
The important thing is to be consistent with your coordinate spaces when doing the calculations. If your camera view direction and light direction are given in world space, adding them gives you the halfvector in world space. Now all three of these vectors are in world space. You need them in camera space for the shader, so you multiply them with the world-to-camera matrix and send them to the shader (or do it in the shader if you prefer). The direct answer to your question is: you can either add the vectors in world space and then transform the sum, or transform each vector first and then add them, for example:
halfvector_worldSpace = lightDirection_worldSpace + viewDirection_worldSpace;
halfvector_cameraSpace = matrixWorldToCamera * halfvector_worldSpace;
lightDirection_cameraSpace = matrixWorldToCamera * lightDirection_worldSpace;
viewDirection_cameraSpace = matrixWorldToCamera * viewDirection_worldSpace; // should calculate to (0, 0, 1) when normalized
In both cases you end up with the same values for lightDirection_cameraSpace and halfvector_cameraSpace, which are the vectors you need to calculate the specular term.
Yes
You're still multiplying your light direction and halfvector with the modelview matrix instead of the view matrix. So instead of:
Where V is the view matrix you get from your camera after calling LookAt().
[QUOTE=overTaker;1257648]
I can’t also quite understand one thing: what is the purpose of performing inverse-transpose on matrices?[/QUOTE]
To avoid problems with non-uniform scaling of the normals. For simplicity, you can skip the inverse transposes if you're not using non-uniform scaling. More on this here: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/
And now you're left with only 3 matrices:
modelview matrix - used to transform normals from model space to camera space
modelviewprojection matrix - used to transform vertices from model space to screen space
view matrix - used to transform light vectors from world space to camera space
From what I can tell, the single most important thing is to make sure all the coordinate spaces for all the input vectors - normal, half vector and light direction - are consistent. So they don't really need to be in world space at the very beginning. Everything can be done in model space (as it is the simplest one to use in my code).
In fact, the strange thing for me here is that if I take the model matrix away from anything, it doesn't really make any difference (well, except for the MVP matrix of course, but that isn't the case in this topic). I wonder why that is.
I was able to achieve the effect I wanted by doing what you advised, but instead of this:
So all three input vectors are in their model (or better called local) coordinates.
I also multiplied the normal by the inverse transposed view matrix, instead of the inverse transposed modelview matrix (I have learned that the model matrix in the equation doesn't change anything).
The difference in the above snippets is simply that I made the W component 1 instead of 0. This is very strange to me as well. I have read that a W of 0 is used for directions, as when such a vector is multiplied with a matrix, the translation is ignored. Here, both vectors are directions, so why does W = 1.0f work fine?
I can still tell that the effect I got this way is much sharper than it probably should be.
Well, w should be 0.0 when you're multiplying a vector. The difference that actually made it work is, I think, the minus sign, which makes sense since you're calculating the light direction vector as position - target instead of, properly, target - position. This reverses the light direction and probably screws up the halfvector too if the view vector is properly directed. So try to reverse the operands when calculating the direction vector, keep w at 0.0 and remove the minus sign:
If your model is not transformed relative to world space then your model matrix will be the identity and your view matrix will be equal to the modelview matrix. But normally you want your shader to behave properly on transformed models, so the distinction between the modelview and view matrices is necessary.
That explained a lot! While it's time to rearrange my code to make it easier to perform the calculations, which is pretty hard right now, I must thank you for taking the time to explain these things to me!