Lighting: ModelView matrix or just Model matrix?

The basic question is: why do several lighting tutorials I have read say the ModelView matrix should be used to calculate the normal transformation matrix? Why not the Model matrix only?

Some details:
I use GLSL to add lighting to my scene.
In the vertex shader I have to transform the normal vector of each object’s surface by multiplying it by a normal matrix:

Normal vectors are also transformed from object coordinates to eye coordinates for the lighting calculation. Note that normals are transformed differently than vertices: a normal is multiplied by the transpose of the inverse of the GL_MODELVIEW matrix.

[from here ]
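As a side note (my own reasoning, not part of the quoted tutorial): the transpose-inverse rule follows from keeping normals perpendicular to transformed tangents. With M the (model)view matrix and G the normal matrix:

$$ n^\top t = 0, \qquad t' = M\,t, \qquad n' = G\,n $$
$$ (n')^\top t' = n^\top G^\top M\,t = 0 \ \text{for all tangents } t \;\Longrightarrow\; G^\top M = I \;\Longrightarrow\; G = (M^{-1})^\top $$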

I calculate the Projection*View*Model matrix myself and set it in the shader. For lighting I followed the given code:

glm::mat3 mNorm = glm::inverseTranspose(glm::mat3(mModelView));

It works: I can see shading, but it moves when I move the camera. This obviously happens because I use glm::lookAt to calculate the View matrix.

Once I changed the above formula to:

glm::mat3 mNorm = glm::inverseTranspose(glm::mat3(mModel));

everything seems to run the way it should.
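For completeness, here is a rough sketch of the CPU-side setup I ended up with (function and variable names are just my placeholders):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::lookAt, glm::perspective
#include <glm/gtc/matrix_inverse.hpp>     // glm::inverseTranspose

struct ObjectMatrices {
    glm::mat4 mvp;     // Projection * View * Model, used for gl_Position
    glm::mat3 normal;  // transforms normals into world space
};

// Build the matrices that get uploaded as uniforms.
ObjectMatrices buildMatrices(const glm::mat4& mModel,
                             const glm::vec3& eye,
                             const glm::vec3& target,
                             float aspect)
{
    glm::mat4 mView = glm::lookAt(eye, target, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 mProj = glm::perspective(glm::radians(45.0f), aspect, 0.1f, 100.0f);

    ObjectMatrices out;
    out.mvp    = mProj * mView * mModel;
    // Normal matrix from the Model matrix only, so normals end up in world
    // space, the same space as my (world-space) light position.
    out.normal = glm::inverseTranspose(glm::mat3(mModel));
    return out;
}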

why do several lighting tutorials I have read say the ModelView matrix should be used to calculate the normal transformation matrix? Why not the Model matrix only?

If you’re talking about the OpenGL fixed-function pipeline, then that’s because that is how OpenGL works. There is no explicit “world” space. There is model space, which represents the values in the vertex attributes. And there is eye/camera/view space, which is relative to the camera.

In fixed-function, lighting is done in camera space. The light position/direction is transformed by the modelview matrix at the moment glLight is called. Thus, in order to do lighting in camera space, the normals must be transformed into camera space.
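A minimal fixed-function sketch of that ordering (the values are made up; only the order of the calls matters):

#include <GL/gl.h>
#include <GL/glu.h>

// The light position is transformed by whatever is on the modelview stack
// at the moment glLightfv is called.
void setupFrame(double eyeX, double eyeY, double eyeZ)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeX, eyeY, eyeZ,   // camera position
              0.0,  0.0,  0.0,    // look-at point
              0.0,  1.0,  0.0);   // up vector

    // World-space light position: since the modelview currently holds only
    // the view transform, OpenGL stores this position in eye space.
    const GLfloat lightPosWorld[4] = { 10.0f, 20.0f, 5.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPosWorld);

    // Per-object model transforms are pushed on the stack only after this,
    // and then the objects are drawn.
}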

BTW: the reason the lighting changed when the camera moved is that your normals were transformed into camera space, but you never transformed your light positions/directions into camera space.
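In other words, if you keep the inverseTranspose(glm::mat3(mModelView)) normal matrix, you also have to send the light into eye space, along these lines (a sketch; the names are placeholders):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>   // glm::inverseTranspose

struct EyeSpaceLighting {
    glm::mat3 normalMatrix;  // object space -> eye space, for normals
    glm::vec3 lightPosEye;   // light position in eye space
};

// If normals go to eye space via the ModelView matrix, the light position
// has to be brought into eye space as well.
EyeSpaceLighting toEyeSpace(const glm::mat4& mModel,
                            const glm::mat4& mView,
                            const glm::vec3& lightPosWorld)
{
    glm::mat4 mModelView = mView * mModel;

    EyeSpaceLighting out;
    out.normalMatrix = glm::inverseTranspose(glm::mat3(mModelView));
    // w = 1 for a point light; the View matrix moves it into eye space.
    out.lightPosEye  = glm::vec3(mView * glm::vec4(lightPosWorld, 1.0f));
    return out;
}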

There are good reasons for avoiding an explicit world space in your matrix computations, whether for positions or normals; chief among them is the loss of floating-point precision when coordinates get far from the origin in a large world.

Essentially, your question is really: in what space should you do the lighting? Generally, it doesn’t matter. You can do lighting in whatever space you want, as long as you transform everything into that space. Since positions are already going to be transformed into camera space, and an explicit world space can have the previously stated problems, camera space is a reasonable compromise space for doing lighting.

But you could do lighting in model space. Or the space tangent to a texture coordinate mapping, as is the case when doing bump-mapping.
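For example, for tangent space you would build a TBN basis per vertex and bring the light direction into it, along these lines (a sketch with GLM on the CPU; the same math applies in a shader):

#include <glm/glm.hpp>

// Bring a light direction into tangent space, given a unit normal and a
// unit tangent expressed in the same space as the light direction.
glm::vec3 lightDirToTangentSpace(const glm::vec3& normal,
                                 const glm::vec3& tangent,
                                 const glm::vec3& lightDir)
{
    // Re-orthogonalize the tangent against the normal (Gram-Schmidt),
    // then complete the right-handed basis with the bitangent.
    glm::vec3 t = glm::normalize(tangent - normal * glm::dot(normal, tangent));
    glm::vec3 b = glm::cross(normal, t);

    // The TBN columns are the basis vectors; since the basis is orthonormal,
    // its transpose maps from the surrounding space into tangent space.
    glm::mat3 tbn(t, b, normal);
    return glm::transpose(tbn) * lightDir;
}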

Thanks for the nice explanations and the link!
I’ve changed it and it works perfectly.

Anyway, Alfonse, there is also the possibility of using integer coordinates for the camera. This is even faster than using doubles and you can easily use 64-bit integers for really big worlds. It works for me and I’ve read it also works for others.

The world unit must be small enough that one can still make sufficiently fine camera adjustments.
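Roughly what I mean, as a sketch (one integer unit per millimetre is just an example choice):

#include <cstdint>
#include <glm/glm.hpp>

// World positions stay in 64-bit integers; the subtraction against the
// camera happens in integer space (exact, even for huge worlds) and only
// the small camera-relative difference is converted to float.
struct WorldPos {
    std::int64_t x, y, z;   // e.g. one unit = one millimetre
};

glm::vec3 relativeToCamera(const WorldPos& object, const WorldPos& camera,
                           float unitsPerMeter = 1000.0f)
{
    return glm::vec3(float(object.x - camera.x),
                     float(object.y - camera.y),
                     float(object.z - camera.z)) / unitsPerMeter;
}

The model matrix then translates by this camera-relative offset, and the view matrix only has to rotate, so single precision is enough for rendering.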