Transforming Normals

The OpenGL spec states that normals should be transformed to eye space using the inverse transpose of the modelview matrix. What is the reason for that? In a little program I made, I used vertex shaders. Instead of multiplying gl_Normal by gl_NormalMatrix, I multiplied vec4(gl_Normal, 0.0) by gl_ModelViewMatrix, and the lighting results were the same.
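
For reference, a minimal sketch of the two variants being compared; this is my own reconstruction of the shader described above, using the old GLSL 1.10 built-ins, and the variant labels are mine:

```glsl
// GLSL 1.10-style vertex shader using the legacy built-ins.
varying vec3 eyeNormal;

void main()
{
    // Variant A: what the spec prescribes; gl_NormalMatrix is the
    // inverse transpose of the upper-left 3x3 of the modelview matrix.
    eyeNormal = normalize(gl_NormalMatrix * gl_Normal);

    // Variant B: the shortcut from the question; the w = 0 component
    // keeps the translation column from affecting the result.
    // eyeNormal = normalize((gl_ModelViewMatrix * vec4(gl_Normal, 0.0)).xyz);

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```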

If the modelview matrix contains no scale, its inverse transpose is equal to itself. Thus, there is no difference between using the IT MV matrix and just the MV matrix.

Try putting a scale in your modelview matrix, and the difference between transforming normals with the IT matrix and with the plain MV matrix will become visible.
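
To make that concrete, here is a small worked example (the numbers are mine, not from the posts) with a non-uniform scale. Take the plane x + y = 0, whose normal n = (1, 1, 0) is perpendicular to the tangent t = (1, -1, 0):

$$
M = \mathrm{diag}(2, 1, 1): \quad Mt = (2, -1, 0), \quad Mn = (2, 1, 0), \quad (Mn)\cdot(Mt) = 4 - 1 = 3 \neq 0,
$$
$$
(M^{-1})^{\top} n = \bigl(\tfrac{1}{2},\, 1,\, 0\bigr), \quad \bigl((M^{-1})^{\top} n\bigr)\cdot(Mt) = 1 - 1 = 0 .
$$

So under non-uniform scale the plain modelview transform gives a vector that is no longer perpendicular to the surface, while the inverse transpose keeps it perpendicular (it still needs renormalizing afterwards).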

Thanks for the reply. So, if the modelview is only a rotation matrix, its inverse transpose does the same job as the matrix itself. You said that we need the IT matrix because of scaling. That is, if we use scaling, the modelview matrix will give wrong results, but the IT matrix will give correct ones? But normals are never right with scaling; even with the IT matrix, they need to be renormalized.
If I use glScalef(5, 5, 5), for example, transform the normal using the MV matrix and renormalize, I think the results will be correct.
I don't know about non-uniform scaling, though. Is that the case where we need the IT matrix?

If the modelview matrix contains no scale, its inverse transpose is equal to itself. Thus, there is no difference between using the IT MV matrix and just the MV matrix.
This is true only if the modelview is a pure rotation matrix, with no translation. If there is scaling or translation, then the inverse transpose will not match the matrix itself, in general.
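
The reason the pure-rotation case is special: a rotation matrix is orthogonal, so its inverse is its transpose, and the inverse transpose collapses back to the matrix itself:

$$
R^{-1} = R^{\top} \;\Rightarrow\; (R^{-1})^{\top} = (R^{\top})^{\top} = R .
$$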

mikeman, you should always use the inverse transpose unless you know for a fact that the matrix is always a rotation matrix.
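
A minimal sketch of that advice in shader form; this assumes a GLSL version that provides inverse() (1.40+) with the legacy built-ins still available through the compatibility profile, and the version directive is my assumption, not something from the thread. gl_NormalMatrix already gives you the same matrix, so this is only useful if you want to build it yourself:

```glsl
#version 150 compatibility
// Legacy inputs (gl_Normal, gl_Vertex) and matrices still exist in the
// compatibility profile; inverse() requires GLSL 1.40 or newer.
out vec3 eyeNormal;

void main()
{
    // Inverse transpose of the upper-left 3x3 of the modelview matrix.
    // Correct for rotation, translation, and uniform or non-uniform scale;
    // normals still need renormalizing because scale changes their length.
    mat3 normalMat = transpose(inverse(mat3(gl_ModelViewMatrix)));
    eyeNormal = normalize(normalMat * gl_Normal);

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```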