Why is the normal vector transformed by the modelview matrix’s inverse?

According to the OpenGL documentation, in the per-vertex operations stage of the pipeline, each vertex’s spatial coordinates are transformed by the modelview matrix, while the normal vector is transformed by that matrix’s inverse. WHY? I don’t understand.

Would someone explain that for me? I will be very thankful for your answer.

Doggle

from nvidia’s presentation about per-pixel lighting:
You may know from the Red Book or various other sources that “normals are transformed by the inverse-transpose of the modelview matrix”, but let’s consider why…

  1. Translation of position does not affect normals.
  2. Rotation is applied to normals just like it is to position.
  3. Uniform scaling of position does not affect the direction of normals.
  4. Non-uniform scaling of position does affect the direction of normals! (see the sketch after this list)
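
A minimal numeric sketch of item 4 (my own example, not from the presentation): a plane’s tangent and normal start out perpendicular, but after a non-uniform scale they stay perpendicular only if the normal is transformed by the inverse-transpose rather than by the scale itself.

#include <cstdio>

// Hand-picked example: the plane x + y = 0 has tangent t = (1,-1,0)
// and normal n = (1,1,0). Apply the non-uniform scale S = diag(2,1,1):
// the tangent transforms like a position; the normal is transformed
// once by S itself (naive) and once by (S^-1)^T (correct).
int main() {
    double t[3] = { 1.0, -1.0, 0.0 };   // tangent: lies in the plane
    double n[3] = { 1.0,  1.0, 0.0 };   // normal:  dot(n, t) == 0

    double S[3]     = { 2.0, 1.0, 1.0 };  // diagonal of S
    double SinvT[3] = { 0.5, 1.0, 1.0 };  // diagonal of (S^-1)^T (S is diagonal, so transposing changes nothing)

    double tS[3], nS[3], nIT[3];
    for (int i = 0; i < 3; ++i) {
        tS[i]  = S[i] * t[i];       // tangent transformed like a position
        nS[i]  = S[i] * n[i];       // normal transformed naively by S
        nIT[i] = SinvT[i] * n[i];   // normal transformed by the inverse-transpose
    }

    double dotNaive = 0.0, dotCorrect = 0.0;
    for (int i = 0; i < 3; ++i) {
        dotNaive   += nS[i]  * tS[i];
        dotCorrect += nIT[i] * tS[i];
    }
    printf("naive:   n . t = %g\n", dotNaive);    // 3: no longer perpendicular
    printf("correct: n . t = %g\n", dotCorrect);  // 0: still perpendicular
    return 0;
}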

Thank you gvm,

According to the Red Book, the normal vector is just multiplied by the inverse of the modelview matrix. I tried to verify this with a specific example, and found it to be wrong.

As you mentioned, it is the inverse-transpose matrix. What is the inverse-transpose? Inverse and transpose: which operation should be applied to the modelview matrix first?

The 4 items you listed are correct, but how can we get the right transformed normal from a modelview matrix that mixes translation and rotation with uniform and non-uniform scaling? Can the irrelevant components be eliminated by some simple operations like transpose and inverse?

I have referred to several books on computer graphics, but none of them explains this particular problem in detail.

Doggle

Originally posted by gvm:
from nvidia’s presentation about per-pixel lighting:
You may know from the Red Book or various other sources that “normals are transformed by the inverse-transpose of the modelview matrix”, but let’s consider why…

  1. Translation of position does not affect normals.
  2. Rotation is applied to normals just like it is to position.
  3. Uniform scaling of position does not affect the direction of normals.
  4. Non-uniform scaling of position does affect the direction of normals!

you could easily get the inverse-transpose matrix in a vertex program (state.matrix.modelview.invtrans).
but if you want to build it from just the modelview matrix, then (as the ARB_vertex_program specification says) use the transpose of the inverse matrix (i.e. first take the inverse, then the transpose).
note that the modelview matrix mixes not only translation, rotation, and scaling, but also the view (camera) transforms!
Modelview = ViewTransforms * WorldTransforms!
maybe i’m not right somewhere…
this transformed normal can then be used in the lighting computation…
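
To make the recipe concrete, here is a CPU-side sketch (the function name and array layout are my own choices, not any OpenGL API): take the upper-left 3×3 of the column-major modelview matrix, invert it, and transpose. One shortcut used below: since M^-1 is the transposed cofactor matrix divided by the determinant, writing out the cofactor matrix directly yields (M^-1)^T in a single step.

#include <cstdio>

// Build the 3x3 normal matrix (M^-1)^T from a column-major 4x4 modelview.
// Returns false if the matrix is singular (no inverse exists).
static bool normalMatrix(const double mv[16], double out[9]) {
    // Upper-left 3x3 of the column-major 4x4 (mv[col*4 + row]);
    // this drops the translation column, which does not affect normals.
    double a = mv[0], b = mv[4], c = mv[8];
    double d = mv[1], e = mv[5], f = mv[9];
    double g = mv[2], h = mv[6], i = mv[10];

    double det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g);
    if (det == 0.0) return false;
    double s = 1.0 / det;

    // M^-1 = cofactor(M)^T / det, so (M^-1)^T is just the cofactor
    // matrix over the determinant: inverse and transpose in one go.
    // out is row-major: out[row*3 + col].
    out[0] =  s*(e*i - f*h); out[1] = -s*(d*i - f*g); out[2] =  s*(d*h - e*g);
    out[3] = -s*(b*i - c*h); out[4] =  s*(a*i - c*g); out[5] = -s*(a*h - b*g);
    out[6] =  s*(b*f - c*e); out[7] = -s*(a*f - c*d); out[8] =  s*(a*e - b*d);
    return true;
}

int main() {
    // Column-major modelview: scale(2,1,1) plus a translation
    // (the translation is dropped by the 3x3 extraction anyway).
    double mv[16] = { 2,0,0,0,  0,1,0,0,  0,0,1,0,  5,6,7,1 };
    double nm[9];
    if (normalMatrix(mv, nm))
        printf("nm[0] = %g (expected 0.5)\n", nm[0]);
    return 0;
}

For a modelview built only from rotations and translations the result is just the rotation part again (for orthogonal R, (R^-1)^T = R), which is why the naive transform happens to work in that common case.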

Thank you gvm,

You are such a good person!

Now I know how OpenGL transforms vertex normals, but I still don’t know the principle behind this operation. Is it a precise computation or only a quick approximation?

While the modelview matrix may contain very complex transforms, I cannot figure out a simple way to transform normals. Given the problem, I would solve it as follows:

1. Get a 3D point that lies on the normal line, on the outer side of the surface.

2. Transform that point with the modelview matrix, just as if it were a vertex.

3. Subtract the corresponding transformed vertex from that transformed point to get the transformed normal.

Does what OpenGL does give an identical result? (see the sketch below)
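
A quick sketch of the three steps above (the numbers are hand-picked; the translation is included to show that it cancels in step 3). The subtraction removes the translation but leaves the normal multiplied by the linear part of the matrix, so the result can be compared with what the inverse-transpose rule gives:

#include <cstdio>

// The three-step offset method for the affine map p' = S*p + t with
// S = diag(2,1,1). Step 3 cancels the translation t, leaving S*n,
// i.e. the normal transformed like a position.
int main() {
    double S[3] = { 2.0, 1.0, 1.0 };
    double t[3] = { 5.0, 6.0, 7.0 };

    double v[3] = { 0.0, 0.0, 0.0 };   // a vertex on the plane x + y = 0
    double n[3] = { 1.0, 1.0, 0.0 };   // its (unnormalized) normal

    double p[3], pT[3], vT[3], nOffset[3];
    for (int i = 0; i < 3; ++i) {
        p[i]       = v[i] + n[i];           // step 1: offset a point along the normal
        pT[i]      = S[i] * p[i] + t[i];    // step 2: transform the point
        vT[i]      = S[i] * v[i] + t[i];    //         ... and the vertex
        nOffset[i] = pT[i] - vT[i];         // step 3: subtract; equals S[i] * n[i]
    }

    // What the inverse-transpose rule gives: for diagonal S this is n / S.
    double nIT[3] = { n[0] / S[0], n[1] / S[1], n[2] / S[2] };

    printf("offset method:     (%g, %g, %g)\n", nOffset[0], nOffset[1], nOffset[2]);
    printf("inverse-transpose: (%g, %g, %g)\n", nIT[0], nIT[1], nIT[2]);
    return 0;
}

With pure rotations or a uniform scale the two outputs differ at most in length, but under the non-uniform scale above they come out as (2, 1, 0) and (0.5, 1, 0): different directions, so the offset method matches OpenGL only when no non-uniform scaling is involved.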

Originally posted by gvm:
you could easily get the inverse-transpose matrix in a vertex program (state.matrix.modelview.invtrans).
but if you want to build it from just the modelview matrix, then (as the ARB_vertex_program specification says) use the transpose of the inverse matrix (i.e. first take the inverse, then the transpose).
note that the modelview matrix mixes not only translation, rotation, and scaling, but also the view (camera) transforms!
Modelview = ViewTransforms * WorldTransforms!
maybe i’m not right somewhere…
this transformed normal can then be used in the lighting computation…

Dear gvm,

Thank you for your help. I have found the key in the OpenGL Red Book, Appendix G.

The relevant passage is quoted below:

Transforming Normals
Normal vectors don’t transform in the same way as vertices, or position vectors. Mathematically, it’s better to think of normal vectors not as vectors, but as planes perpendicular to those vectors. Then, the transformation rules for normal vectors are described by the transformation rules for perpendicular planes.
A homogeneous plane is denoted by the row vector (a, b, c, d), where at least one of a, b, c, or d is nonzero. If q is a nonzero real number, then (a, b, c, d) and (qa, qb, qc, qd) represent the same plane. A point (x, y, z, w)^T is on the plane (a, b, c, d) if ax + by + cz + dw = 0. (If w = 1, this is the standard description of a Euclidean plane.) In order for (a, b, c, d) to represent a Euclidean plane, at least one of a, b, or c must be nonzero. If they’re all zero, then (0, 0, 0, d) represents the “plane at infinity,” which contains all the “points at infinity.”

If p is a homogeneous plane and v is a homogeneous vertex, then the statement “v lies on plane p” is written mathematically as pv = 0, where pv is ordinary matrix multiplication. If M is a nonsingular vertex transformation (that is, a 4 × 4 matrix that has an inverse M^-1), then pv = 0 is equivalent to pM^-1Mv = 0, so Mv lies on the plane pM^-1. Thus, pM^-1 is the image of the plane under the vertex transformation M.

If you like to think of normal vectors as vectors instead of as the planes perpendicular to them, let v and n be vectors such that v is perpendicular to n. Then, n^T v = 0. Thus, for an arbitrary nonsingular transformation M, n^T M^-1 M v = 0, which means that n^T M^-1 is the transpose of the transformed normal vector. Thus, the transformed normal vector is (M^-1)^T n. In other words, normal vectors are transformed by the inverse transpose of the transformation that transforms points. Whew!
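
A small numeric check of this conclusion (my own example: a shear, chosen so the inverse can be written by hand). The shear maps (x, y, z) to (x + y, y, z), so the plane y = 0 does not move at all; the correct transformed normal must therefore still be (0, 1, 0), and only the inverse-transpose rule delivers that.

#include <cstdio>

// Numeric check of the Appendix G argument with the shear
// M = [[1,1,0],[0,1,0],[0,0,1]], whose inverse is [[1,-1,0],[0,1,0],[0,0,1]].
static void mul(const double m[3][3], const double x[3], double y[3]) {
    for (int r = 0; r < 3; ++r)
        y[r] = m[r][0]*x[0] + m[r][1]*x[1] + m[r][2]*x[2];
}

int main() {
    double M[3][3]     = {{1, 1, 0}, {0, 1, 0}, {0, 0, 1}};
    double MinvT[3][3] = {{1, 0, 0}, {-1, 1, 0}, {0, 0, 1}};  // (M^-1)^T

    double v[3] = { 1, 0, 0 };   // tangent of the plane y = 0
    double n[3] = { 0, 1, 0 };   // its normal: n^T v = 0

    double vM[3], nNaive[3], nIT[3];
    mul(M, v, vM);               // position-style transform of the tangent
    mul(M, n, nNaive);           // wrong: normal transformed like a position
    mul(MinvT, n, nIT);          // right: inverse-transpose transform

    printf("naive:   n = (%g, %g, %g)\n", nNaive[0], nNaive[1], nNaive[2]); // (1, 1, 0)
    printf("correct: n = (%g, %g, %g)\n", nIT[0], nIT[1], nIT[2]);          // (0, 1, 0)
    printf("n^T v after: naive %g, correct %g\n",
           nNaive[0]*vM[0] + nNaive[1]*vM[1] + nNaive[2]*vM[2],
           nIT[0]*vM[0] + nIT[1]*vM[1] + nIT[2]*vM[2]);
    return 0;
}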

Originally posted by gvm:
you could easily get the inverse-transpose matrix in a vertex program (state.matrix.modelview.invtrans).
but if you want to build it from just the modelview matrix, then (as the ARB_vertex_program specification says) use the transpose of the inverse matrix (i.e. first take the inverse, then the transpose).
note that the modelview matrix mixes not only translation, rotation, and scaling, but also the view (camera) transforms!
Modelview = ViewTransforms * WorldTransforms!
maybe i’m not right somewhere…
this transformed normal can then be used in the lighting computation…