How do I get a 3x3 normal matrix for a model using row-major matrices?

I’m learning lighting, and noticed that most tutorials use GLM (for good reason, I’m sure). GLM apparently uses column-major matrices, but in my case I preferred row-major for my own math library.

I’m trying to get the normal matrix of a model for my lighting calculations. A lot of tutorials suggest:

mat3 normalMatrix = transpose(inverse(mat3(model)));

Others suggest taking the upper-left 3x3 matrix found within the modelview matrix. I guess that’s to remove the translation portion, since we only need the rotation.

If I take my modelview matrix (viewMatrix * modelMatrix), and just take…


// row major
[0][0] to [0][2]
[1][0] to [1][2]
[2][0] to [2][2]

…from my 4x4 modelView matrix, would that essentially be my 3x3 normal matrix that I can send to the GPU?

Note that OpenGL uses column-major matrices unless told otherwise, e.g. with the row_major layout qualifier in GLSL, or setting the transpose parameter to GL_TRUE in glUniformMatrix().

That would be the top-left 3x3 matrix.

If it isn’t orthonormal, you’d still need to find the inverse transpose to use it as a normal matrix. If the matrix is in row-major order, sending it to OpenGL as column-major will perform the transpose implicitly, but you would still need to perform the inversion somewhere.

If it is orthonormal, then you can just use the top-left 3x3 submatrix, or even just use the 4x4 matrix directly to transform normals which have been converted to a vec4 with a zero W component.

Also, note that if the bottom row is [0 0 0 1] (i.e. the matrix doesn’t contain any projective component), then the inverse of the top-left 3x3 sub-matrix is the same as taking the top-left 3x3 submatrix of the inverse (i.e. the translation component in the right-hand column won’t affect the rotation component when computing the inverse). Inverting a 3x3 matrix is cheaper than a 4x4 matrix, but unless you’re doing that a lot, if you already have a 4x4 inverse there’s no need to add a 3x3 inverse just for this.
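The recipe described above can be sketched in plain Python (helper names here are illustrative, not from any particular library): take the top-left 3x3 of the row-major 4x4 modelview matrix, invert it with Cramer’s rule, and transpose.

```python
# Sketch: build a normal matrix from a row-major 4x4 modelview matrix.
# All helper names are illustrative, not from any particular library.

def upper_left_3x3(m4):
    """Top-left 3x3 submatrix of a row-major 4x4 matrix."""
    return [row[:3] for row in m4[:3]]

def transpose3(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def inverse3(m):
    """Invert a 3x3 matrix via Cramer's rule (adjugate / determinant)."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [
        [e*i - f*h, c*h - b*i, b*f - c*e],
        [f*g - d*i, a*i - c*g, c*d - a*f],
        [d*h - e*g, b*g - a*h, a*e - b*d],
    ]
    return [[x / det for x in row] for row in adj]

def normal_matrix(modelview4):
    """transpose(inverse(mat3(modelview))), as in the GLSL one-liner."""
    return transpose3(inverse3(upper_left_3x3(modelview4)))
```

For example, a modelview of scale (2, 3, 4) plus a translation yields the normal matrix diag(1/2, 1/3, 1/4): the translation column is discarded by the 3x3 extraction, exactly as described above.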

[QUOTE=GClements;1283827]Note that OpenGL uses column-major matrices unless told otherwise… If it isn’t orthonormal, you’d still need to find the inverse transpose to use it as a normal matrix.[/QUOTE]

Thank you for the answer.

I actually did set the transpose option in glUniformMatrix to GL_TRUE. I also looked at my modelview matrix, and the last row is 0, 0, 0, 1, so it doesn’t contain a projective component.

I’m sorry about the following stupid question, but how do I know if my modelview matrix is orthonormal? I’ve looked up lots of articles before replying, but I still don’t know how to find that out. I imagine one of the features of an orthonormal matrix would be that its axis vectors are normalized.

The most I found on Google was: if you multiply the modelview matrix by its own transpose and it’s orthonormal, you get the identity matrix. Not sure if I got my facts straight, but if I did, my modelview matrix is not orthonormal. So in that case I’d have to invert and transpose the modelview matrix?
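That check (M · Mᵀ equals the identity) is straightforward to code. A minimal sketch for a row-major 3x3 block (the helper name is hypothetical):

```python
# Sketch: a matrix is orthonormal iff M * M^T is the identity (within tolerance).
# Equivalently: every row has unit length and all rows are mutually perpendicular.

def is_orthonormal(m, eps=1e-6):
    """m: row-major 3x3 (e.g. the top-left block of a modelview matrix)."""
    for i in range(3):
        for j in range(3):
            dot = sum(m[i][k] * m[j][k] for k in range(3))  # row_i . row_j
            expected = 1.0 if i == j else 0.0
            if abs(dot - expected) > eps:
                return False
    return True
```

A pure rotation passes this test; any matrix containing scale fails, since at least one row is no longer unit length.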

Orthonormal means that all axis vectors are perpendicular to one another, and that each has a length equal to 1.

This generally means that you don’t perform any scaling (so their lengths remain the same).

Perpendicularity is harder to ensure, but it generally holds if you use a LookAt routine (one axis is deduced from the other two with a cross product), or if you do rotations consistently (if you rotate around one axis, the other two need to be rotated by the same angle).

If all of this is ensured, as GClements said, just use your modelview matrix.

If it is constructed solely from rotations and translations, the matrix (or rather the upper-left 3x3 submatrix) will be orthonormal.

If it also includes uniform scale (i.e. the same scale factor for all 3 axes), it will be orthogonal (all axes perpendicular) but not normal (all axes unit length). Such a matrix will preserve orthogonality but not scale. So you can still use it as a normal matrix provided that you re-normalise the normal vectors afterwards (this is why legacy OpenGL has glEnable(GL_RESCALE_NORMAL), but that doesn’t affect shaders).

If the transformation includes a combination of rotations and non-uniform scaling, then in general it won’t be orthogonal, so transforming normal vectors by it will result in them no longer being normal (perpendicular) to whatever they were perpendicular to.

In short: if you don’t use scaling when constructing the matrix, then it’s guaranteed to be orthonormal, and if you use only uniform scaling, it’s guaranteed to be orthogonal.
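The three cases above can be distinguished numerically by looking at the Gram matrix G = MᵀM. A sketch (function name and the case labels are my own, not standard terminology):

```python
# Sketch: classify a 3x3 transform via its Gram matrix G = M^T M.
#   G == I    -> orthonormal (rotations only)
#   G == s*I  -> angle-preserving with uniform scale s (renormalise normals)
#   otherwise -> neither (you need the full inverse transpose)

def classify(m, eps=1e-6):
    # Gram matrix: G[i][j] = column_i . column_j
    g = [[sum(m[k][i] * m[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    s = g[0][0]  # candidate uniform scale factor squared
    for i in range(3):
        for j in range(3):
            expected = s if i == j else 0.0
            if abs(g[i][j] - expected) > eps:
                return "neither"
    return "orthonormal" if abs(s - 1.0) < eps else "uniform scale"
```

A pure rotation classifies as "orthonormal", a uniformly scaled rotation as "uniform scale", and a non-uniform scale as "neither", matching the three cases GClements lists.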

This is an unnecessary constraint. Any combination of rotations is always orthonormal.

If you’re referring to my model matrix being a multiplication of translation, rotation, and scaling matrices: that’s what my model matrix is, and it’s what I multiplied against my view matrix. So I guess it’s not orthonormal, especially since the resulting modelview matrix doesn’t contain normalized values. I’m still kind of stumped on how to get the normal matrix; I might have to brush up a little more on this topic, my apologies. :frowning:

[QUOTE=hashbrown;1283841]If you’re referring to my model matrix being a multiplication of translation, rotation, and scaling matrices: that’s what my model matrix is, and it’s what I multiplied against my view matrix. So I guess it’s not orthonormal, especially since the resulting modelview matrix doesn’t contain normalized values.
[/QUOTE]
So the next question would be whether it contains non-uniform scaling (different scale factors along different axes). If it only contains uniform scaling, then normals will still be perpendicular but will need to be re-normalised.

[QUOTE=hashbrown;1283841]
I’m still kind of stumped on how to get the normal matrix, might have to brush up a little more on this topic, my apologies. :([/QUOTE]
The normal matrix is the inverse of the transpose of the top-left 3x3 submatrix of the model-view matrix.

Note that for an orthonormal matrix, the inverse is equal to the transpose, so the inverse of the transpose is just the original matrix.

You can compute the inverse of an arbitrary matrix using Cramer’s rule. For a 3x3 matrix, this results in:


[ m11 m22 - m12 m21   m02 m21 - m01 m22   m01 m12 - m02 m11 ]
[ m12 m20 - m10 m22   m00 m22 - m02 m20   m02 m10 - m00 m12 ]
[ m10 m21 - m11 m20   m01 m20 - m00 m21   m00 m11 - m01 m10 ]

divided by the determinant of the original matrix, which is


   m00 (m11 m22 - m12 m21)
 + m01 (m12 m20 - m10 m22)
 + m02 (m10 m21 - m11 m20)

Note that each row of the inverse is the cross product of two columns of the original matrix: row 0 is the cross product of columns 1 and 2, row 1 of columns 2 and 0, row 2 of columns 0 and 1.

The determinant is the triple product, i.e. the dot product of one column of the original matrix with the cross product of the other two columns (it doesn’t matter which column you choose, so long as the cross product uses the correct order; using rows instead of columns also works).

Also: the inverse of the transpose is equal to the transpose of the inverse, so you can swap rows with columns if you wish.
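The cross-product formulation above translates almost line for line into code. A sketch (plain Python, illustrative names): each row of the inverse is the cross product of two columns, and the determinant is the triple product.

```python
# Sketch: 3x3 inverse via cross products, as described above.
# Row i of the inverse is the cross product of the other two columns
# (in cyclic order), divided by the triple product (the determinant).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def inverse3_via_cross(m):
    c0 = [m[r][0] for r in range(3)]   # column 0
    c1 = [m[r][1] for r in range(3)]   # column 1
    c2 = [m[r][2] for r in range(3)]   # column 2
    r0 = cross(c1, c2)                 # row 0 of the (unscaled) inverse
    r1 = cross(c2, c0)                 # row 1
    r2 = cross(c0, c1)                 # row 2
    det = sum(c0[k] * r0[k] for k in range(3))   # c0 . (c1 x c2)
    return [[x / det for x in row] for row in (r0, r1, r2)]
```

Multiplying the result by the original matrix gives the identity, which is an easy self-test if you roll your own version.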

Sure. I meant that if the OP plans to rotate axis vectors around the x axis, he will need to rotate both the y and z axes by the same angle, and so on for rotations around the other axes. Otherwise the coordinate system is no longer orthogonal.

I would also suggest that the OP use an existing mathematics library.

glm is very common nowadays and avoids a lot of headaches.

I also could find this one on google.

Finally, I would like to say that the term “normal matrix” can be confusing at first. But thinking about it twice, why would one rotate the normals in a different way than the geometry? The main issue here, as said, is the scaling (as long as the coordinate system has unit vectors all perpendicular to one another), since the scaling uses the diagonal components of the matrix, which are also used for the rotation part.
One simple way to avoid this problem is simply not to perform any scaling at all, which, in general, is easy, since it is not common to scale things up and down at run time…

[QUOTE=Silence;1283847]Finally I would like to say that the term “normal matrix” could be troubling at first… One simple thing to avoid this harm is simply not to perform any scaling at all…[/QUOTE]
Well, sometimes you need objects at different scales, and applying a scale transformation may be preferable to creating an entirely new mesh which is identical to an existing mesh except for the scale factors.

Also, just to make it clear why scaling means that you can’t just transform normals by the model-view matrix:

Imagine a pair of perpendicular red and blue lines: in the original, they are perpendicular; after a non-uniform scale, they aren’t.

Two column vectors X and Y are perpendicular if and only if X^T·Y = 0. If we’re going to transform X by a matrix M, we need to transform Y by some matrix N so that (M·X)^T·(N·Y) = 0. Using the identity (A·B)^T = B^T·A^T, this becomes (X^T·M^T)·(N·Y) = 0 => X^T·(M^T·N)·Y = 0. If M^T·N is the identity matrix then this reduces to X^T·Y = 0. And M^T·N is the identity matrix if and only if N = (M^T)^-1, i.e. N is the inverse of the transpose of M.
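The derivation is easy to confirm numerically. A sketch with a non-uniform scale (2x along X only), a tangent, and its normal; the matrices and vectors are just example values:

```python
# Sketch: non-uniform scale breaks perpendicularity unless the normal is
# transformed by the inverse transpose N = (M^T)^-1.

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Non-uniform scale: 2x along X only.
M = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

tangent = [1.0, 1.0, 0.0]    # lies in the surface
normal  = [1.0, -1.0, 0.0]   # perpendicular to the tangent

# Transforming BOTH by M breaks perpendicularity: (2,1,0).(2,-1,0) = 3.
bad = dot(matvec(M, tangent), matvec(M, normal))

# Transforming the normal by N = (M^T)^-1 preserves it: (2,1,0).(0.5,-1,0) = 0.
N = [[0.5, 0.0, 0.0],        # inverse transpose of M (diagonal: just reciprocals)
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
good = dot(matvec(M, tangent), matvec(N, normal))
```

Here `bad` comes out nonzero while `good` is exactly zero, which is the whole point of the normal matrix.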

To ensure that angles remain the same, homotheties (uniform scalings) could help :slight_smile: In that case, the normals will remain correctly oriented; they will just have a non-unit length. But it is then easy to rescale them accordingly.

I agree that it is sometimes necessary to perform some scalings with different factors for each dimension. Then, of course, you need to calculate this normal matrix properly.

Thank you both, GClements and Silence. The explanations were honestly a lot better than what I find on StackOverflow. Things are a lot clearer now; I did have to write my own matrix invert and transpose functions, but that was fun.

[QUOTE=Silence;1283847]I would also suggest the OP to use an existing mathematics library. glm is very common nowadays and avoids a lot of harms…[/QUOTE]

Thanks, but I already used GLM and it’s great; I just didn’t like the feeling of calling functions like perspective or lookAt and not knowing what they were doing. I was calling a bunch of magic functions, and they worked, but I wasn’t having fun. I guess I get a kick out of re-inventing wheels. :biggrin-new:

I would however use GLM if I had to do something professionally right now.

Thanks again guys!

The difference between row-major and column-major is merely a matter of populating the array horizontally or vertically. Converting should be very straightforward and easy.

The LookAt function just creates a 4 by 4 matrix where the last row (or column) is a 4-value position and the first 3 are the mutually perpendicular vectors representing a private axis. Notice I don’t care whether we’re talking about columns or rows here, as long as we pick one and stick with it throughout. The inner 3 by 3 is just the mutually perpendicular vectors with no position. But the 4 by 4 can hold position, orientation, and scale all simultaneously. Combine it with a rotation matrix and it will rotate the position, orientation, and scale (which can be a problem if you don’t do it right: imagine scaling the vertices away from the world origin instead of the object’s center).

So, LookAt takes 3 parameters: the camera position, the place to look towards, and “Up”. Up just indicates which way is “above” the camera.

Before you really get started on this you need to know vector algebra. If you’re shaky on that, that’s where you need to start and I suggest my Vector video on my YouTube channel. It’s two hours long, but I think it’s well worth your time if you don’t have a really solid understanding of vectors already. It’s long, but I’m trying to shove an entire semester of Linear Algebra into about 3 hours of video including my Matrix video. The Matrix video actually briefly covers LookAt around 50 minutes in. Again, that’s probably well worth your time if you don’t already have a strong understanding of matrices and how they are used in game programming.

But anyway, the 3 by 3 portion of the 4 by 4 matrix holds 3 mutually perpendicular vectors that represent a “private” X, Y, and Z axis. They have to remain mutually perpendicular at all times just like any other 3D axis. This is much easier to do than it sounds, because they almost never lose their relationship to one another as long as you do the math as matrix algebra and merely multiply one matrix by another. A rotation matrix times an object’s matrix will rotate all 3 axes together simultaneously and thus preserve their relationship to one another without you having to do anything to maintain that relationship.

That “private” axis in the matrix describes a “difference” from the world axes. So, when it aligns with the world axes, it is the identity matrix. Rotate it and it describes an orientation change from the world axes. Translate it (with a 4 by 4 matrix) and the position value will describe an offset from the world origin.

This allows you to place objects into your world.

When you import your model, it’s going to come from a file. That file is going to have all the vertices in the position they were in in your modeling program when you modeled it. Generally, that’s going to have the model centered on the world origin. So, all your models will be on the world origin on top of one another if this is all you did. The object’s world matrix allows you to have a matrix that holds the position and orientation for each model. Each model gets at least one (more matrices per model is a little more advanced topic - starting off just do one per object).

There are 3 transformations you need to apply to the model’s vertices to make the miracle of 3D programming happen. The first (world/object matrix) places the vertices of the model into the scene. The second (view matrix) repositions them to simulate a camera. The third (projection matrix) projects them onto a 2D surface so that they can be drawn on your 2D computer screen (actually the back buffer, which is just a 2D array of pixel colors used to draw the screen).

You can take the object’s world matrix, the view matrix, and the projection matrix, and combine those 3 into a single 4 by 4 matrix that contains all that information. Then you can apply it to every vertex in the model one at a time (in reality the graphics card uses massive parallelism and processes hundreds or thousands of vertices simultaneously, all with the same world-view-projection matrix). Each vertex will then be positioned perfectly on the 2D screen so that you can draw triangles between the vertices and shade them in (rasterization). And your 3D world will magically appear on the 2D computer screen.

The matrix algebra is a highly efficient way of doing the whole drawing process. You’re never messing with the original vertices and risking losing their relationships between one another. In other words, you’re not changing the actual model and so you can always go back to that or reuse it for several instances of the same model all placed differently in the scene by different world matrices.

But more to the point: the LookAt matrix uses vector cross products to build 3 mutually perpendicular vectors. The camera position to the “spot to look at” forms a vector that points in the direction the camera should be facing in 3D space. Normalize that (change its length to 1) and you have your first vector. That “Up” vector is needed to do the first vector cross product and “must” point generally above the camera, although it’s not actually “trusted”. In fact, you will often see people cheat and just use Vector3.Up or something that gives a vector pointing along the “up” axis. This is technically wrong, but the next step is why the cheat works and why it’s technically wrong:

The “Up” vector and the forward vector we just created are two vectors that we can assume lie on a plane which will be our forward/up plane (I’m avoiding X, Y, and Z here since they change from environment to environment, but if Y is up and Z is forward this would be the ZY plane). But we might be pitched up (rotated around the X axis), so that “Up” might not actually be pointing directly above the camera. So, we’ll fix it later. For right now, we just use the vector cross product of these two to give us a 3rd vector that points straight out of this plane (I think you have to normalize both vectors to get the correct answer here, but I’m not certain off the top of my head).

So, now we have our “Up” vector (that we don’t trust) and a trustworthy forward vector and right-facing vector (it could point left depending on the order you do the math in, but multiply a vector by negative one and it will reverse direction in 3D or 2D). Now we have two trusted vectors that live on the forward-right (ZX, for example) plane. And we can use their vector cross product to get a vector that we know is mutually perpendicular and truly points up (above the camera), and then we can throw away our untrusted “Up” vector.

That gives you the 3 by 3 matrix. The 4th value is just the camera position we were given as an input parameter. And the 4th (w) component of each is 0 if it is a vector/direction and 1 if it is a position, in order to make all the math we will be doing later work out correctly. Voilà! There’s your 4 by 4 matrix created with LookAt().

Keep in mind that I never know exactly which of these vectors has to be normalized to make the math come out right because I generally use GLM or whatever math is built in to what I’m using. I would just normalize them all to start out with until you know which ones do not necessarily have to be normalized.

Also, the identity matrix will give you a “LookAt” matrix that points straight down the forward (Z?) axis with no pitch, yaw, or roll. So, you could just start with an identity matrix instead and re-position and re-orient it. But LookAt is convenient for various things.

The view matrix and an object’s world matrix are the same kind of thing, except every object has its own matrix and you only have one view matrix (per camera). Also (and this is really important), the view matrix is the opposite of (inverted from) an object’s world matrix. It works exactly backwards from the object’s matrix. So, when working with a view matrix, you have to invert any rotation, translation, etc. that you apply to it.

But anyway, that’s how the LookAt() function works. It merely takes 3 input parameters to build a 4 by 4 matrix that holds an orientation and position that match the input parameters by using vector cross product math.
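The axis-building steps above can be sketched in a few lines of plain Python. This is one possible convention, not GLM’s exact formulation (handedness and the forward direction vary between libraries), and the function name is my own:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def look_at_axes(eye, target, up_hint):
    """Build the three mutually perpendicular camera axes described above.
    up_hint is the untrusted "Up": it is used for one cross product and
    then replaced by a true up vector."""
    forward = normalize([t - e for t, e in zip(target, eye)])
    right = normalize(cross(forward, up_hint))  # out of the forward/up plane
    up = cross(right, forward)                  # trusted up, replaces up_hint
    return right, up, forward
```

These three vectors (plus the eye position, with w components of 0 and 1 respectively) are exactly the rows or columns of the 4 by 4 LookAt matrix, depending on your row-major/column-major convention.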

I might also mention that I am a firm believer in never using scale, and in scaling equally on all three axes if you do. I believe that if the scale is off, you should go into Blender, fix it, and re-import it rather than using mathematical scaling. But there may be times where it makes sense to use it. Still, given a choice, I pretty much never scale anything in code… ever. For one thing, when you’re working with a lot of models, it’s going to be near impossible to get everything to look right together if they are not modeled to scale in the first place. For playing around, it may not matter. But for serious endeavors you want your models modeled to scale so that they just “work” when you import them. I had a scene recently with a lot of objects that were modeled separately, and it looks “real” because everything was modeled to the same centimeter scale even though the objects were made completely separately from one another.

As a learning exercise, you can create your own LookAt() function and compare its output to the LookAt() function built into GLM, or whatever you are using. If yours works just as well, you know you’ve done it right.

Thanks for the detailed answer Beck, it’s much appreciated.

Thankfully I managed to resolve my normal-matrix doubts. I ended up taking that 3x3 submatrix, inverting it, transposing it, then sending the resulting 3x3 matrix over to the GPU. That worked for me. It took me a bit to write a proper 3x3 inverse function, though. I’m currently studying lighting and have a couple of questions, but I’ll open another thread on that topic.

Thanks again Beck.