GL view rotation question

I’m trying to debug something in a GL app, and I suspect the problem is with my understanding of the fundamentals.

The initial view orientation is looking down the negative z axis with positive x to the right and positive y up.

I’d like to rotate the view as follows: first about the X axis, then about the (original) Y axis, then about the (original) Z axis.

How should I achieve this in OpenGL?

Currently, I’m manually building up a rotation matrix in an array and then using glMultMatrixd. The problem I’m having is that for certain angles, the view doesn’t end up where I’d expect it.

Here’s the code being used to manually construct the rotation matrix. In the code below, r.x, r.y, and r.z are the rotations about X, Y, and Z in degrees; DTR is PI/180; and T is the matrix represented as a 2D array accessed as T[row][col]. T has already been initialized to identity before this code executes.


   // Fills the upper-left 3x3 of T with the composite rotation
   // Rz(r.z) * Ry(r.y) * Rx(r.x).  Angles are converted from degrees.
   double ca, sa, cb, sb, cg, sg;
   ca = cos(r.z * DTR);  sa = sin(r.z * DTR);   // about Z
   cb = cos(r.y * DTR);  sb = sin(r.y * DTR);   // about Y
   cg = cos(r.x * DTR);  sg = sin(r.x * DTR);   // about X
   T[0][0] = ca*cb;
   T[0][1] = ca*sb*sg - sa*cg;
   T[0][2] = ca*sb*cg + sa*sg;
   T[1][0] = sa*cb;
   T[1][1] = sa*sb*sg + ca*cg;
   T[1][2] = sa*sb*cg - ca*sg;
   T[2][0] = -sb;
   T[2][1] = cb*sg;
   T[2][2] = cb*cg;

   // ...

   // Copy T (accessed as T[row][col]) into a flat array in
   // OpenGL's column-major order before handing it to GL.
   double m[16];
   int c = 0;
   for (int i = 0; i < 4; i++)
       for (int j = 0; j < 4; j++)
           m[c++] = WorldToView.T[j][i];
   glMultMatrixd( m );
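
As a cross-check (this isn't from the original post; the helper names are made up), the same matrix can be built by explicitly multiplying the three elemental rotations in the order Rz * Ry * Rx, which is the composition the closed-form expressions above expand to:

   #include <math.h>

   static const double DTR = 3.14159265358979323846 / 180.0;

   /* out = a * b for 3x3 matrices stored as [row][col] */
   static void mul3x3(const double a[3][3], const double b[3][3], double out[3][3])
   {
       for (int r = 0; r < 3; r++)
           for (int c = 0; c < 3; c++) {
               out[r][c] = 0.0;
               for (int k = 0; k < 3; k++)
                   out[r][c] += a[r][k] * b[k][c];
           }
   }

   /* Builds R = Rz(rz) * Ry(ry) * Rx(rx); angles in degrees. */
   static void buildRotation(double rx, double ry, double rz, double R[3][3])
   {
       double cx = cos(rx*DTR), sx = sin(rx*DTR);
       double cy = cos(ry*DTR), sy = sin(ry*DTR);
       double cz = cos(rz*DTR), sz = sin(rz*DTR);

       double Rx[3][3] = { {1, 0, 0}, {0, cx, -sx}, {0, sx, cx} };
       double Ry[3][3] = { {cy, 0, sy}, {0, 1, 0}, {-sy, 0, cy} };
       double Rz[3][3] = { {cz, -sz, 0}, {sz, cz, 0}, {0, 0, 1} };

       double tmp[3][3];
       mul3x3(Ry, Rx, tmp);   /* Ry * Rx */
       mul3x3(Rz, tmp, R);    /* Rz * (Ry * Rx) */
   }

Comparing the output of something like this against T for a few angle triples is a quick way to confirm the matrix construction itself isn't the problem.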

What would be the recommended way to achieve the desired rotations in OpenGL, other than manually building the matrix as I’m doing now?

As an example of the problem which prompted this: if I rotate with X,Y,Z = { 90, 0, 180 } using the described rotation, it works fine. If I instead use X,Y,Z = { 90, 0, 90 }, I end up with something that doesn’t look correct; what I end up with appears to be reachable with X,Y,Z = { -90, -90, 0 }.

Note that I get equivalent results using the below code.


   glRotated(r.z,0,0,1);
   glRotated(r.y,0,1,0);
   glRotated(r.x,1,0,0);

You are already thinking about the problem area, because you carefully described your desired subsequent rotations as about the “original” axes. But this is not what your code (or at least not the glRotate sequence) is doing.

You can think of your viewpoint as carrying a set of coordinate axes with it. After the first x rotation, your local y axis has been tilted forward (or back). Subsequent “rotations about y” will be about this rotated y axis, not the original y axis.

I.e., after your x rotation, the “local” y axis is (0,1,0), but the “original” y axis, expressed in that rotated frame, is something like (0, cos(r.x), sin(r.x)) with r.x converted to radians. (No guarantees of correctness, the signs depend on your conventions!) Something like that would be plugged into your second glRotated line. You can see that the “original z” rotation would be even more complicated.

From what I’ve read, I understand that this sort of thing is why people resort to quaternions to model viewing orientations.
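
To make the local-versus-original distinction concrete, here is a rough quaternion sketch (the names and conventions are my own, not anything from this thread): accumulate the orientation in a quaternion, and choose per update whether a new rotation is applied about a world axis (multiply it on the left) or the camera's local axis (multiply it on the right).

   #include <math.h>

   typedef struct { double w, x, y, z; } Quat;

   /* Unit axis + angle in degrees -> quaternion. */
   static Quat quatFromAxisAngle(double ax, double ay, double az, double degrees)
   {
       double half = degrees * (3.14159265358979323846 / 180.0) * 0.5;
       double s = sin(half);
       Quat q = { cos(half), ax*s, ay*s, az*s };
       return q;
   }

   /* Hamilton product a*b (as a rotation: apply b first, then a). */
   static Quat quatMul(Quat a, Quat b)
   {
       Quat r;
       r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
       r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
       r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
       r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
       return r;
   }

   /* Unit quaternion -> column-major 4x4 suitable for glMultMatrixd. */
   static void quatToGLMatrix(Quat q, double m[16])
   {
       double xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
       double xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
       double wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;

       m[0] = 1 - 2*(yy+zz); m[4] = 2*(xy-wz);     m[8]  = 2*(xz+wy);     m[12] = 0;
       m[1] = 2*(xy+wz);     m[5] = 1 - 2*(xx+zz); m[9]  = 2*(yz-wx);     m[13] = 0;
       m[2] = 2*(xz-wy);     m[6] = 2*(yz+wx);     m[10] = 1 - 2*(xx+yy); m[14] = 0;
       m[3] = 0;             m[7] = 0;             m[11] = 0;             m[15] = 1;
   }

With that, something like orientation = quatMul(quatFromAxisAngle(0,1,0, dy), orientation) keeps the rotation about the original y axis no matter what came before, while putting the new factor on the right instead rotates about the current local y axis.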

Thank you for the reply.

Before seeing that, though, I was manually rotating the camera around and logging the r.x, r.y, and r.z values being sent to GL. I noticed that they seemed to be opposite to what I would expect. For example, take a rotation about X: I thought a positive rotation about X would rotate the Y axis toward the Z axis, but when I did that with the camera manually it showed up as a negative rotation about X.

The rotations I’m getting are from a transform out in the world that I want to look at. It struck me that perhaps I need to take the inverse of that transform first and then use the rotation angles.

I made this change and now it seems to be working as expected for all the cases I have.

You are right about the sense of rotation.

It is possible that you are just looking to “undo” your other operations, in which case inversion is just what you want.

Or it could be that you’ve got matrix rows and columns swapped. That gives you the transpose, which for rotations also happens to be the inverse matrix, so inverting it again gives you the rotation you really wanted all along.
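
As an aside (my own illustration, not something from the thread), that transpose/inverse relationship is easy to exploit directly: for the rotation part of a rigid transform, transposing the 3x3 block is all it takes to invert it.

   /* For a pure rotation, transpose == inverse. */
   static void transpose3x3(const double in[3][3], double out[3][3])
   {
       for (int r = 0; r < 3; r++)
           for (int c = 0; c < 3; c++)
               out[r][c] = in[c][r];
   }

It also gives a handy sanity check: a proper rotation matrix multiplied by its transpose should come out as the identity.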

The GL matrix layout is like this:

GLfloat m[16]
+--              --+         
|  m0  m4  m8  m12 |
|  m1  m5  m9  m13 |
|  m2  m6  m10 m14 |
|  m3  m7  m11 m15 |
+--              --+

The specification notes that this “column major” order is different from the “row major” order of C/C++ two-dimensional arrays. I.e., if you store a GL matrix in a two-dimensional C/C++ array, you have to think of it as m[column][row], not m[row][column]. To avoid potential errors thinking about this, I usually just use m[16] instead of m[4][4].
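
A concrete way to see the flat layout (just an illustration; the values you get depend on whatever is currently on the modelview stack): element (row, column) lives at m[column*4 + row], so for instance the translation part of the modelview matrix occupies m[12], m[13], m[14].

   GLdouble m[16];
   glGetDoublev(GL_MODELVIEW_MATRIX, m);          /* current modelview, column-major */
   GLdouble tx = m[12], ty = m[13], tz = m[14];   /* translation column */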

I think you are correct that a swap needs to occur to get it from the 2D array into the GL array, and I believe the code I’m using is doing that: it assigns from the T[][] array into the GL array using the convention you posted.

I think the problem was that I was looking at a transform and wanting the view to orient to that transform, i.e. be looking down the z axis with the x axis to the right and the y axis up.

To do that I was pulling out the euler angles from the transform and applying them to the view.

This was incorrect, though it worked for some cases. It appears to me that the fix was to just invert the transform first and then pull out the rotation.

When I made that change it seemed to fix it and it works for all the transforms I’ve thrown at it so far.

Sounds like you’ve got it well in hand. If you haven’t already seen it, you might be interested in looking at gluLookAt just for comparison purposes.
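
For comparison, a typical gluLookAt call looks roughly like this (the eye and target values here are placeholders): it builds the whole view transform from a camera position, a point to look at, and an up vector, so there are no Euler angles to extract or invert.

   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
   gluLookAt( 0.0, 0.0, 10.0,    /* eye position */
              0.0, 0.0,  0.0,    /* point to look at */
              0.0, 1.0,  0.0 );  /* up vector */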