Rotate Monitor

I can’t seem to find any info on the situation where you rotate the monitor, say, 90 degrees. You would obviously have to rotate the viewport by a corresponding 90 degrees. How do I do this?

There are a number of ways to deal with this. You could put this rotation in the viewing transform (just change your “up” vector). Or, after rasterizing the framebuffer, you could rotate the image (for a 90-degree rotation, a transpose followed by a flip).
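For example, here is a minimal sketch of the up-vector approach with gluLookAt, assuming an existing GL context; the eye/center values are placeholders:

[code]
/* Sketch: rotate the view 90 degrees by changing the "up" vector. */
#include <GL/gl.h>
#include <GL/glu.h>

void set_view(int monitor_rotated)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    if (monitor_rotated)
        gluLookAt(0.0, 0.0, 5.0,   /* eye    */
                  0.0, 0.0, 0.0,   /* center */
                  1.0, 0.0, 0.0);  /* up is now +X: the view rolls 90 degrees */
    else
        gluLookAt(0.0, 0.0, 5.0,
                  0.0, 0.0, 0.0,
                  0.0, 1.0, 0.0);  /* conventional +Y up */
}
[/code]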

Food for thought: think about whether you want to resize windows when the monitor is rotated (consider the non-square window case), and what you’re going to do in the case of non-square pixels. Unless you have square windows and square pixels, you’ll probably also need to change the portion of the scene you see through the window and/or the resolution of that window (the projection and viewport transforms, respectively) to account for the monitor rotation, so that you don’t see distortion (squeezing/stretching) in the view through your window.
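As a rough sketch of the projection/viewport side, assuming square pixels and a window size reported in pixels after the rotation (the field-of-view and clip-plane values are placeholders):

[code]
/* Sketch: keep the viewport and the projection's aspect ratio in sync
 * with the window so the scene isn't squeezed or stretched. */
#include <GL/gl.h>
#include <GL/glu.h>

void reshape(int win_w, int win_h)   /* size reported after rotation */
{
    glViewport(0, 0, win_w, win_h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* Aspect follows the window: a 90-degree rotation that turns a
     * 1600x900 window into 900x1600 changes the aspect accordingly. */
    gluPerspective(60.0, (double)win_w / (double)win_h, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}
[/code]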

Thanks. For anyone who is interested, this thread also has a link that talks about “up” vectors and how to play around with views.

https://www.opengl.org/discussion_boards/showthread.php/178047-about-gluLookAt-function-and-how-to-rotate-the-camera

From the link in the previous post, can someone explain this statement to me?

“Note that these steps correspond to the order in which you specify the desired transformations in your program, not necessarily the order in which the relevant mathematical operations are performed on an object’s vertices. The viewing transformations must precede the modeling transformations in your code, but you can specify the projection and viewport transformations at any point before drawing occurs. Figure 3-2 shows the order in which these operations occur on your computer.”

This is in relation to figures 3-1 and 3-2.

Sure. Here’s OpenGL’s transformation ordering:

P * V * M * v_object = v_clip

where:

P = Projection transform
V = Viewing transform
M = Modeling transform
V * M = MODELVIEW transform

Let’s suppose you have several modeling transforms needed to position an object in your scene. That is:

P * V * (M_3 * M_2 * M_1) * v_object = v_clip

As you can see, the order in which the transforms are conceptually applied to the object is M_1, M_2, M_3, V, P.

However, if you’re using legacy OpenGL (which has its own built-in MODELVIEW and PROJECTION transform state) to build the transforms, the order in which you specify the transforms that get multiplied onto the MODELVIEW matrix is: V, M_3, M_2, M_1. (Note: the PROJECTION transform P has its own separate state.)

As you can see, the order of the component MODELVIEW transforms is reversed.
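For instance, a hedged sketch of that specification order inside a hypothetical display function; drawObject() is a made-up draw call, and the particular translate/rotate/scale calls are just stand-ins for M_3, M_2, M_1:

[code]
/* The calls below build V * M_3 * M_2 * M_1 on the MODELVIEW stack;
 * conceptually M_1 hits the vertices first and V last. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0, 0, 5,  0, 0, 0,  0, 1, 0);  /* V   (specified first)           */
glTranslatef(2.0f, 0.0f, 0.0f);          /* M_3                             */
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);      /* M_2                             */
glScalef(0.5f, 0.5f, 0.5f);              /* M_1 (specified last,            */
                                         /*      applied to vertices first) */
drawObject();                            /* hypothetical draw call          */
[/code]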

This went a bit over my head, to be honest. I obviously don’t have enough pieces of the puzzle to understand what you’re saying. Why, for example, did you mention legacy OpenGL? Is the article I mentioned talking about an old version of OpenGL?

v_object is the 4-dimensional homogeneous vector of one of the points in our object? Why is it the last term in your example when, in the article, everything starts with it, as per Figure 3-2?

Currently my understanding of that statement is basically that the order in which you specify the various transformation matrices is different from the order in which OpenGL sends commands to the video card. Hopefully at least this last statement is correct and on the right track, although I don’t yet understand the importance or implications of it.

The linked forum post and the link to the Red Book in that post both use legacy OpenGL (meaning: features that don’t exist in the 3.2+ core profile). Modern OpenGL doesn’t have glMatrixMode() etc.; the application constructs the matrices itself and sends them to the shaders (typically as uniform variables).
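As a rough illustration of the modern style, assuming a GL 2.0+ context (with an extension loader where required); the program object, the uniform name u_mvp, and the mvp array are all assumptions for this sketch:

[code]
/* prog is an assumed linked program object whose vertex shader declares
 * `uniform mat4 u_mvp;` and does `gl_Position = u_mvp * a_position;`.
 * mvp holds P * V * M in column-major order, computed by the
 * application (e.g. with a matrix library such as cglm). */
void upload_mvp(GLuint prog, const GLfloat mvp[16])
{
    GLint loc = glGetUniformLocation(prog, "u_mvp");
    glUseProgram(prog);
    glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);
}
[/code]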

The legacy matrix functions (glRotate(), glTranslate(), glMultMatrix(), etc.) multiply the current transformation matrix (CTM) by a relative transformation, with the CTM on the left and the relative transformation on the right.

If you think in terms of starting with the vertex coordinates in object space and applying a sequence of transformations to those vertices, the right-most transformation (corresponding to the last OpenGL command) is applied first while the left-most transformation (corresponding to the first OpenGL command) is applied last.
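A small sketch to make that concrete; drawSquare() is a hypothetical draw call:

[code]
/* Each call right-multiplies the CTM, so the last call before drawing
 * is the first transform the vertices see. */
glLoadIdentity();
glTranslatef(3.0f, 0.0f, 0.0f);      /* applied to the vertices second */
glRotatef(90.0f, 0.0f, 0.0f, 1.0f);  /* applied to the vertices first  */
drawSquare();
/* Net effect: the square rotates about its own origin, then the rotated
 * result moves 3 units along +X. Swapping the two calls would instead
 * swing the square around the world origin. */
[/code]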

Getting back into it after some time off…

Still trying to figure out all the earlier stuff, but also wondering:

What happens with rounding? If you create an object which you then start translating, how does the object not distort over time as you move it around? Or does every operation basically start fresh from the object’s own coordinate origin and its latest position?

As for squareness of pixels, I think I won’t worry about that right now. Maybe later; for the moment I’m dealing with square pixels only.

[QUOTE=Atomic_Sheep;1289572]What happens with rounding? … how does the object not distort over time as you move it around?[/QUOTE]
If you accumulate deltas over time, you accumulate rounding errors.

But if you’re just moving something, you’d accumulate the translation vector and generate the matrix from the vector each frame. So while the position might not be exactly correct, you won’t get distortion. In fact, you shouldn’t get distortion from accumulating translation matrices either, because all of the other components will be exactly 0 or 1, and the calculations for those components won’t have any rounding error.
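A minimal sketch of that pattern, with illustrative names; only the vector accumulates, and the matrix is rebuilt from identity every frame:

[code]
/* No rounding error builds up inside the matrix itself, because it is
 * regenerated from the accumulated position each frame. */
static float pos[3] = {0.0f, 0.0f, 0.0f};

void update_and_draw(float dx, float dy, float dz)
{
    pos[0] += dx;  pos[1] += dy;  pos[2] += dz;

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                          /* start fresh each frame */
    glTranslatef(pos[0], pos[1], pos[2]);
    drawObject();                              /* hypothetical draw call */
}
[/code]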

If you’re accumulating matrices (e.g. flight control model), they can “decay”, meaning that the axes will deviate from being perpendicular and of unit length. With double-precision values, this takes a very long time to become an issue. With single-precision it happens more quickly. In the era of 8-bit and 16-bit microcomputers, when the calculations were often done using fixed-point arithmetic, lookup tables, and logarithms for multiplication/division, it would happen very quickly, and the matrices would need to be renormalised frequently.
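For reference, a sketch of one common renormalisation scheme (Gram-Schmidt on the basis vectors); this is one way to do it, not the only one:

[code]
/* x, y, z are the three axis columns of a decayed rotation matrix. */
#include <math.h>

static float dot3(const float a[3], const float b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

static void normalize3(float v[3])
{
    float len = sqrtf(dot3(v, v));
    v[0] /= len;  v[1] /= len;  v[2] /= len;
}

void renormalize(float x[3], float y[3], float z[3])
{
    normalize3(x);
    /* remove the component of y parallel to x, then rescale */
    float d = dot3(x, y);
    y[0] -= d * x[0];  y[1] -= d * x[1];  y[2] -= d * x[2];
    normalize3(y);
    /* rebuild z as the cross product so the basis is exactly orthogonal */
    z[0] = x[1]*y[2] - x[2]*y[1];
    z[1] = x[2]*y[0] - x[0]*y[2];
    z[2] = x[0]*y[1] - x[1]*y[0];
}
[/code]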

[QUOTE=GClements;1289576]But if you’re just moving something, you’d accumulate the translation vector and generate the matrix from the vector each frame. So while the position might not be exactly correct, you won’t get distortion.[/QUOTE]

I think I follow. My needs are quite simple, so it looks like I can get away with just accumulating the translation vector.

P.S. I had a think about the earlier stuff… I think I’ll go with the fixed-pipeline method, because my application is very simple and I don’t really see the point of learning the new methods for my intended application. The only question I have is whether the fixed pipeline has anti-aliasing, or is that a newer feature that can only be implemented with the programmable pipeline?

Both alpha-based anti-aliasing (GL_POLYGON_SMOOTH) and MSAA can be used with either the fixed-function pipeline or shaders.
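For example, a sketch assuming the fixed pipeline; with GLUT, a multisampled framebuffer would be requested at window creation (e.g. via GLUT_MULTISAMPLE in glutInitDisplayMode()):

[code]
/* MSAA: requires a multisampled framebuffer from the window system. */
glEnable(GL_MULTISAMPLE);

/* Alpha-based polygon smoothing instead needs blending; the classic
 * Red Book recipe also expects polygons sorted front to back. */
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);
[/code]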

Thanks. Getting back into it again after the new year break.