Projection to a non-perpendicular view plane

I’m working on a system that requires me to orient the display screen at an arbitrary angle to the user, but I need to create a display that is geometrically correct from the viewer’s standpoint. The easiest way to think of this is to imagine a wall with a window fixed in it, allowing the user to look into the room on the other side. As the user changes position (i.e., moves about on their side of the wall), the window is no longer perpendicular to the user’s view. However, the scene on the other side of the window retains its correct geometric appearance because it is simply transmitted through the window, regardless of the orientation of the glass to the user.

Moving to OpenGL (well, 3D graphics in general), the analogous case would be for the monitor to be the window into the room, with its contents as our model. By tracking the user’s head/eye position relative to the monitor, we can calculate the orientation and position of the monitor relative to the user. However, the standard methods of creating the projection matrix all assume a viewing device perpendicular to the viewer. glFrustum gives you a little leeway in creating the projection matrix by allowing off-center projection windows, but it does not give anywhere near the flexibility required to create a non-perpendicular view plane.

I’m not looking for a quick cheat to get this running. One simple, though somewhat backwards, solution is to render a view perpendicular to the user into a pbuffer and then apply it as a texture to a suitable object. The suitable object in this case would be an array of vertices corresponding to each pixel on the screen, rotated to match the location of the screen and then re-projected onto the pbuffer viewport to get the appropriate (x, y) texture coordinates. The problem here is that each pixel value is then interpolated through texture filtering, rather than being calculated directly from the scene by the proper projection matrix in the first place. The artifacts from this process are especially evident in a stereoscopic image, which is what I am trying to create.

So, I’m grinding my way through Foley et al. to work out how to use an arbitrary view plane, but I figured that someone else may know a source where this has been tackled in all its gory detail. If you are familiar with the formulation, or have a reference, I’d very much appreciate the help.

Thanks,
XT

This is easy. Your difficulty (and it is shared by MANY) is that you assume the view vector cannot be perpendicular to the viewing plane. However, regardless of where the eye is w.r.t. the window on the wall, there is always a line towards the wall that is perpendicular to the imaging plane, even if it does not fall within the window. Thinking about the problem this way, the view vector is that line, and the frustum is an asymmetric frustum relative to that line (the line intersecting at (0, 0) on the near clip plane). The only thing that starts to go wrong is z fog, but that is because it is wrong by design. You may want to play around with the fog depending on the projected fog radius, or use some kind of radial fog if you need high-quality fog.
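
If you want to try the radial fog route, here is a rough sketch using the fog-coordinate path (GL 1.4 or the EXT_fog_coord extension; the helper names are mine, not anything standard):

#include <math.h>
#include <GL/gl.h>   /* glFogCoordf needs GL 1.4 or EXT_fog_coord */

/* Take the fog distance from an explicitly supplied coordinate
 * instead of the fragment's z depth.  Call once at setup. */
void enable_radial_fog(void)
{
    glEnable(GL_FOG);
    glFogi(GL_FOG_COORD_SRC, GL_FOG_COORD);
}

/* Per vertex, before glVertex*: feed the true radial distance from
 * the eye.  'eye' and 'v' must be in the same space (my naming). */
void radial_fog_coord(const float eye[3], const float v[3])
{
    float dx = v[0] - eye[0];
    float dy = v[1] - eye[1];
    float dz = v[2] - eye[2];
    glFogCoordf(sqrtf(dx * dx + dy * dy + dz * dz));
}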

To add to dorbie’s explanation: you can use glFrustum with non-symmetric viewing volumes, like glFrustum(4, 6, -1, 2, 1, 100).
This will project the view onto the screen as seen by an observer positioned to the left of and slightly below the middle of the monitor (assuming that the viewing direction in the ‘normal’ position is the default: towards negative z, with positive x going right and positive y going up).
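
A minimal sketch of how those numbers could come from a tracked eye position - assuming (my conventions, just for illustration) that the monitor is a W x H rectangle in the z = 0 plane of its own coordinate system, centered at the origin, with the tracked eye at (ex, ey, ez) in front of it:

#include <GL/gl.h>

/* Screen-space convention (an assumption for this sketch): the
 * monitor is a W x H rectangle in the z = 0 plane, centered at the
 * origin, x right, y up; the tracked eye is at (ex, ey, ez), ez > 0. */
void apply_offaxis_frustum(double W, double H,
                           double ex, double ey, double ez,
                           double znear, double zfar)
{
    /* Similar triangles: scale the window edges, as seen from the
     * eye, back to the near plane with the ratio znear / ez. */
    double s      = znear / ez;
    double left   = (-0.5 * W - ex) * s;
    double right  = ( 0.5 * W - ex) * s;
    double bottom = (-0.5 * H - ey) * s;
    double top    = ( 0.5 * H - ey) * s;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, znear, zfar);

    /* The view axis stays perpendicular to the screen; only a
     * translation is needed to move the eye to the origin. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(-ex, -ey, -ez);
}

As a check: with W = 2, H = 3, the eye at (-5, -0.5, 1), and znear = 1, this reproduces the glFrustum(4, 6, -1, 2, 1, 100) call above.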

HTH

Jean-Marc.

Originally posted by dorbie:
This is easy. Your difficulty (and it is shared by MANY) is that you assume the view vector cannot be perpendicular to the viewing plane. However, regardless of where the eye is w.r.t. the window on the wall, there is always a line towards the wall that is perpendicular to the imaging plane, even if it does not fall within the window. Thinking about the problem this way, the view vector is that line, and the frustum is an asymmetric frustum relative to that line (the line intersecting at (0, 0) on the near clip plane). The only thing that starts to go wrong is z fog, but that is because it is wrong by design. You may want to play around with the fog depending on the projected fog radius, or use some kind of radial fog if you need high-quality fog.

I thought it would be this easy, too, but it turns out that mapping to the display device is the hard part. Let me explain a bit further why the asymmetric frustum approach isn’t working here…

The difficulty comes not in setting up the viewing vector, but in setting it up in such a way as to map it appropriately to the pixels on the physical display device. Take the simple case where you have, say, dropped below the display and are looking up at it: the (physical) pixels on the screen now present a keystoned image to the user - the pixels at the top of the monitor are closer together (measured in pixels per degree of the viewing volume) than those at the bottom - creating a physical display that is effectively no longer rectangular from the user’s standpoint, nor does it have a uniform density of display elements.

glFrustum can be used to create a certain type of asymmetric view, but not this type. The projection in all glFrustum calculations still assumes that the viewing plane (and consequently the viewing device) has a constant z value when the perspective-correction calculation is performed deeper in the pipeline and the actual pixels are rasterized. This requires the display device to be perpendicular to the user’s viewing vector, which is precisely what we don’t have.

From a theoretical standpoint, the easiest approach is to modify the rendering pipeline so that the mapping from clip space into device space is performed against a non-rectangular set of display elements… however, this is painfully inefficient and throws away all the optimizations designed into the graphics hardware. What would be ideal is to modify the shape of the viewing volume so that when it is projected onto the physical device, the effective viewing volume is once again rectangular. Essentially, this means applying the inverse of the (physical) screen rotation relative to the user to the viewing volume, which is the formulation that I’m looking for / trying to derive. A sketch of the rough shape I have in mind follows.
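
Purely hypothetical, with placeholder names (screen_tilt_deg and the axis describe the physical screen’s orientation relative to the viewer; nothing here is worked out yet):

#include <GL/gl.h>

/* Hypothetical sketch of the idea above: build an ordinary
 * asymmetric frustum, then undo the physical screen's rotation so
 * the viewing volume projects to a rectangle on the actual device.
 * All parameter names are placeholders, not a worked formulation. */
void tilted_screen_projection(double left, double right,
                              double bottom, double top,
                              double znear, double zfar,
                              double screen_tilt_deg,
                              double ax, double ay, double az,
                              double ex, double ey, double ez)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, znear, zfar);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotated(-screen_tilt_deg, ax, ay, az); /* inverse screen rotation */
    glTranslated(-ex, -ey, -ez);             /* eye to the origin */
}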

I appreciate your comments, and any other insights you might have.

Thanks,
XT

Assume that the user IS looking straight at the wall, and sees the screen out of the corner of his eye. Thus, you get something like this:

|
+
| .

  * .
    | . .
    | . .
    |<-------U
    |

Now, it should be possible to set up the projection matrix so that objects get appropriately projected to the screen based on this location, and still have a uniform z depth along the perpendicular-to-the-wall vector. I believe you do this by projecting onto the plane perpendicular to the U vector, but then offsetting which projected coordinates actually fall within the viewing window.

The plane of the display defines the view vector, not the relationship of the viewer to the screen. Once you have the view vector and the plane of projection, the eye position relative to the window defines the frustum.

It’s very simple: try to visualize it by looking at the imaginary plane of the screen, not the screen itself.

P.S.

Great ASCII art, jwatte. One other thing: this is the projection for the display; the view vector is a means to an end - it generates the correct pixels in the correct place. When you rotate your head to look at the display, keeping the eye in the same place, it does not affect what those pixels should be or the OpenGL view vector. Only the plane of the display affects the view vector w.r.t. the OpenGL eye space.

Originally posted by xtwombly:
The difficulty comes not in setting up the viewing vector, but in setting it up in such a way as to map it appropriately to the pixels on the physical display device. Take the simple case where you have, say, dropped below the display and are looking up at it: the (physical) pixels on the screen now present a keystoned image to the user - the pixels at the top of the monitor are closer together (measured in pixels per degree of the viewing volume) than those at the bottom - creating a physical display that is effectively no longer rectangular from the user’s standpoint, nor does it have a uniform density of display elements.

XT,

It looks to me that the ‘distortion’ of the display you talk about is taken care of by the viewer actually not looking perpendicularly at the screen, so you do NOT have to take that part into account. Or am I missing something?
The window that you’re drawing to is always rectangular, but depending on the position of a person looking at the screen, that rectangle is distorted. I would think that distortion complements the asymmetric glFrustum and creates a completely accurate image, as long as the actual position of the viewer corresponds to the position assumed in the projection.

Jean-Marc.

JML, Dorbie, and JWatte

Thanks for the help here - you’ve convinced me that glFrustum will do this after all. The mistake I was making was continuing with the notion that the camera viewpoint and the user’s (physical) viewpoint were one and the same… a fairly typical description of the camera, but erroneous in this instance. As you’ve pointed out, orienting the camera to be perpendicular to the monitor plane and calculating the spatial offset of the screen for the frustum gives you the proper perspective, and it is immaterial that the user rotates their own eyes to look directly at the skewed screen. When I originally tried using the frustum, it was with the appropriate offsets in the user’s (physical) eye space (e.g., where is the monitor with respect to my eye), but I failed to consider the additional rotation of the camera eye space and the subsequent displacement of the screen.
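
For anyone who digs this thread up later, here’s the whole recipe in one place - a sketch built directly from the three screen corners, under my own conventions (pa = lower-left, pb = lower-right, pc = upper-left corner of the physical screen, pe = the tracked eye, all measured in the same tracker coordinates):

#include <math.h>
#include <GL/gl.h>

typedef struct { double x, y, z; } vec3;

static vec3   sub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3   cross(vec3 a, vec3 b)
{
    vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    return r;
}
static vec3   unit(vec3 a)
{
    double l = sqrt(dot(a, a));
    vec3 r = { a.x / l, a.y / l, a.z / l };
    return r;
}

void screen_projection(vec3 pa, vec3 pb, vec3 pc, vec3 pe, double n, double f)
{
    /* Orthonormal screen basis: right, up, and outward normal. */
    vec3 vr = unit(sub(pb, pa));
    vec3 vu = unit(sub(pc, pa));
    vec3 vn = unit(cross(vr, vu));

    /* Eye-to-corner vectors, and the eye-to-screen-plane distance. */
    vec3 va = sub(pa, pe), vb = sub(pb, pe), vc = sub(pc, pe);
    double d = -dot(va, vn);

    /* Window extents scaled back onto the near plane. */
    double l = dot(vr, va) * n / d;
    double r = dot(vr, vb) * n / d;
    double b = dot(vu, va) * n / d;
    double t = dot(vu, vc) * n / d;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(l, r, b, t, n, f);

    /* Rotate tracker space into the screen basis (transpose of the
     * basis matrix, stored column-major), then move the eye to the
     * origin. */
    {
        double M[16] = {
            vr.x, vu.x, vn.x, 0.0,
            vr.y, vu.y, vn.y, 0.0,
            vr.z, vu.z, vn.z, 0.0,
            0.0,  0.0,  0.0,  1.0
        };
        glMultMatrixd(M);
        glTranslated(-pe.x, -pe.y, -pe.z);
    }
    glMatrixMode(GL_MODELVIEW);
}

This sketch piles the view rotation and translation onto the projection stack so the modelview is left free for the scene; if fixed-function lighting or fog needs correct eye-space positions, move those last two calls onto the modelview instead.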

Stereo separation is now a bit more effort - not in theory, but in practice the errors in knowing the user’s eye locations and recalculating the viewing frusta seem to be causing some difficulties. A slight rotational error in the viewpoint can be handled visually, but I can feel the eyestrain when moving far off the primary viewing axis of the display. But that’s another problem for another thread :P
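
In case it helps anyone with the same setup: the stereo side is just the same construction evaluated once per eye. A sketch assuming quad-buffered stereo and the apply_offaxis_frustum() helper sketched earlier in the thread (iod is the interocular distance, right_axis the head’s horizontal axis expressed in screen coordinates - all my own names):

#include <GL/gl.h>

extern void apply_offaxis_frustum(double W, double H,
                                  double ex, double ey, double ez,
                                  double znear, double zfar);
extern void draw_scene(void); /* placeholder for the actual model */

void draw_stereo(double W, double H, const double head[3],
                 const double right_axis[3], double iod,
                 double znear, double zfar)
{
    int eye;
    for (eye = 0; eye < 2; ++eye) {
        /* Offset the tracked head position by half the interocular
         * distance along the head's horizontal axis, per eye. */
        double s  = (eye == 0) ? -0.5 : 0.5;   /* left, then right */
        double ex = head[0] + s * iod * right_axis[0];
        double ey = head[1] + s * iod * right_axis[1];
        double ez = head[2] + s * iod * right_axis[2];

        glDrawBuffer(eye == 0 ? GL_BACK_LEFT : GL_BACK_RIGHT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        apply_offaxis_frustum(W, H, ex, ey, ez, znear, zfar);
        draw_scene();
    }
}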

Your help is much appreciated,
XT