Transformation from a unit cube to the frustum in world-space

This is a hard question to word correctly, so please forgive my terminology.

Basically, for purposes of volumetric environmental particle effects – rain, snow, patchy fog, etc. – I’m working on a way to populate just the view frustum with particles. The idea is that as particles drift out of the frustum ( be it from wind, gravity, etc. ) they will be respawned on the mirror side of the frustum to continue. So if a particle leaves through the bottom, it will be respawned at the top, and so on. This ought to allow the camera to sweep around while keeping the appearance of a continuous volume of particles.

For purposes of mathematical simplicity, it seems best to me to let the particles exist in a unit cube, which would be projected into world space to correspond to the view frustum. Collision detection with the edges of the cube would be trivial, and projection to world space ought to be simple enough too.
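
Just to illustrate what I mean by trivial collision detection: the wrap-around inside a unit cube is only a few lines. ( A sketch only; the Particle struct and wrap01 helper here are illustrative, not from my actual code. )

#include <math.h> // fmodf

struct Particle
{
	vec3 position;	// in unit-cube space, each component kept in [0,1)
	vec3 velocity;	// per-step displacement from wind, gravity, etc.
};

static float wrap01( float v )
{
	// push v back into [0,1), preserving its offset past the face it left
	v = fmodf( v, 1.0f );
	return ( v < 0.0f ) ? v + 1.0f : v;
}

void stepParticle( Particle &p )
{
	// a particle leaving any face reappears on the mirror face
	p.position.x = wrap01( p.position.x + p.velocity.x );
	p.position.y = wrap01( p.position.y + p.velocity.y );
	p.position.z = wrap01( p.position.z + p.velocity.z );
}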

This is what I’m getting when I project a ( large, for testing ) box using the projection and modelview matrices of a camera ( which the current camera is looking at ):

This is what I’m looking to get:

My proof-of-concept code below draws a blue sphere where the “rainview” camera is located, with a blue line in the direction that camera is facing. Then it draws a pink box where the camera is, using the concatenation of the inverse projection and modelview matrices. The pink box positions itself correctly and faces the right direction; it just happens to be a box, not a frustum.

void displayRainViewCamera( void )
{
        /*
                Only draw if current camera isn't the "rainview" camera
        */
        if ( CameraCollection::currentCamera()->name() != CAMERA_RAINVIEW )
        {
                
                Camera *rv = CameraCollection::cameraNamed( CAMERA_RAINVIEW );

                vec3 pos( rv->position() ), look( rv->looking());
        
                /*
                        Draw a sphere at its position, and a line going
                        in the direction the camera's facing
                */

                glPushMatrix();
                glTranslatef( pos.x, pos.y, pos.z );
                glColor3f( 0.75, 0.75, 1 );
                glutSolidSphere( 2, 10, 10 );

                        glBegin( GL_LINES );
                        
                                glVertex3f( 0,0,0 );
                                glVertex3fv( (look * 10 ).v ); 
                        
                        glEnd();

                glPopMatrix();
                
                /*
                        Make a large box. I will use a unit cube eventually, but for
                        now I'm making a box 40 units wide and tall and 20 deep,
                        with z=0 at the near plane.
                */
                float size = 20;
                vec3 cube[8] = 
                {
                        vec3( -size, -size, 0 ),
                        vec3( size, -size, 0 ),
                        vec3( size, size, 0 ),
                        vec3( -size, size, 0 ),

                        vec3( -size, -size, size ),
                        vec3( size, -size, size ),
                        vec3( size, size, size ),
                        vec3( -size, size, size )
                };
                
                /*
                        Get the modelview and projection matrices from
                        the "rainview" camera ( this is *not* what's being used by the
                        current camera ). Get their inverses, and concatenate.
                        
                        NOTE: My cameras use an infinite projection matrix for 
                        stencil shadows. The method projectionWithFarPlane() gives
                        you a projection matrix with a "fake" far plane, which
                        I've set to 500 elsewhere.
                */
                mat4 projection( rv->projectionWithFarPlane() ),
                         modelview( rv->modelview() ),
                         projectionInverse( projection.inverse() ),
                         modelviewInverse( modelview.inverse() );

                mat4 m = modelviewInverse * projectionInverse;
                
                /*
                        Transform the cube's points
                */
                for ( int i = 0; i < 8; i++ )
                {
                        cube[i] = m * cube[i];
                }
                
                /*
                        Draw it as lines
                */
                glColor3f( 1, 0.75, 0.75 );
                glBegin( GL_LINE_LOOP );
        
                        glVertex3fv( cube[0] );
                        glVertex3fv( cube[1] );
                        glVertex3fv( cube[2] );
                        glVertex3fv( cube[3] );
        
                glEnd();

                glBegin( GL_LINE_LOOP );
        
                        glVertex3fv( cube[4] );
                        glVertex3fv( cube[5] );
                        glVertex3fv( cube[6] );
                        glVertex3fv( cube[7] );
        
                glEnd();

                glBegin( GL_LINES );
        
                        glVertex3fv( cube[0] );
                        glVertex3fv( cube[4] );

                        glVertex3fv( cube[1] );
                        glVertex3fv( cube[5] );

                        glVertex3fv( cube[2] );
                        glVertex3fv( cube[6] );
        
                        glVertex3fv( cube[3] );
                        glVertex3fv( cube[7] );

                glEnd();
                
        }
}

The only thing I can think of is that you can’t just get a single matrix to do this. Instead, I’ve got to project not a box but a pyramid shape, where the front face corresponds to the screen dimensions and the rear face is scaled up, according to the field of view, over the ( farPlane - nearPlane ) distance.

Anybody have any advice for me?

EDIT:
With further thought, I’ve decided to take the approach of creating not a unit cube, but rather a frustum in eye space ( i.e., axis-aligned, with the camera looking down -z ), and then transforming that to world space using the inverse of the camera’s modelview matrix. This ought to get around the lack of a 1/w perspective transformation.
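
( For completeness: to get the actual frustum out of my first attempt, the corners would need to start as the [-1,1] NDC cube, and the inverse-projected result needs a homogeneous divide by w. A sketch of what I mean, assuming my vec4 has a w member and scalar multiplication the same way vec3 does: )

vec3 unprojectCorner( const mat4 &modelviewInverse,
                      const mat4 &projectionInverse,
                      const vec3 &ndcCorner )	// corner of the [-1,1] NDC cube
{
	vec4 p = projectionInverse * vec4( ndcCorner.x, ndcCorner.y, ndcCorner.z, 1 );
	p = p * ( 1.0f / p.w );		// the 1/w perspective divide
	p = modelviewInverse * p;	// eye space -> world space
	return vec3( p.x, p.y, p.z );
}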

It seems to me that it ought to allow for easy random population and collision detection, since it would be just a matter of similar triangles to detect when a particle leaves.
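
Roughly what I have in mind for that test, for a particle position already in eye space ( again, only a sketch; tanHFov, aspect, nearD and farD would come straight from the camera ):

#include <math.h> // fabsf

bool insideEyeSpaceFrustum( const vec3 &p,
                            float tanHFov,	// tanf( fov * 0.5f * DEG2RAD )
                            float aspect,	// width / height
                            float nearD, float farD )
{
	float d = -p.z;	// distance in front of the camera ( camera looks down -z )
	if ( d < nearD || d > farD )
		return false;

	// by similar triangles, the half-extents grow linearly with distance
	float hHeight = tanHFov * d;
	float hWidth  = hHeight * aspect;

	return fabsf( p.x ) <= hWidth && fabsf( p.y ) <= hHeight;
}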

I’m going to write up a test to see if this is viable, but still, I’m curious if there’s a simpler way…

Did I just word it poorly? I can’t believe that nobody here’s tried to solve this problem before.

So, I decided that I have to make a frustum in eye-space and transform it back to world space using the inverse of the camera’s modelview.

I’ve put together a trapezoidal shape in eye space which corresponds exactly to the frustum in world space for all valid fov values, but which fails if the aspect ratio is not 1. That seems encouraging, but I can’t figure out why it’s breaking.

Could somebody look over the following code and tell me where I’m being an idiot?

/*
	Get camera params
*/
EyeSpaceFrustum::EyeSpaceFrustum( Camera *camera )
   :_near( camera->nearPlane() ), _far( camera->farPlane() ),
    _fov( camera->FOV() ), _nearWidth( camera->screenWidth() ), 
    _nearHeight( camera->screenHeight() )
{}

<snip>

void EyeSpaceFrustum::createBox( vec4 points[8] )
{
	float aspect = _nearWidth / _nearHeight;
	float hNearWidth = 1.0 / 2.0,
	      hNearHeight = ( 1.0 / aspect ) / 2.0;
	
	
	printf( "aspect: %f
hNearWidth: %f
hNearHeight: %f
", aspect, hNearWidth, hNearHeight );
	
	/*
		Basic trig, tan(theta) = opposite / adjacent
		
		tan( theta ) = width / focalLength
		focalLength = width / tan( theta )
	*/
	
	float tanHFov = tanf( _fov * 0.5f * DEG2RAD );
	float focalLength = hNearWidth / tanHFov;
	
	float nearDistance = focalLength,
	farDistance = focalLength + (_far - _near),
	farOverNear = farDistance / nearDistance;
	
	printf( "fov: %f
", _fov );
	
	printf( "


" );
	
	points[0] = vec4( -hNearWidth, hNearHeight / aspect, -_near, 1 );
	points[1] = vec4( hNearWidth, hNearHeight / aspect, -_near, 1 );
	points[2] = vec4( hNearWidth, -hNearHeight / aspect, -_near, 1 );
	points[3] = vec4( -hNearWidth, -hNearHeight / aspect, -_near, 1 );
	
	points[4] = vec4( -hNearWidth * farOverNear, hNearHeight * farOverNear / aspect, -_far, 1 );
	points[5] = vec4( hNearWidth * farOverNear, hNearHeight * farOverNear / aspect, -_far, 1 );
	points[6] = vec4( hNearWidth * farOverNear, -hNearHeight * farOverNear / aspect, -_far, 1 );
	points[7] = vec4( -hNearWidth * farOverNear, -hNearHeight * farOverNear / aspect, -_far, 1 );
}

And to draw the frustum:

	
/*
	Draw the camera's frustum. First, create an EyeSpaceFrustum
	and hand it the rainview camera, so it can get the camera's
	properties.
*/

EyeSpaceFrustum esf( camera );

vec4 frustum[8];
esf.createBox( frustum );

/*
	Multiply the box's points by the inverse modelview,
	to project it into world space
*/
mat4 modelviewInverse( camera->modelview().inverse() );

for ( int i = 0; i < 8; i++ )
{
	frustum[i] = modelviewInverse * frustum[i];
}


/*
	Draw the frustum, with the near color as pink 
	and the far color as magenta, smooth blended.
*/
vec3 nearColor( 1, 0.5, 0.5 ),
	 farColor( 1, 0, 1 );

glShadeModel( GL_SMOOTH );

// near
glColor3fv( nearColor );
glBegin( GL_LINE_LOOP );

	glVertex4fv( frustum[0] );
	glVertex4fv( frustum[1] );
	glVertex4fv( frustum[2] );
	glVertex4fv( frustum[3] );

glEnd();

// far
glColor3fv( farColor );
glBegin( GL_LINE_LOOP );

	glVertex4fv( frustum[4] );
	glVertex4fv( frustum[5] );
	glVertex4fv( frustum[6] );
	glVertex4fv( frustum[7] );

glEnd();

// connectors
glBegin( GL_LINES );

	glColor3fv( nearColor );
	glVertex4fv( frustum[0] );
	glColor3fv( farColor );
	glVertex4fv( frustum[4] );

	glColor3fv( nearColor );
	glVertex4fv( frustum[1] );
	glColor3fv( farColor );
	glVertex4fv( frustum[5] );

	glColor3fv( nearColor );
	glVertex4fv( frustum[2] );
	glColor3fv( farColor );
	glVertex4fv( frustum[6] );

	glColor3fv( nearColor );
	glVertex4fv( frustum[3] );
	glColor3fv( farColor );
	glVertex4fv( frustum[7] );

glEnd();

glShadeModel( GL_FLAT );

EDIT: My code’s formatting got mangled something fierce. I don’t know why I can’t just post tab indented code.

Solved!

And, on the off-chance that somebody needs to do this themselves, someday:

void EyeSpaceFrustum::createBox( vec4 points[8] )
{

	/*
		Basic trig, tan(theta) = opposite / adjacent

		theta = 1/2 fov
		adjacent = near or far distance
		opposite = half the plane height ( width = height * aspect )
	*/

	float tanHFov = tanf( _fov * 0.5f * DEG2RAD );
	float aspect = _nearWidth / _nearHeight;
	
	float hNearWidth = tanHFov * _near * aspect,
	      hNearHeight = tanHFov * _near,
		  hFarWidth = tanHFov * _far * aspect,
		  hFarHeight = tanHFov * _far;
		  
	points[0] = vec4( -hNearWidth, hNearHeight, -_near, 1 );
	points[1] = vec4( hNearWidth, hNearHeight, -_near, 1 );
	points[2] = vec4( hNearWidth, -hNearHeight, -_near, 1 );
	points[3] = vec4( -hNearWidth, -hNearHeight, -_near, 1 );

	points[4] = vec4( -hFarWidth, hFarHeight, -_far, 1 );
	points[5] = vec4( hFarWidth, hFarHeight, -_far, 1 );
	points[6] = vec4( hFarWidth, -hFarHeight, -_far, 1 );
	points[7] = vec4( -hFarWidth, -hFarHeight, -_far, 1 );
}

The trouble was this: I was under the impression that the near plane rested at the camera position, with the focal length extending behind the camera. In fact, the focal point is the camera position itself, and the near plane sits in front of it. In retrospect, this makes a lot of sense, but hey, none of the books I consulted ( The Red Book, The OpenGL Superbible, and Game Programming with OpenGL ) went into sufficient detail on the actual subtleties of projection.
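
As a concrete sanity check: taking, say, a fov of 60 degrees, a near plane at 1 and the fake far plane at 500, createBox() now gives a near-plane half-height of tan( 30 deg ) * 1 ≈ 0.577 sitting one unit in front of the camera, and a far-plane half-height of tan( 30 deg ) * 500 ≈ 288.7.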