Compositing static with realtime rendering

Hello,
I'm creating an OpenGL scene where I'd like to mix static raytraced renderings with dynamic 3D meshes, as seen in adventure games such as Grim Fandango, Monkey Island 4, Dark Mirror, etc.
To do so I've rendered the background into an image using Blender and the corresponding depth values into another file. For the mesh I've used a simple sphere.
As far as I know, the logic for compositing the 2D and 3D is:

- Create a textured quad in OpenGL and render it in ortho mode with the depth test disabled.
- Copy the depth image values into the OpenGL depth buffer, with depth testing enabled and color buffer writes disabled.
- Leave ortho mode and render the sphere using the same camera view used in Blender to create the static background.

The main problem is writing the z-values into the depth buffer. I've investigated the possible methods, and here's a list of them:

- glDrawPixels: the most classic and compatible method, but painfully slow, especially on Radeon cards (in my tests). See the sketch after this list.
- WGL_ARB_buffer_region: this extension can be used for the purpose, but it is present mostly on NVIDIA cards only. Also, on an old Quadro2 I have, it doesn't work well (you see nothing on the screen, probably a buggy driver).
- WGL_ARB_pbuffer + WGL_ARB_make_current_read: with these extensions (more widespread on 3D cards) you render into a pixel buffer and then copy it back to the framebuffer. I still need to experiment with this; first I need to know how to write only the depth buffer into it.
- Writing z-values using GL_POINTS: reading this thread http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/010024.html I came across this method, which seems fast and well supported. You generate the values by rendering 3D primitives, which is better accelerated than the glDrawPixels path.
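
For reference, here is roughly what the classic glDrawPixels path looks like - a minimal sketch, assuming the grayscale image has already been converted to float depth values in [0, 1] (the names uploadDepthImage, depthData, imageW and imageH are placeholders, not from my code):

// Minimal sketch of the classic glDrawPixels path (slow, but works everywhere).
void uploadDepthImage(const GLfloat *depthData, int imageW, int imageH)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, imageW, 0.0, imageH);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // touch only the depth buffer
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_ALWAYS);                              // overwrite whatever is there

    glRasterPos2i(0, 0);
    glDrawPixels(imageW, imageH, GL_DEPTH_COMPONENT, GL_FLOAT, depthData);

    glDepthFunc(GL_LESS);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}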

I have some problems trying to recreate the same z-values as in my depth image when rendering these points.
I've created an array of points sized to the number of pixels in my image (e.g. 512 x 512), then assigned the x, y, z values for each of them:

  
int w=0;
for (int y=0; y<depthImage->h; y++)
{
   for (int x=0; x<depthImage->w; x++, w+=3)
   {
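     // 'pixel' is the grayscale depth value (0..255) read from the depth image at (x, y)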
     g_DepthPoints[w]   = (float)x;
     g_DepthPoints[w+1] = (float)y;
     g_DepthPoints[w+2] = -((float)pixel/ 255.0f);
   }
}

pixel is the color value (0 to 255) of one of the RGB channels (it's a grayscale image, so they are all identical).

Then I render everything like this:

  
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

// set 2D mode (the ortho projection goes on the projection matrix, modelview stays identity)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, 512.0, 0.0, 512.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glDisable(GL_LIGHTING);

// render background
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);
glInterleavedArrays(GL_T2F_V3F, 0, g_quadVertices);
glDrawArrays(GL_QUADS, 0, 4);
glPopClientAttrib();

// render points that write the z-values
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);

// disable color drawing
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

glEnableClientState( GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, g_DepthPoints);
glDrawArrays( GL_POINTS, 0, 512 * 512);

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// reset each matrix and set the same perspective projection used for the static render
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60, 1, 1, 100);

glEnable(GL_LIGHTING);

// move to the camera position and rotation (in my case I don't have to change anything)

// render 3d meshes

Have you any suggestions on how to solve this issue? With this code I get a depth buffer written with my values, but if I export the depth values I get a brighter image than the original one. I think this is because the positions I assign to the vertices don't follow much mathematical logic :wink:

Thanks in advance

Getting the depth buffers from two separate renderers to line up perfectly is probably not a reasonable expectation. Besides, there are better ways to do this.

Take a (potentially simplified) version of the raytraced mesh. Render it, but only as depth. Then render your background “image” (no depth writing). Then render the rest of your scene. This is 100% guaranteed to work.
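
A minimal sketch of that ordering, where renderBackgroundGeometry(), drawBackgroundQuad() and renderDynamicMeshes() stand in for your own drawing code:

// 1) Depth-only pass: draw the (simplified) background geometry, writing depth but no color.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
renderBackgroundGeometry();   // same camera as the Blender render (placeholder)
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// 2) Background pass: draw the pre-rendered image as a fullscreen quad,
//    writing color but leaving the depth buffer untouched.
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
drawBackgroundQuad();         // the textured ortho quad (placeholder)
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);

// 3) Normal pass: the dynamic meshes now depth-test against the geometry from step 1.
renderDynamicMeshes();        // the sphere, etc. (placeholder)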

And, if you want to make re-rendering faster, you can just do a one-time render of the “background” to a depth buffer and copy from it to the real depth buffer as needed. That way, even if your scene is complex (maybe it takes a second to render), you can still use it; it just costs that one second when moving from location to location.
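
One possible mechanism for that copy (my suggestion only, and it needs framebuffer-object support, which the older cards mentioned earlier in the thread may lack) is to render the background depth once into an FBO and blit it into the window's depth buffer every frame:

// One-time setup and render of the cached background depth (512 x 512 as in the post).
GLuint depthFBO, depthRBO;
glGenFramebuffers(1, &depthFBO);
glGenRenderbuffers(1, &depthRBO);
glBindRenderbuffer(GL_RENDERBUFFER, depthRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRBO);
glDrawBuffer(GL_NONE);                 // depth-only FBO, no color attachment
glReadBuffer(GL_NONE);

glClear(GL_DEPTH_BUFFER_BIT);
renderBackgroundGeometry();            // the expensive render, done once per location (placeholder)
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Every frame: copy the cached depth into the real depth buffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, depthFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 512, 512, 0, 0, 512, 512, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);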

I’d say the best way to copy a texture to depth is by using a fragment shader that writes the depth output. Since you’re already drawing a fullscreen quad, as I understand it, it could even be the same shader for both the color and the depth buffer.
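
A minimal sketch of such a shader (GLSL, given here as a C string), assuming the pre-rendered color and depth images are bound as two textures; the uniform names are made up:

const char *compositeFragmentShader =
    "uniform sampler2D backgroundColor;                                  \n"
    "uniform sampler2D backgroundDepth;                                  \n"
    "void main()                                                         \n"
    "{                                                                   \n"
    "    gl_FragColor = texture2D(backgroundColor, gl_TexCoord[0].st);   \n"
    "    // writing gl_FragDepth is what disables early z rejection      \n"
    "    gl_FragDepth = texture2D(backgroundDepth, gl_TexCoord[0].st).r; \n"
    "}                                                                   \n";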

The problem is that this method disables the early depth test. Depending on how shader-bound you are, this can make a big difference, so rendering a simplified scene into the depth buffer might really be faster than using a pre-rendered depth buffer.

I doubt he’s going to be shader bound. He’s rendering a fullscreen background quad, and then a few objects interleaved with that. He should be fine without fast Z, which he probably wouldn’t have been using anyway (you have to do a separate pass for that!).