Performance w/o aux buffers?

Here’s the scenario - I’m getting pixel and depth values from an external rendering engine (these updates are infrequent and not performance-critical). Much more frequently (ideally > 5 Hz) I’m drawing geometric objects into the scene. These objects will be at different depths and are not necessarily in front of the drawn pixels, so I can’t treat the pixels as a background.

What I don’t understand is how best to restore previously-hidden parts of my scene when redrawing the moving objects.

glGetIntegerv(GL_AUX_BUFFERS) tells me that I don’t have any AUX buffers, so right now I’m keeping a copy of my colour and depth buffers in system memory. Before each update of the moving objects I copy the buffers back in. Performance is terrible: ~1-2 sec to redraw.
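For reference, the save/restore cycle looks roughly like this (simplified; w and h are the window size, and I’m assuming an ortho projection where (0,0) is the lower-left corner):

/* After each external update: save both buffers to system memory. */
GLubyte *colorCopy = (GLubyte *)malloc(w * h * 4);
GLfloat *depthCopy = (GLfloat *)malloc(w * h * sizeof(GLfloat));
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, colorCopy);
glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depthCopy);

/* Before each redraw of the moving objects: restore both buffers. */
glRasterPos2i(0, 0);
glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, colorCopy);
glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depthCopy);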

(I can get some improvement via glScissor, but the gain can be insignificant when I’ve got two objects at opposite corners - their combined bounding box covers nearly the whole window. Would glStencil… work for this?)
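(The scissored version just clips the restore to the dirty rectangle - x, y, w, h here stand for that union bounding box in window coordinates:)

glEnable(GL_SCISSOR_TEST);
glScissor(x, y, w, h);  /* union bounding box of the moved objects */
/* ... same glDrawPixels restore as above, now clipped to the rectangle ... */
glDisable(GL_SCISSOR_TEST);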

I get the feeling that I’ve missed something basic here… suggestions?

If you are using glReadPixels and glDrawPixels, it is normal for that to be slow on most consumer cards.

It is better to glCopyTexSubImage2D your background into a texture, then draw it as a fullscreen quad. I am not sure about the depth buffer, though; you will probably need depth textures (GL_DEPTH_COMPONENT) or similar.
Even better, use render-to-texture (RTT) with pbuffers to avoid the copy.
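Something like this for the colour part (untested sketch - bgTex is a texture object you have created, texSize a power-of-two size at least as big as the window w x h):

/* One-time setup: allocate an empty RGBA texture. */
glBindTexture(GL_TEXTURE_2D, bgTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texSize, texSize, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* Whenever the background changes: copy the framebuffer into the texture. */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);

/* Each frame: restore the background with a fullscreen quad
   (assumes identity modelview/projection, so the viewport is [-1,1]^2). */
float s = (float)w / texSize, t = (float)h / texSize;
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(s,    0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(s,    t);    glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, t);    glVertex2f(-1.0f,  1.0f);
glEnd();
glDisable(GL_TEXTURE_2D);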

There have been quite a lot of discussions about these topics lately; you may find a lot in this forum or the advanced one.

Thanks! As a first step I’ve switched to using glCopyPixels with a pbuffer instead of glDrawPixels and a system-memory buffer. The pixel copy is now fast, < 10 ms, but copying the 512x512 16-bit depth map takes about 700 ms. That’s even after I disabled the depth test, which saved me ~300 ms.
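The copy itself is just this (assuming WGL_ARB_make_current_read, so the window is the DRAW drawable and the pbuffer the READ drawable; winDC, pbufDC, rc are placeholder handles):

wglMakeContextCurrentARB(winDC, pbufDC, rc);  /* draw = window, read = pbuffer */
glDisable(GL_DEPTH_TEST);
glRasterPos2i(0, 0);
glCopyPixels(0, 0, 512, 512, GL_COLOR);  /* fast: < 10 ms */
glCopyPixels(0, 0, 512, 512, GL_DEPTH);  /* slow: ~700 ms - this is the problem */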

I’m still looking into using a textured quad, but I’m not sure how to handle the depth buffer. The background scene is CT heart data, so it’s very irregular and my objects are embedded in it; I have to get the depth right at every pixel. Am I going to need one vertex per pixel to do this right? I could probably subsample, but I wouldn’t want to push that too far… Would RTT solve this issue?

So far I haven’t found suitable samples or discussions. Do you have any links or other references to suggest? This is my first project in OpenGL…

BTW - unregistered guest pip is now registered as hotte

Would the following work?

Use 2 pbuffers: p1 for the externally-rendered depth-buffered scene, p2 for the geometric objects.

Whenever I get an update from my renderer, I redraw both the color and depth buffers into p1 - this happens seldom enough that the performance doesn’t matter.

When the geometric objects need to be moved, I redraw them all into p2 with the depth test enabled and the depth mask set to 1. That way the relative depths among these objects are handled correctly.

To compose the scene, I set the depth mask to 0 on the framebuffer and copy just the color buffer from p1. Then, with the depth test enabled but the depth mask still 0, I copy p2 into the framebuffer.

My intention is that the depths of the geometric objects never alter the depth buffer in the framebuffer, but the depth test still decides whether a given pixel gets drawn. That way I wouldn’t have to re-copy the depth buffer every time.
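In GL calls the compose step would be roughly this (again assuming WGL_ARB_make_current_read; winDC, p1DC, p2DC, rc and W, H are placeholders):

/* Copy p1's color into the framebuffer; depth writes disabled throughout. */
wglMakeContextCurrentARB(winDC, p1DC, rc);
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
glRasterPos2i(0, 0);
glCopyPixels(0, 0, W, H, GL_COLOR);

/* Copy p2's color with the depth test on but the depth mask still 0. */
wglMakeContextCurrentARB(winDC, p2DC, rc);
glEnable(GL_DEPTH_TEST);
glCopyPixels(0, 0, W, H, GL_COLOR);

glDepthMask(GL_TRUE);  /* restore so later depth-buffer updates still work */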

Does this sound like a viable approach?