Is glCopyPixels affected by depth_test?

Can depth_test be used to control the output from glDrawPixels or glCopyPixels?

It didn’t seem to when I tried it, but I wasn’t sure whether it was supposed to or not…

I’m not 100% sure about this, but I believe glDrawPixels and glCopyPixels do a straight copy between system memory and video memory. Therefore, there is no depth testing (or testing of any sort).

don’t quote me on that though.

Wrong. Depending on the format, the depth is that of the current raster position (glRasterPos).

More on this then please…

So you would set your raster position before a glDrawPixels call, and then all the pixels drawn would be at that depth, with depth testing applied to them?

So a pixel generated by glDrawPixels travels through the OpenGL rendering pipeline?

Can depth_test be used to control the output from glDrawPixels or glCopyPixels?
Yes, the depth test interacts with these functions.

Many technical questions like this can be answered by carefully reading the function’s associated spec. The Blue Book is an excellent reference:

http://www.parallab.uib.no/SGI_bookshelves/SGI_Developer/books/OpenGL_RM/sgi_html/bk02.html

Thanks! I had looked over the spec, but apparently I missed the relevant line:

These pixel fragments are then treated just like the fragments generated by rasterizing points, lines, or polygons. Texture mapping, fog, and all the fragment operations are applied before the fragments are written to the frame buffer.

:wink: One sentence too late. That means texturing and the other fragment operations affect it too, but the depth is explained in the sentence just before it:
The resulting indices or RGBA colors are then converted to fragments by attaching the current raster position z coordinate and texture coordinates to each pixel, then assigning window coordinates (xr + i, yr + j), where (xr, yr) is the current raster position, and the pixel was the ith pixel in the jth row.
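To make that concrete, here is a minimal sketch in plain C (legacy fixed-function GL) of how the raster position z becomes the depth that glDrawPixels fragments are tested against; the image data, dimensions, and the chosen z value are placeholders:

    /* Minimal sketch: every fragment produced by glDrawPixels gets the depth
       of the current raster position, so the whole image is depth tested at
       that single (transformed) z value. */
    #include <GL/gl.h>

    void draw_image_at_depth(GLsizei width, GLsizei height, const GLubyte *pixels)
    {
        glEnable(GL_DEPTH_TEST);

        /* Object-space raster position; its transformed window-space z is
           what the depth test actually compares against. */
        glRasterPos3f(0.0f, 0.0f, 0.5f);

        /* All width*height fragments share the raster position depth. */
        glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }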

Thanks, however I do still have a related problem. I’m having trouble copying depths from a pBuffer to the frame buffer. I can copy the colour buffer without any trouble, but when I read back the depths, they’re all unchanged. (I’ve verified that the pBuffer actually contains depth data; it just doesn’t get copied to the frame buffer.)

Here’s the snippet where it copies the colour and then depth:

    // Draw into the window's DC, read from the pBuffer's DC.
    ret = Wgl.wglMakeContextCurrentARB(GLDC, pBufferDC, GLContext);

    // Copy colour only: depth writes off, colour writes on.
    Gl.glEnable(Gl.GL_DEPTH_TEST);
    Gl.glDepthMask(0);
    Gl.glColorMask(1, 1, 1, 1);
    Gl.glCopyPixels(0, 0, width, height, Gl.GL_COLOR);

    Gl.glFlush();

    if (doDepth)
    {
        // Copy depth only: colour writes off, depth writes on.
        Gl.glDisable(Gl.GL_DEPTH_TEST);
        Gl.glColorMask(0, 0, 0, 0);
        Gl.glDepthMask(1);
        Gl.glCopyPixels(0, 0, width, height, Gl.GL_DEPTH);
    }
    Gl.glFinish();

Do not glDisable(GL_DEPTH_TEST);
Disabling depth test disables depth writes, too.
If you want to avoid any depth rejection but still write depth, use glDepthFunc(GL_ALWAYS) and leave the depth test enabled.

Btw, this applies to primitive rendering, too, not just Copy/DrawPixels.
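Something along these lines for the depth copy (plain C sketch; width and height are assumed, and the raster position is assumed to already point at the destination corner):

    /* Keep GL_DEPTH_TEST enabled so depth writes stay enabled,
       but use GL_ALWAYS so no fragments are rejected during the copy. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_ALWAYS);   /* pass everything, still write depth */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glCopyPixels(0, 0, width, height, GL_DEPTH);

    /* Restore the usual comparison and colour writes afterwards. */
    glDepthFunc(GL_LESS);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);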

Thanks! It’s copying ok now, there’s just one more thing… The depth test isn’t behaving quite the way I expected when I copy my image from the pbuffer to the frame buffer.

When I use ReadPixels to look at the depth buffers of the frame buffer and the pBuffer, the values are what I expected. However, when I copy pixels from the pBuffer into the frame buffer, there seems to be a range problem. The pBuffer depths appear to be compressed into a narrow range around the middle depth within my framebuffer scene.

How does the depth-test comparison work? Are the depth values it’s comparing different from the values that I see from glReadPixels?

Ok, I think I’ve figured out my problem - I didn’t realize that when copying the color buffer, it just uses the current raster position for the z-value, not the associated depth buffer. (Yeah, I know, RTFM!)

But now I’m stuck. How can I efficiently copy a color buffer from a pBuffer to the frame buffer, respecting the depth values already in the frame buffer, but not overwriting them…

I guess I could set the current raster position individually for each pixel I copy, but I expect performance would be significantly degraded.

I was wondering if there’s anything I could do along the lines of generating a stencil buffer from the depth buffer, but I’m having a hard time getting my head around glStencilOp… my brain hurts.

How can I efficiently copy a color buffer from a pBuffer to the frame buffer, respecting the depth values already in the frame buffer, but not overwriting them…
Use CopyTexSubImage2D() with your Pbuffer to create a texture, then render it to the frame buffer with a quad, keeping depth writes disabled. Better still, use an RTT (render-to-texture) Pbuffer with the procedure above, and forgo the CopyTexSubImage() step.
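A rough C sketch of the first option, assuming a texture object tex already allocated with glTexImage2D at (at least) width x height texels, the pbuffer context current for the copy, and an identity transform so a unit quad covers the viewport; z is whatever constant depth you choose for the quad:

    /* Copy the pbuffer's colour buffer into the texture. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

    /* Make the window context current, then draw one screen-aligned quad.
       Depth writes stay off, so the frame buffer's depth values survive;
       note the whole quad sits at the single depth z. */
    glEnable(GL_TEXTURE_2D);
    glDepthMask(GL_FALSE);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, z);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, z);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, z);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, z);
    glEnd();
    glDepthMask(GL_TRUE);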

I’m having a bit of trouble understanding the first part of your answer (I’m definitely still a newbie - especially when it comes to textures).

I’m not certain what you mean by “render it to the framebuffer with a quad”. If it’s a 2d texture, wouldn’t the shape that I render the texture onto have to reflect the depth values from my pBuffer? Is there a way to generate the necessary shape from the pBuffer depth buffer?

The geometric models I’m drawing are 3 pipe-like shapes representing catheters. They can overlap themselves and each other, and they are to be embedded into the framebuffer scene (CT image).

I don’t draw the models directly into the framebuffer because if the depth buffer is locked, then overlap among my models is not handled (managing draw-order would be complex). If the depth buffer is enabled, then I need to restore the original framebuffer depths whenever I move the models, which is a huge performance hit.

Right now, I’ve got a work-around that manually compares depth buffer values from the pBuffer and framebuffer and calls glCopyPixels where necessary. Since reading the depth buffer seems to be MUCH faster than copying it, performance is ok at 15-20 fps on my target platform, but still an anemic 2 fps on my development system.

Also, are there any RTT docs on-line? I know nothing about it, so I’d want something introductory. (Is NeHe the best place to look?)

If the depth buffer is enabled, then I need to restore the original framebuffer depths whenever I move the models, which is a huge performance hit.

You should have said that in the first question!
What you need is a fast way to restore the contents of color and depth buffer of a static scene.
Voila, there is WGL_ARB_buffer_region exactly for that purpose.
The algorithm is simple.

  • Create a buffer region handle for color and depth.
    Start:
  • Draw your static scene into color back buffer and depth buffer ONCE!
  • Save the color and depth buffer data with the SaveBufferRegion call.
    Loop:
  • Draw your catheters with full depth test and everything you need.
  • Swap the image
  • Restore the static image with RestoreBufferRegion. (An optimized restore would only repair the region which you changed in the drawing step, make sure you have a swap_copy pixelformat then, otherwise just restore the whole window.)
  • Goto Loop.
  • On any window expose or resize message, goto start. That is, do not assume pixels underneath an overlapping window are defined.

This is fast, because the user cannot access the buffer region data (opposed to ReadPixels). Buffer data remains in the native hardware format and on the video board if there’s enough video RAM left.
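For reference, here is a rough C sketch of that flow, using the standard wglext.h typedefs for the extension entry points; the DC, window size, and the two drawing routines are placeholders:

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>

    void draw_static_scene(void);   /* placeholder: CT image, drawn once */
    void draw_catheters(void);      /* placeholder: the moving models */

    void render_loop(HDC hDC, int width, int height)
    {
        PFNWGLCREATEBUFFERREGIONARBPROC  wglCreateBufferRegionARB =
            (PFNWGLCREATEBUFFERREGIONARBPROC)wglGetProcAddress("wglCreateBufferRegionARB");
        PFNWGLSAVEBUFFERREGIONARBPROC    wglSaveBufferRegionARB =
            (PFNWGLSAVEBUFFERREGIONARBPROC)wglGetProcAddress("wglSaveBufferRegionARB");
        PFNWGLRESTOREBUFFERREGIONARBPROC wglRestoreBufferRegionARB =
            (PFNWGLRESTOREBUFFERREGIONARBPROC)wglGetProcAddress("wglRestoreBufferRegionARB");

        /* One region handle covering the back colour buffer and the depth buffer. */
        HANDLE region = wglCreateBufferRegionARB(hDC, 0,
            WGL_BACK_COLOR_BUFFER_BIT_ARB | WGL_DEPTH_BUFFER_BIT_ARB);

        draw_static_scene();                                  /* once */
        wglSaveBufferRegionARB(region, 0, 0, width, height);  /* snapshot colour + depth */

        for (;;)
        {
            draw_catheters();              /* full depth test against the saved depths */
            SwapBuffers(hDC);
            /* Put the static image (colour and depth) back for the next frame. */
            wglRestoreBufferRegionARB(region, 0, 0, width, height, 0, 0);
        }
        /* On expose/resize: wglDeleteBufferRegionARB(region) and start over. */
    }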

Extension is here:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/wgl_buffer_region.txt

(There is another one called GL_KTX_buffer_region, which does the same but is ill-defined.)

Perfect!
Thanks Relic!
:slight_smile: