Efficient Depth Buffer Render

I have used glCopyTexImage2D before with the color image. That saved reading the image back to the CPU.

I simply want to render the depth buffer image.
My plan was to draw the scene, read back the depth buffer, save it to a texture, and render it on a quad.

Is there a better way to do this other than:
render
glReadPixels(blah,blah, GL_DEPTH_COMPONENT, blah);
glTexImage2D(params);
render a poly with the downloaded depth image.

I was looking for a way to do all of this in texture memory, without needing to go down to the CPU.
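For concreteness, this is roughly the CPU round-trip I am trying to avoid (just a sketch; the texture object tex and the width/height variables are made up):

GLfloat *depth = (GLfloat *) malloc(width * height * sizeof(GLfloat));
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth);  /* depth buffer -> CPU */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_FLOAT, depth);                             /* CPU -> texture */
free(depth);
/* ...then draw a screen-sized quad textured with tex... */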

TM

You can also use glCopyTexImage2D with the GL_DEPTH_COMPONENT format.

If you have to do this a lot, it’s better to initialize a depth component texture once (you can pass NULL as the texture data to obtain an empty depth texture) and then use glCopyTexSubImage2D instead of glCopyTexImage2D, since that reuses the existing texture storage instead of reallocating it on every copy.
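Something like this, for example (just a sketch, not tested here; it assumes depth textures are available (ARB_depth_texture / OpenGL 1.4) and that width/height match the framebuffer size):

/* one-time setup: reserve an empty depth texture (NULL data = storage only) */
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* every frame: render the scene, then copy the depth buffer into the texture */
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);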

Greetz,
Nico

I had wanted to, but the function doesn’t provide an obvious way to select the depth buffer. The man page says it will read from the current GL_READ_BUFFER. How do I change that from RGB to depth and back again?

TM

If you copy into a depth-component texture, it automatically copies from the depth buffer.

However, if you are using ATI hardware, be warned that there is a bug in their drivers.

If you copy to a depth texture, it always fills the complete texture. That means that if you have a 1024x1024 texture and want to copy 1024x768 pixels into it (the whole framebuffer), it will actually copy 1024x1024 pixels to that texture.

Since the framebuffer doesn’t even contain that many pixels, this is obviously not really possible, and it slows down to less than one frame per second.
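As far as I know, one way to sidestep it is to make the depth texture exactly the size of the framebuffer, so the “whole texture” copy is the copy you wanted anyway (sketch; it assumes your hardware accepts a non-power-of-two texture, e.g. via ARB_texture_non_power_of_two):

/* allocate the texture at the framebuffer size, e.g. 1024x768 */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 768, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
/* the full-texture copy now covers exactly the framebuffer */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 1024, 768);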

Jan.

Sorry to be dense, but I simply don’t understand the mechanics of doing this.

It was said:

If you have to do this a lot, it’s better to initialize a depth component texture once (you can pass NULL as the texture data to obtain an empty depth texture) and then use glCopyTexSubImage2D instead of glCopyTexImage2D, since that reuses the existing texture storage instead of reallocating it on every copy.

How on earth do I initialize a depth component texture?

glTexImage2D does not seem to provide a place to do this. If I copy the RGB buffer, I create a texture that looks like:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

I wanted to put GL_DEPTH_COMPONENT somewhere in here, in place of one of the GL_RGBAs, but the Red Book says you can’t do that. Experimenting shows that it doesn’t work, either.

When using glCopyTexSubImage2D, I did
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
I know that copies the whole thing, but I am trying for simple success first.

I am also confused as to what to put for the format and size of the depth buffer. GL_FLOAT should be the type, right?

The end goal is to display the depth buffer on the screen so I can see it. Efficiency is not terribly important, but it should be at least somewhat interactive.

Can anyone clarify the mechanics of this for me?
I keep getting non-zero data, but it doesn’t display right.

For displaying, I was simply binding the texture, assigning texture coordinates to a white polygon that covered the screen, and drawing it.
This works fine for RGB. Is there something else I have to do if the texture represents the depth buffer? I need to be able to see it; greyscale is fine. I am worried that if the texture is known as a depth texture, it will be invisible in RGB space.
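For reference, this is roughly what my display pass looks like (sketch; depthTex is just my name for the texture that received the copy, and the matrices are set up so the quad covers the screen):

/* display pass: draw the depth texture on a screen-covering quad */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glDisable(GL_DEPTH_TEST);
glDisable(GL_LIGHTING);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, depthTex);
glColor3f(1.0f, 1.0f, 1.0f);               /* white, so the texture is not tinted */
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
glEnd();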

Thanks,

TM