Copying Depth Buffer To Texture.

So first I create my depth texture using…

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, m_width, m_height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);

then I copy a section of the depth buffer into the texture using…

glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_textureOffset_x, m_textureOffset_y, m_width, m_height);

Everything works fine while m_textureOffset_x and m_textureOffset_y are 0 (in other words the rectangle I am copying starts at 0,0). However, if they are greater than zero I seem to get garbage in my texture all along the bottom and left of the texture in the exact width/height of the offset.

Does anyone know why?
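
In case the surrounding state matters, the rest of the setup is roughly the usual boilerplate (simplified; m_depthTexture is just a made-up name for this post):

glGenTextures(1, &m_depthTexture);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

/* allocate storage only, no initial data */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, m_width, m_height,
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);

/* ... render the scene ... */

glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    m_textureOffset_x, m_textureOffset_y, m_width, m_height);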

When you specify an offset x, y in your glCopyTexSubImage2D, did you change the last two parameters of the function (the width and the height)? If not, you have to choose an offsetx and offsety that satisfy:
m_width_of_texture>(width_supplied_to_glCopySub-offsetx)

Umm, OK, so I used a bad naming convention in my calls. My m_textureOffset_x and m_textureOffset_y are actually the 5th and 6th parameters to the glCopyTexSubImage2D call, and the OpenGL spec refers to them as x and y (while the 3rd and 4th parameters are referred to as xoffset and yoffset). However, I still think I am calling it correctly.

The call is defined as:

void glCopyTexSubImage2D (GLenum target, GLint level, GLint xoffset, GLint yoffset, GLint x, GLint y, GLsizei width, GLsizei height).

So I have…

target = GL_TEXTURE_2D
level = 0 (no mipmap)
xoffset = 0 (no offset into the texture)
yoffset = 0 (no offset into the texture)
x = m_textureOffset_x (an x offset into the window coordinates)
y = m_textureOffset_y (a y offset into the window coordinates)
width = m_width (the width of the texture)
height = m_height (the height of the texture)

Since the xoffset and yoffset are both 0, I don’t need to modify my width and height.

The last two parameters specify the width and height of the texture sub-image. So as long as the width/height being copied are not larger than the width/height in the glTexImage2D call, everything should be good. In my case, the width/height in the glTexImage2D call are actually the next largest power of two of the width/height from the glCopyTexSubImage2D call.
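
In other words, the sizing is done with something along these lines (nextPowerOfTwo, copyWidth, copyHeight, screenX and screenY are just illustrative names for this post, not my exact code):

static int nextPowerOfTwo(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* texture storage is the next power of two of the copy rectangle */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT,
             nextPowerOfTwo(copyWidth), nextPowerOfTwo(copyHeight),
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);

/* xoffset/yoffset stay 0, so 0 + copyWidth never exceeds the texture width */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, screenX, screenY, copyWidth, copyHeight);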

I should probably also mention that I use the same method and calls to copy a sub-image from the color buffer to a texture, and the color texture works fine.

It is only the depth buffer that seems to have a problem with the offset in screen coordinates.

Has anyone successfully done this before? Do you have example code you used that might be slightly different that I could try?

I don’t have a sample handy but I seem to recall being able to forgo the copy altogether - just bind the depth texture for playback in the next pass.
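
If I remember right it was along these lines with EXT_framebuffer_object (untested, from memory, and the names and sizes are only placeholders):

GLuint fbo, depthTex;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 256,
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);   /* depth-only pass, no color attachment */
glReadBuffer(GL_NONE);

/* render the scene here; depth lands directly in depthTex */

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, depthTex);   /* use it in the next pass */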

Can you post an example of a glCopyTexSubImage2D call with concrete numbers and offsets different from 0, just to see if I understand it correctly? Just to be sure: the last two parameters are the dimensions of the texture sub-image, not necessarily the full texture dimensions.

OK, sure, basically here is what I am doing. Assume I have an OpenGL context of size 2000 x 2000. Now I want to use that main context for several viewports. In one of those viewports I want to draw a scene and capture the depth information to a texture. Let's assume that viewport's lower-left corner is currently at (700, 300) and that the viewport is of size 550 x 200. Then I make the following calls…

I set up the texture using…

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 256, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);

Then I draw into the viewport and then I try to copy the depth information using…

glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 700, 300, 550, 200);

So basically, the texture is 1024 x 256 in video memory and I copy a sub-image of size 550 x 200 into it, starting at the screen coordinates (700, 300).

I wish I could attach an image, but I can’t… Basically the texture I get looks something like the following…


xxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxx
ttyyyyyyyyyyyyyyyxxxx
ttyyyyyyyyyyyyyyyxxxx
ttyyyyyyyyyyyyyyyxxxx
ttyyyyyyyyyyyyyyyxxxx
tttttttttttttttttxxxx

Where the "x"s are non-initialized data (as I did not copy anything into that area, that is expected), the “y”'s are correct depth information (my scene’s depth info) and the "t"s are the garbage (it is actually a bunch of 0.0s instead of the clear value which is 1.0) I am talking about?? Why are the "t"s there?

If I understand correctly, you want to copy 550 x 200 pixels from your framebuffer to your depth texture, so why are you using a texture of 1024 x 256?
Try using a texture of 550 x 200 and set up your viewport to the same size as your texture:

glViewport(700, 300, 550, 200);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 550, 200);

If your OpenGL implementation doesn't support NPOT (non-power-of-two) textures, use POT textures.
If this doesn't help, paste your code and I will try to test it.

Abdallah DIB, it looks like PickleWorld does not want or cannot use NPOT textures; that is why he is using a 1024x256 texture.

Right now, I have no idea why it is not working. The way you are doing it looks correct to me. Maybe it is in another part of the code.
By the way, how are you checking the depth texture content? This kind of texture is not directly displayable on the framebuffer. Are you using a shader to transform it into gray levels?
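
Another quick sanity check, if you have not already done it, is to read the same rectangle straight out of the depth buffer with glReadPixels and see whether the framebuffer itself holds the values you expect (rough sketch, numbers taken from your example):

GLfloat *depths = (GLfloat *) malloc(550 * 200 * sizeof(GLfloat));
glReadPixels(700, 300, 550, 200, GL_DEPTH_COMPONENT, GL_FLOAT, depths);
/* values should be the clear value (1.0) or the scene's depth,
   never 0.0 if that region was cleared and drawn */
free(depths);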

Yes, but what I am trying to say is that if he wants to use the method I posted, he has to use NPOT textures.

Correct. We try to stick to the standard 1.1 spec whenever possible; it makes maintaining the application over multiple platforms so much easier. However, if the only way to fix this is to use NPOT textures, I can give it a try. Yet another option is to use an FBO.

However, if possible I would first like to know why my current method does not work, as I would rather not use extensions if I don’t have to.

I use a fragment shader to do a texture lookup and then display/output the depth value in the red channel of the frame buffer (and I calculate texture coords from 0 to n where n is the width of the viewport divided by the width of the texture).
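
The test shader is nothing fancy, roughly like this (simplified; depthTex is just the sampler bound to the depth texture):

/* GLSL fragment shader source, used only for the test */
static const char *depthViewFS =
    "uniform sampler2D depthTex;\n"
    "void main()\n"
    "{\n"
    "    float d = texture2D(depthTex, gl_TexCoord[0].st).r;\n"
    "    gl_FragColor = vec4(d, 0.0, 0.0, 1.0);\n"
    "}\n";

/* the quad's texture coords run from 0 to viewportSize/textureSize,
   e.g. 550.0/1024.0 horizontally and 200.0/256.0 vertically */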

A fragment shader and you’re sticking to 1.1?
An FBO and you’re sticking to 1.1?
Was 1.1 a typo?

I said I was using a fragment shader to test it, I did not say I was using a fragment shader in the application. As for the FBO, I meant it was another option (besides NPOT) if I was not going to stick to the 1.1 spec.

I am so glad you cleared that up. I was all red in the face with sheer panic and confusion. It worried me for hours, I could not get it off my mind. Round and round it went, taunting me. I cursed you for inflicting such suffering on me, pickleworld. But now I can sleep - a chance to dream, at last…
until the next time…

FWIW, on one of my recent projects I had problems when I tried to do partial depth clears (using the scissor test) - it worked fine when the scissor region had an x,y offset of 0, but became weird otherwise. I can’t remember which card/vendor I was working on.
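
The kind of thing I mean, roughly (reusing your numbers just for illustration):

glEnable(GL_SCISSOR_TEST);
glScissor(700, 300, 550, 200);   /* a non-zero x,y here is where it started misbehaving for me */
glClearDepth(1.0);
glClear(GL_DEPTH_BUFFER_BIT);
glDisable(GL_SCISSOR_TEST);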

The project schedule made me switch to another way of doing things, and I did not double-check my results, but I had the impression that partial depth buffer operations are perhaps not the most-used features, and that bugs may be lurking in drivers.

Couldn’t you sacrifice some memory and always copy the full depth buffer, and adjust your texture coordinates to access the portion that interests you?
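
Something like this, with the numbers from your example (2048 x 2048 is just the next power of two that covers the 2000 x 2000 context):

/* one big power-of-two depth texture covering the whole framebuffer */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 2048, 2048,
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* the copy always starts at (0,0), the case that already works for you */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 2000, 2000);

/* then sample only the viewport's rectangle */
float s0 = 700.0f / 2048.0f, s1 = (700.0f + 550.0f) / 2048.0f;
float t0 = 300.0f / 2048.0f, t1 = (300.0f + 200.0f) / 2048.0f;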