depth buffer as float texture

I need to confirm that I am getting the most precision out of the depth buffer.

 
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_LUMINANCE, 800, 600, 0, GL_LUMINANCE, GL_FLOAT, m_z);
glReadPixels(0, 0, 800, 600, GL_DEPTH_COMPONENT, GL_FLOAT, m_z);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_NV, 0, 0, 0, 800, 600, GL_LUMINANCE, GL_FLOAT, m_z);
 

The depth buffer is set up with 24 bits.
What I am not sure of is the precision of the copied data. Are there any extensions I need to use to get the full 24-bit precision?
I am using the float texture in a fragment shader.

That probably will reduce you to 8-bit precision.

What you should do is use a DEPTH_COMPONENT24 texture and glCopyTexSubImage().

Make sure not to have shadow compare turned on if you want to actually read the depth.
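
(In code, turning shadow compare off just means something like the following, using the rectangle target from the snippet above:)

/* GL_NONE: texture lookups return the stored depth instead of a compare result */
glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);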

Thanks -
Cass

glCopyTexSubImage() only copies the color framebuffer… or is there a way to make it copy the depth buffer?
glReadPixels() doesn’t support DEPTH_COMPONENT24.

What do you mean by “shadow compare”?

glCopyTexSubImage() does copy depth when you set the internal texture format to GL_DEPTH_COMPONENT (or GL_DEPTH_COMPONENT24).

No, when you use DEPTH_COMPONENT24, glCopyTexSubImage will copy from the z-buffer into the texture.

Read the spec for ARB_depth_texture.

Jan.
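
Roughly, the ARB_depth_texture path looks like this (the rectangle target and 800×600 size are taken from the snippets above; error checking omitted):

GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, depthTex);
/* ARB_depth_texture internal format; pass NULL, the copy below fills it */
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_DEPTH_COMPONENT24, 800, 600, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* after rendering: copy the depth buffer straight into the texture, no read-back to client memory */
glBindTexture(GL_TEXTURE_RECTANGLE_NV, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_RECTANGLE_NV, 0, 0, 0, 0, 0, 800, 600);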

ARB_depth_texture, that’s what I was looking for. I overlooked it because it is always mentioned with shadow casting, and that’s not what I need it for…
Thanks.

I don’t get any errors, but the texture still ends up with 8-bit precision in my fragment shader… :(

glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_DEPTH_COMPONENT24, 800, 600, 0, GL_DEPTH_COMPONENT, GL_FLOAT, m_z);
glCopyTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_DEPTH_COMPONENT24, 0, 0, 800, 600, 0);
 

This is from the NVSDK shadow map demo:
When creating the texture, use:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, TEX_SIZE, TEX_SIZE, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);

When copying the data out, use:
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, lightshaper.width, lightshaper.height);

Note that the texture isn’t really in float format, and you’ll have to use the third component of the texcoord (this is also why depth textures don’t support cubemaps) to get the binary result (in or not in shadow). There’s no way to get the real depth value through a depth texture, AFAIK.
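
(As an illustration of that compare behaviour, with compare mode enabled the fragment shader only ever sees the comparison result, roughly like this in GLSL:)

uniform sampler2DShadow shadowMap;

void main()
{
    /* the r texture coordinate is compared against the stored depth;
       the result is 0.0 or 1.0 (in or out of shadow), not the depth itself */
    float lit = shadow2DProj(shadowMap, gl_TexCoord[1]).r;
    gl_FragColor = vec4(lit);
}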

Hope it helps, regards.

The only way I get this to work is by doing this:

glReadPixels(0, 0, 800, 600, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, m_z);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_NV, 0, 0, 0, 800, 600, GL_RGBA, GL_UNSIGNED_BYTE, m_z);

This way I get three bytes representing an integer as GBR data in my shader, which I can reassemble into a 24-bit integer value there.
The glReadPixels conversion into GL_UNSIGNED_INT takes twice as long as when using GL_FLOAT, but I don’t know how to reassemble the float.
Is there a way to reassemble a float from 4 bytes? I guess that would be a shader question then…
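
(For what it’s worth, the integer reassembly could look roughly like this in the fragment shader; which colour channel holds the high, middle and low byte depends on the machine’s byte order, so the .rgb ordering below is only an assumption:)

#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect depthTex;

void main()
{
    /* three depth bytes read back as colour channels, each normalized to [0,1];
       assumed order: high, mid, low -- swizzle to match your platform */
    vec3 b = texture2DRect(depthTex, gl_FragCoord.xy).rgb;
    /* recombine and renormalize: 255 * (hi*65536 + mid*256 + lo) / (2^24 - 1) */
    float depth = dot(b, vec3(65536.0, 256.0, 1.0)) / 65793.0;
    gl_FragColor = vec4(depth);
}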

Just disabling shadow compare (as Cass pointed out) will give you the depth value. I discovered an issue with that, though: you have to disable linear filtering or you will get an 8-bit value. (Using nearest filtering gives you the proper high-precision values.) I guess the card (an NVIDIA 6800 in my case) doesn’t have 24-bit interpolation hardware.
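
(Put together, the texture parameters for sampling raw depth values would be something like this, again using the rectangle target from the earlier snippets:)

/* compare mode off so lookups return depth; nearest filtering so the 24-bit values are not reduced */
glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MAG_FILTER, GL_NEAREST);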

Charles

!!!GREAT!!! It works! Thanks Charles.

Of course I had linear filtering on… but it makes sense, too, just as filtering of float textures isn’t available on older hardware.

Can it work with a cubemap if shadow compare is disabled?

I would expect so. Note that you can’t do the COMPARE function with cube maps, but you should be able to create a DEPTH_COMPONENT cube map and index into it.

Let me know if you run into any problems with this.

Thanks -
Cass