Depth component texture precision problem

Hello,

I have a problem trying to implement depth peeling using a Cg shader. The issue is that I only get 8-bit precision for the texture containing the depths of the previous layer. I load it with the standard

glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, data );

The image I load has more than 8-bit precision (I obtained it by reading the depth buffer from a previous render); I verified this by building a histogram of the values.
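
For reference, this is roughly how I read the depth buffer back after the previous pass (a simplified sketch; the real code has error checks):

/* Read back the depth buffer of the previous render; width and height
   match the current viewport. Needs <stdlib.h> for malloc. */
float *data = (float *) malloc( width * height * sizeof(float) );
glReadPixels( 0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, data );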

OpenGL accepts the call without error, but in the shader that follows:

void main(
	in float4 in_color0 : COLOR0,
	in float4 in_position : WPOS,
	in float4 in_normal : TEXCOORD0,
	uniform float window_width,
	uniform float window_height,
	uniform sampler2D front_depth_map,
	uniform float3 world_eye_direction,
	out float4 out_color : COLOR
)
{
	// Epsilon for 8-bit precision
	float epsilon = 0.004;

	float2 tex_coords = float2(
		in_position.x / window_width,
		in_position.y / window_height );

	float front_depth = tex2D( front_depth_map, tex_coords ).x;

	if ( front_depth + epsilon >= in_position.z / in_position.w )
		discard;

	out_color = in_color0;

	// Encode the normal orientation in the red channel
	if ( dot( world_eye_direction, in_normal.xyz ) > 0.0 )
		out_color.r = out_color.r + 0.5;
}

the texture lookup result only has 8-bit precision.
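
In case the setup matters, this is roughly how I bind the texture and the window size from the application side (a sketch using the Cg GL runtime; "program" and "depth_tex" are just my names for the loaded CGprogram and the depth texture object):

// Application-side parameter setup for the fragment program.
CGparameter p_depth = cgGetNamedParameter( program, "front_depth_map" );
cgGLSetTextureParameter( p_depth, depth_tex );
cgGLEnableTextureParameter( p_depth );
cgGLSetParameter1f( cgGetNamedParameter( program, "window_width" ),
                    (float) width );
cgGLSetParameter1f( cgGetNamedParameter( program, "window_height" ),
                    (float) height );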

What am I doing wrong? My card is an NV 5650FXGo, supporting NV30 shaders.

Thanks in advance for any hint you can provide; I've been trying to figure out what's happening for a long time, and a deadline is approaching…

(bandoler)

P.S. Just in case you are wondering… I'm trying to implement global illumination.

I know what you mean,

I've implemented this using the ARB_depth_texture extension to make sure the precision is 24 bits.

I then use an ARB_fragment_program in which I sample the depth texture.

It works on ATI cards, but not on NVIDIA cards. I reported this bug through NVIDIA's bug-report system in Dec '03 or Jan '04, but they haven't fixed it yet.

So, use ARB_depth_texture, then test on an ATI card. I hope NVIDIA will fix this in a future driver release.
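
Roughly, the texture setup looks like this (a sketch; filtering state and error checks omitted, and depth_tex is just my name for the texture object):

/* 24-bit depth texture via ARB_depth_texture, filled by copying the
   depth buffer of the previous pass. */
glBindTexture( GL_TEXTURE_2D, depth_tex );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, width, height,
              0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );
/* ... render the previous layer ... */
glCopyTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height );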

/chris

How do you sample a depth texture in ARB_fragment_program such that you get a 16- or 24-bit depth value out of the texture? I couldn’t find anything about this in the ARB_fragment_program spec. …but it’s a big spec.

Try using GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT32 as your internal format. You might also look at the GL_NV_packed_depth_stencil extension.
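
For the glTexImage2D call above, that would be something like this (untested, but it shows the idea):

glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height,
              0, GL_DEPTH_COMPONENT, GL_FLOAT, data );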

When you have a texture with a DEPTH_COMPONENT internal format, you can specify whether it is interpreted as LUMINANCE, INTENSITY, or ALPHA by calling glTexParameteri() with pname GL_DEPTH_TEXTURE_MODE_ARB. That interpretation is used when you sample the depth texture with its GL_TEXTURE_COMPARE_MODE set to GL_NONE, and you get the full precision of the depth value.
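
For example (a sketch, assuming depth_tex is your depth texture object; GL_LUMINANCE is just one valid choice for the depth texture mode):

glBindTexture( GL_TEXTURE_2D, depth_tex );
glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_LUMINANCE );
/* Turn the shadow-compare mode off so the sampler returns raw depth
   values instead of a comparison result. */
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE );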

Thanks Eric. That was a big help. I’ve got my depth texture reading into my ARB_fp now. But now I have another confusing issue. I’m trying to calculate fragment depth (the same value that would automatically go into result.depth) so that I can compare it with my depth texture values. But the values in the depth texture appear to be non-linear, and I thought most hardware these days had a linear z-buffer. Anyone know how these depth values get computed? (I’m using a Radeon 9800pro).

I think depth values are linear for orthographic projections and non-linear for perspective projections (they fall off as 1/z). However, I'm not sure whether this behavior is configurable.
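
For a standard glFrustum/gluPerspective projection with the default glDepthRange(0, 1), the window-space depth of a point at eye-space distance d in front of the camera should work out to something like this (a sketch; the helper name is mine):

float window_depth( float d, float n, float f )
{
	/* n = near plane, f = far plane. Note the 1/d term: depth is a
	   hyperbolic function of eye-space distance, not linear. */
	return f * ( d - n ) / ( d * ( f - n ) );
}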

(bandoler)