FBO to Depth Texture Precision

Hi. I’m trying to implement z-buffer shading, in which a fragment’s normal is derived by taking the gradient via central differencing across the fragment’s corresponding texels in a depth texture. The general idea for computing the normal is:


   normal.x = depth to the right - depth to the left
   normal.y = depth above - depth below
   normal.z = 1 / 2 ^ bits of precision
   normal = normalize(normal)

Essentially, I’m computing a normal map on the fly. If I were computing this offline on an unsigned byte image, normal.z would be 1. However, I’m trying to do this in a fragment program, and I’m confused about what normal.z should be. My thought is that normal.z should be the reciprocal of the number of unique values for the given precision: for 8-bit depth this would be 1/256, for 16-bit 1/2^16, for 24-bit 1/2^24, etc. In other words, normal.z should correspond to a change of one unit of depth.

But with a 24-bit depth texture, it seems that normal.z = 1/256 is correct. Any lower value washes out normal.xy. Here’s a run with 1/256 and a 24-bit depth texture:

This result makes me doubt that I’m getting 24-bit precision out of the depth texture. Should I be getting 24-bit values when I do texture lookups?

Here’s my fragment shader that does the lookup and shading:

uniform sampler2D depth_map;
uniform float normal_z;
uniform vec2 e1;
uniform vec2 e2;
const vec3 Kd = vec3(0.8, 0.8, 0.8);
const vec3 Ka = vec3(0.2, 0.2, 0.2);
const vec3 Ks = vec3(1.0, 1.0, 1.0);
const float shininess = 90.0;

vec3 shading(vec3 src, vec3 N, vec3 L, vec3 V) {
   float n_dot_l = dot(N, L);
   vec3 H = normalize(L + V);
   vec3 specular = Ks * pow(max(dot(H, N), 0.0), shininess);
   vec3 diffuse = Kd * max(n_dot_l, 0.0);
   return (diffuse + Ka) * src + specular;
}

void main() {
   vec2 depth_coord = gl_FragCoord.xy / 512.0;
   vec4 src = vec4(1.0);
   vec3 light_vec = normalize(gl_LightSource[0].position.xyz);

   // central differences of the depth texture approximate the screen-space depth gradient
   vec3 normal;
   normal.x = texture2D(depth_map, depth_coord + e1).r -
              texture2D(depth_map, depth_coord - e1).r;
   normal.y = texture2D(depth_map, depth_coord + e2).r -
              texture2D(depth_map, depth_coord - e2).r;
   normal.z = normal_z;
   normal = normalize(normal);

   src.rgb = shading(src.rgb, normal, light_vec, light_vec);
   gl_FragColor = vec4(src.rgb, 1.0);
}

The viewport is 512x512, and I index into the depth texture using the scaled screen coordinate. The light source and eye are at the same location, and the model is white. For the central differencing, I find the neighbors of the fragment’s texel using e1 = vec2(1/texwidth, 0) and e2 = vec2(0, 1/texheight). I purposely use an orthographic projection and place the near and far clipping planes inside the model so that I have an ample number of unique depth values. The model is supposed to look faceted, since the depth gradient doesn’t change across a polygon under an orthographic projection.

My depth texture and FBO are set up like so:


   glGenTextures(1, &tex_id);
   glBindTexture(GL_TEXTURE_2D, tex_id);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, FBO_SIZE,
                FBO_SIZE, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
   glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
   // no depth comparison, so texture lookups return the raw depth value
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
   glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);


   int depth_bits;
   glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE,
                            &depth_bits);
   std::cout << "texture depth_bits: " << depth_bits << std::endl;

   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, depth_fbo);
   glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_TEXTURE_2D, tex_id, 0);
   glDrawBuffer(GL_NONE);
   glReadBuffer(GL_NONE);

   glGetIntegerv(GL_DEPTH_BITS, &depth_bits);
   std::cout << "depth_bits: " << depth_bits << std::endl;

   GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
   if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
      CJ_ALERT("Bad status: %x", (int) status);
   }

I’m getting 24 from both queries and no framebuffer error. But I don’t believe I’m getting that precision in the fragment shader. Does anyone see where I might be going wrong? How do high-precision depth textures and GLSL work together?

The Orange Book and other posts state that I need to use a sampler2DShadow for depth texture lookups. I’ve tried this and I get identical results.

Thanks for any help.

  • Chris

Yes, you should get more than 8 bits of precision in the shader.

What OS and hardware are you using?

Assuming you can use a 24-bit depth buffer, the depth values you read have a fixed-point format, so they are normalized between 0 and 1.
One thing you can do is encode the depth value read at the current pixel into an RGB texture, splitting its bits across the three channels. You would need to do that with a separate shader, then pass the encoded texture to your current shader and decode it there.

A simpler option, if you just need per-fragment normals, is to use the GLSL functions dFdx/dFdy to compute derivatives and find the tangent space at each fragment.
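
As a rough sketch of that idea (assuming an eye_pos varying that carries the eye-space position from the vertex shader, which is not part of the shaders above), the cross product of its screen-space derivatives gives a per-fragment face normal:

varying vec3 eye_pos; // eye-space position, passed from the vertex shader (assumed)

void main()
{
   // Rate of change of the eye-space position across one pixel in x and y.
   vec3 dx = dFdx(eye_pos);
   vec3 dy = dFdy(eye_pos);

   // Their cross product is perpendicular to the surface at this fragment.
   vec3 normal = normalize(cross(dx, dy));

   // Visualize the normal, remapped from [-1, 1] to [0, 1].
   gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
}

Since the derivatives are constant across each triangle, this naturally gives the faceted look described above.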

An even simpler option is to precompute per-vertex normals in your application, but I assume that is not the only purpose of your shader.

Has anyone done that successfully? I have tried various approaches, but the results are far from perfect.

Basically, once in a while there’s something that looks like a half-a-bit error. That’s only a small difference in the decoded value, but if the decoded value is multiplied by something large (e.g. to bring it back into the camera’s far-plane range), the errors become large enough to be unusable.

I wrote a shader to do this a while ago, but I don’t know whether it is affected by the “half-a-bit error”:


void main()
{
   float dist = gl_FragCoord.z;  // depth, between 0 and 1
   vec3 comp;                    // 24-bit depth value split across three 8-bit channels

   // Fractional parts at three scales: finest bits in .x, coarsest in .z.
   comp.x = fract(dist * 65536.0);
   comp.y = fract(dist * 256.0);
   comp.z = fract(dist);         // equals dist, since dist is in [0, 1)

   // Remove the bits already stored in the finer channel. Each correction
   // must use the uncorrected fract value of the finer channel, so comp.z
   // is corrected before comp.y is modified.
   comp.z -= comp.y / 256.0;
   comp.y -= comp.x / 256.0;

   // Decoded later as: comp.x/65536.0 + comp.y/256.0 + comp.z
   gl_FragColor = vec4(comp, 0.0);
}

then to decode it:


	vec3 depthRGB = texture2D(shadowMap, uv).rgb;

	float depth = depthRGB.r / 65536.0 + depthRGB.g / 256.0 + depthRGB.b;

where shadowMap is the encoded depth buffer.
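
The same packing can also be written in a more compact, vectorized sketch (the pack_depth/unpack_depth helpers are hypothetical names; the channel order matches the decode above, finest bits in .r):

vec3 pack_depth(float d)
{
   // Fractional parts at three scales; .z is just d, since d is in [0, 1).
   vec3 comp = fract(d * vec3(65536.0, 256.0, 1.0));
   // Subtract the bits already stored in the next finer channel.
   comp.yz -= comp.xy / 256.0;
   return comp;
}

float unpack_depth(vec3 comp)
{
   return dot(comp, vec3(1.0 / 65536.0, 1.0 / 256.0, 1.0));
}

Rounding to 8 bits per channel when the encoded texture is written can still introduce small carry errors, which may be where the “half-a-bit” differences mentioned above come from.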

Actually, when I used this technique I had some unresolved bias problems (I had to use a huge bias when comparing the fetched depth against another one), and I had big artifact problems where the light rays hit the mesh at grazing angles.
But I really don’t know whether that was due to the encoded depth buffer; maybe it was in the rest of the shader.

I don’t know if this is related to your problem. I had issues on a GeForce 6800 with limited (looks like 8-bit) precision when sampling depth textures in the fragment shader with texture2D (not shadow2D), with compare mode set to GL_NONE, for a depth-of-field effect. On a GeForce 8600 everything works fine and the texture is clearly sampled at more than 8-bit precision, but on the 6800 the same code produces precision artifacts.
The solution was to use GL_NEAREST for both the min and mag filters of the depth texture. With that, the texture seems to give the full 24 bits (it is a GL_DEPTH_COMPONENT24 texture).

I’ve seen that on the 5200 under OS X as well.
