Comparing Depth in Fragment Shader

I’m trying to do some depth peeling. On the first pass I write gl_FragCoord.z into the alpha channel of a texture. The texture is a float texture with 32 bits per channel.
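
Roughly, the first pass’s fragment shader does this (simplified sketch; the color varying just stands in for whatever shading I actually do):

varying vec4 color;  // placeholder for whatever the vertex shader writes

void main()
{
    // First peeling pass: stash the fragment's window-space depth in the
    // alpha channel of the 32-bit float render target.
    gl_FragColor = vec4( color.rgb, gl_FragCoord.z );
}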

Then on my subsequent passes I compare against it using the following. It’s definitely not doing what I expect. It appears to be drawing randomly, and I’m sort of lost as to what is going on.


if( gl_FragCoord.z  < texture2D(texture, gl_TexCoord[0].st).w )
            discard;

It seems that the depths are being packed very close to 1. I was under the assumption they would all be close to 0 since everything is pretty close to the camera.

This draws everything


if( gl_FragCoord.z  < 0.99 )
            discard;

This draws nothing


if( gl_FragCoord.z  < 1.0 )
            discard;

I apparently don’t quite understand how depth is handled in the fragment shader.

Any help would be appreciated.

Thanks

–Tom

Nope. Close to 1 is most likely right, and strongly suggests you’re using a perspective projection (though you didn’t state that).

gl_FragCoord.z is the depth value that’ll be written to the depth buffer (0…1).

This draws everything


if( gl_FragCoord.z  < 0.99 )
            discard;

This draws nothing

if( gl_FragCoord.z  < 1.0 )
            discard;

Ok, so 0.99 <= gl_FragCoord.z < 1.0 for everything you’re drawing.

With the standard perspective projection, most of your precision is concentrated close to the near plane, with less and less precision devoted to things farther and farther out.

Another way to read this is that the quantized depth value “steps” are clustered really, really close together near the near plane and spread out more and more the farther you get out into the scene. So most objects “out in the scene” (i.e. not really close to the near plane) will have depth values up toward 1. This is the reason why you can push your far clip out to infinity and in practice not lose that much precision. There wasn’t much precision being used out there anyway!
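
To put some quick numbers on that (crunched with the projection formula below, using n=1, f=1000):

z_eye = -2    ->  depth ~ 0.50
z_eye = -10   ->  depth ~ 0.90
z_eye = -100  ->  depth ~ 0.991
z_eye = -500  ->  depth ~ 0.999

So everything past an eye-space depth of 10 (about 99% of the near-to-far range) gets squeezed into the top 10% of depth values.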

For details, check out the Z and W rows of the perspective projection matrix. Apply them to an eye-space point, do the perspective divide, and you’ll see that this yields:

z_ndc = ( f + n + 2fn/z_eye )/( f-n )
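
(To spell out the divide: the standard projection matrix gives z_clip = -z_eye*(f+n)/(f-n) - 2fn/(f-n) and w_clip = -z_eye; dividing z_clip by w_clip gives the expression above.)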

You see that z_eye makes an appearance above as its reciprocal. That’s the kicker that gives you the clustering of precision near the near plane. z_eye is of course negative in OpenGL, with z_eye in -n…-f.

Remember z_ndc is in -1…1. To map that to the framebuffer Z value with the typical depth range of 0…1, just scale and bias: z_win = z_ndc*0.5 + 0.5. This is gl_FragCoord.z.

So to get a more linearly-varying value, invert that transform: work backwards from gl_FragCoord.z to z_eye.
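
In GLSL that inversion looks something like this (just a sketch; near and far are hypothetical uniforms holding your projection’s clip distances):

uniform float near;  // near clip distance (assumed set by the app)
uniform float far;   // far clip distance (assumed set by the app)

// Undo the projection's depth mapping: recover the (positive) eye-space
// distance from a 0..1 window-space depth, then rescale so the result runs
// linearly from 0 at the near plane to 1 at the far plane.
float linearizeDepth( float zWin )
{
    float zNdc = zWin * 2.0 - 1.0;  // 0..1 -> -1..1
    float zEye = (2.0 * near * far) / (far + near - zNdc * (far - near));
    return (zEye - near) / (far - near);
}

With n=1 and f=1000, feeding in a depth of 0.999 returns ~0.5, i.e. the midpoint of the scene, which matches the worked example below.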

By the way, one kind of interesting thing to try is to plug z_eye = -(n+f)/2 (the plane halfway between the near and far clip planes) into the above equation. With a little algebra, you can see that you get z_ndc = (f-n)/(f+n) out of it. Plug in n=1, f=1000 for instance, and you get z_ndc = 0.998 (not 0, the NDC midpoint, but 0.998). Now shift and scale NDC (-1…1) to 0…1 window coords, i.e. *0.5+0.5, and the resulting depth value is ~0.999! And that’s for a point halfway between the near and far clip planes!

So as you can see, with the standard perspective projection matrix you use up 99.9% of your precision between the near plane and the midpoint of the near and far planes. That’s usually OK though, and often what you want, due to perspective foreshortening: you can see small depth steps close to the eye a lot better than way off in the distance.

Ok, that explains some things. But regardless, I still don’t know why my comparison against the previous depth value doesn’t work. Maybe I missed something, but the values do differ at some precision, so the comparison should correctly discard pixels. Am I incorrect?

When in doubt, break out the ole math library and crunch a few numbers. Maybe even plot a graph or two to get a good lay of the land. Seeing is believing (or is it believing is seeing?).

I think this will be helpful…

http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/
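
In the same spirit as that article, here’s one way you might eyeball what’s actually in the depth texture (sketch only; depthTex, near, and far are placeholder names for your own sampler and clip-distance uniforms):

uniform sampler2D depthTex;  // texture holding the first pass's depth
uniform float near;          // near clip distance (set by the app)
uniform float far;           // far clip distance (set by the app)

void main()
{
    // Fetch the stored window-space depth and linearize it so the
    // grayscale ramp is readable instead of crammed up near white.
    float zWin = texture2D( depthTex, gl_TexCoord[0].st ).w;
    float zNdc = zWin * 2.0 - 1.0;
    float zEye = (2.0 * near * far) / (far + near - zNdc * (far - near));
    float lin  = (zEye - near) / (far - near);
    gl_FragColor = vec4( lin, lin, lin, 1.0 );  // brighter = farther
}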
