View Full Version : depth: fragment.position.z?

satchmo
08-16-2004, 10:20 AM
I'm trying to implement the Depth of Field algorithm that was presented by Thorsten Scheuermann at GDC this year, but I'm having some problems determining the depth of each fragment.
As far as I understand the ARB_fragment_program specifications, a fragment's depth is contained in fragment.position.z.

I have a scene with lots of objects spaced evenly between the near and far clip planes. Now if I do a simple

MOV result.color, fragment.position.z;

I was expecting a smoothly shaded image with white objects at the back, grey objects in the middle and black objects at the front. But the result is almost entirely white, with only those fragments right in front of the camera darkening at all (and not by much).

Now the objects are spaced from 0 units to 30 units from the camera, but the results are the same whether the far clip plane is at 30 units or 300,000! Am I right in presuming that this depth is in the range [0,1], with 0 at the near clip plane and 1 at the far clip plane?

I can only presume that I'm missing something fundamental, can someone please fill me in?!

dorbie
08-16-2004, 10:34 AM
The distribution of z can be very non-linear, depending on where you place your near & far clip planes.

http://www.cs.unc.edu/~hoff/techrep/openglz.html

http://www.cs.unc.edu/~hoff/techrep/perspective.doc

Eric Lengyel
08-16-2004, 06:15 PM
To get a linear depth, you're probably best off doing a separate transformation of your vertex position using only the modelview matrix instead of the modelview-projection product. Do this in your vertex program and store the result (just the z if that's all you need -- the one DP4 instruction below) in a texcoord output. Then use that value in your fragment program. Don't forget that your z's will be negative when they're in front of the camera when you do this.

DP4 result.texcoord[1].z, state.matrix.modelview.row[2], vertex.position;

-- Eric Lengyel

satchmo
08-17-2004, 10:30 AM
Ah of course, working out the depth in view space makes a helluva lot more sense than trying to untangle the projected depth, and it means I can specify my focal plane in GL units which is much more intuitive than depth relative to the clipping planes. Altogether it reduces computation of the blur factor to four instructions. Thanks Eric!

And thanks to dorbie for those links, they definitely cleared up a few things for me.

-satch