gl_FragCoord.z

Hello,
I would like to know in what space the value gl_FragCoord.z is defined. As far as I know, gl_FragCoord.x and y are in screen coordinates, but the .z component seems to always give values roughly between 0.9 and 1.0.
How can I compute the world-space or eye-space Z coordinate from gl_FragCoord.z?

Thanks!
Mic

From the GLSL spec:

"The fixed functionality computed depth for a fragment may be obtained by reading gl_FragCoord.z, ..."

So at the near plane this value is 0 and at the far plane it’s 1.

You can calculate the world/eye z-coordinates the same way gluUnproject does.
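For example, here is a rough sketch of the gluUnProject idea done directly in a fragment shader with the legacy GLSL built-ins; the "viewport" uniform holding the glViewport rectangle is an assumption on my part:

uniform vec4 viewport; // assumed: (x, y, width, height) as passed to glViewport

vec3 eyeSpacePosition()
{
    // window coordinates -> normalized device coordinates in [-1, 1]
    vec3 ndc;
    ndc.xy = 2.0 * (gl_FragCoord.xy - viewport.xy) / viewport.zw - 1.0;
    ndc.z  = 2.0 * gl_FragCoord.z - 1.0;   // assumes the default glDepthRange(0, 1)

    // NDC -> clip space (undo the perspective divide: clip w = 1.0 / gl_FragCoord.w),
    // then apply the inverse projection to get back to eye space
    vec4 clip = vec4(ndc, 1.0) / gl_FragCoord.w;
    return (gl_ProjectionMatrixInverse * clip).xyz;
}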

N.

And because Z is non-linear, to favour the nearer triangles, most of a scene's Z values will be in that very small range (0.9 to 1.0).
You can linearise it by encoding (z-near)/(far-near) in the Z column of the projection matrix and pre-multiplying z by w in the vertex shader: (z * w) / w = z

Thank you guys, the source code of gluUnproject seems to be the definitive reference :slight_smile:

I suppose what you actually meant was Zclip = Wclip * (2*(Zview-near)/(far-near) - 1), as OpenGL clip space ranges from -Wclip to +Wclip, or -1 to 1 after perspective division. It’s only the viewport transformation that scales this to [n, f] as given by glDepthRange(n, f).

Your formula would very likely move the near clip plane behind the camera (if far > 2 * near), which can lead to weird artifacts.

But there is another catch: Z interpolation is always linear in screen space, i.e. the difference between Z in neighboring fragments is constant, unlike perspective correct varyings which are usually linear in view space but non-linear in screen space.
Because of this, if you use a perspective projection the per-vertex Z values in screen space must be non-linear. Otherwise the interpolated Z values are wrong, and you could have a small object appear in front of a large polygon even though it’s actually behind.
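For completeness, a vertex-shader sketch of the corrected mapping above, assuming "near" and "far" uniforms for the clip planes and taking the view depth as a positive distance in front of the eye; keep the interpolation caveat just described in mind:

uniform float near;
uniform float far;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    // Zclip = Wclip * (2*(Zview - near)/(far - near) - 1), with Zview taken positive
    float eyeDist = -(gl_ModelViewMatrix * gl_Vertex).z;
    gl_Position.z = (2.0 * (eyeDist - near) / (far - near) - 1.0) * gl_Position.w;
}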

Just do this in the VS
varying float EyeVertexZ;                        // eye-space Z, passed on to the fragment shader
EyeVertexZ = (gl_ModelViewMatrix * gl_Vertex).z; // inside main(), alongside gl_Position = ftransform();

and in the FS
you can use EyeVertexZ
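For instance, the fragment shader side might look like this (the visualization at the end is just an illustrative assumption):

varying float EyeVertexZ; // interpolated eye-space Z from the vertex shader

void main(void)
{
    float dist = -EyeVertexZ;              // positive distance in front of the camera
    gl_FragColor = vec4(vec3(dist), 1.0);  // e.g. visualize it (unscaled)
}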

Thank you V-man, that’s so easy that I cannot explain how I couldn’t think of it before! Will try it immediately…

It is possible to convert a Z-buffer value back to eye-space Z with:


float Z = gl_ProjectionMatrix[3].z/(gl_FragCoord.z * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);

This is useful for deferred rendering, but if the distance is already known in a vertex shader, a varying is faster.
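For the deferred case, the same formula can be applied to a value sampled from a depth texture; a sketch (the depthTex and screenSize uniform names are assumptions):

uniform sampler2D depthTex;   // depth buffer from the geometry pass
uniform vec2 screenSize;      // framebuffer width and height in pixels

void main()
{
    float d = texture2D(depthTex, gl_FragCoord.xy / screenSize).r;
    float eyeZ = gl_ProjectionMatrix[3].z / (d * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);
    gl_FragColor = vec4(vec3(-eyeZ), 1.0); // -eyeZ is the positive distance in front of the camera
}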

In OpenGL, the gl_FragCoord.w is the reciprocal of the clip space w of the fragment, which is just the eye space z (or the negative of it, anyway).

In D3D, frag.w really is just the clip space w, and thus doesn’t need to be reciprocated for use as eye space z.
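In other words, for a perspective projection in GL you can recover it as:

float eyeZ = -1.0 / gl_FragCoord.w; // eye-space Z, negative for points in front of the camera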

Sorry for the thread revive…
I have seen this code in a few places - but I cannot seem to figure out the math behind it. Is there an article or tutorial somewhere on this?

Never mind, I figured out the math.
For reference:

Using the projection math from here to get C and D:
http://www.opengl.org/sdk/docs/man/xhtml/glFrustum.xml

we can see that to get the final depth (F) from the view depth (V) (expanding the matrix, dividing by w, and assuming the input w = 1):

F = (V*C + D) / -V

therefore to get V from F…

F = -C + (D / -V)
F + C = D / -V
(F + C) / D = 1 / -V
D / (F + C) = -V
D / (-F - C) = V

also, as F needs rescaling from 0…1 to -1…1, we substitute F*2-1 for F, which gives:

D / (F * -2 + 1 - C) = V

Which is what was shown above.
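In GLSL terms (the same as the snippet earlier in the thread), with C and D living at gl_ProjectionMatrix[2].z and gl_ProjectionMatrix[3].z in column-major order:

// inside the fragment shader's main():
float C = gl_ProjectionMatrix[2].z;              // -(far + near) / (far - near)
float D = gl_ProjectionMatrix[3].z;              // -2 * far * near / (far - near)
float V = D / (gl_FragCoord.z * -2.0 + 1.0 - C); // eye-space Z, negative in front of the camera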

Hi all!

Please tell me, is gl_FragCoord.z/gl_FragCoord.w the distance from a point to the camera?

Nope. Check the spec.

gl_FragCoord is in window-relative (screen) coordinates. .xy is the center of the fragment, in pixel units. .z is the depth value that’ll be written to the depth buffer (0…1). .w is 1/clip_space.w, where (for perspective projections only!!) clip_space.w is -eye_space.z, so: gl_FragCoord.w = -1/eye_space.z … for perspective projections only!

For orthographic projections, gl_FragCoord.w should just be 1 (i.e. 1/clip_space.w, i.e. 1/eye_space.w, where eye_space.w == 1).

For confirmation, see the last row of the perspective and orthographic projection transforms in Appendix F of the Red Book.

So for a perspective projection, you probably want -1/gl_FragCoord.w. Keep in mind this gives you eye-space Z, where values in front of the screen are negative. If you want positive, don’t negate.

Also, you said you wanted a distance from the point to the camera (eyepoint, presumably). This is not what eye_space Z is! eye_space Z is a minimum distance from the XY plane in eye-space to the point, not a radial distance from the eyepoint (0,0,0) to the point. This is the same concept as between EYE_PLANE fog coordinate mode and EYE_RADIAL fog coordinate modes, from the pre-shader days.
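To illustrate the difference, assuming a varying that carries the eye-space position down from the vertex shader (the name EyeVertexPos is an assumption, set as (gl_ModelViewMatrix * gl_Vertex).xyz):

varying vec3 EyeVertexPos; // assumed: eye-space position from the vertex shader

void main()
{
    float planarDepth = -EyeVertexPos.z;        // distance to the eye-space XY plane (== 1.0/gl_FragCoord.w here)
    float radialDist  = length(EyeVertexPos);   // straight-line distance from the eyepoint (0,0,0)
    gl_FragColor = vec4(vec3(radialDist), 1.0); // e.g. visualize the radial distance (unscaled)
}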

Thank you!!!

Then what is this: float Oz = gl_FragCoord.z/gl_FragCoord.w;

For a perspective projection…
gl_FragCoord.z/gl_FragCoord.w == (0…1 depth value of fragment) * (eye-space Z value of fragment, negated)

Math-wise, this makes no sense to me (maybe I need more caffeine this morning).

Just out of curiosity, where did you get this expression? Googling, I find these top hits:

Your wording suggests the first (oZone3D). This states the expression is “The distance between the camera and the current pixel (the z axis value)”. Sorry, I’m not buying it.

Maybe someone else can make sense of this.

Thanks to all.

And I want to ask one question.

To render out the depth with a shader, I think this is the quickest way.

smooth in vec4 ex_Color;
out vec4 out_Color;

void main(void) {
    float depth = gl_FragCoord.z;
    out_Color = vec4(depth, depth, depth, 1.0);
}

However, you mentioned above that gl_FragCoord.z is not linear.

This is very important to me, since this depth render is not for graphics but for academic computer-vision middleware.

So my question is: how do I linearize it?

Could you just modify my code to give me an answer?

thanks

See Re: Urgent: Accessing Depth Texture FBO in GLSL

thank you!

As a beginner modern GL learner, I still have some questions.

  1. Eventually, I want to store this rendered depth map of an object in an OpenCV Mat data type (as a picture).

Does this mean I must render the object to a framebuffer and then copy the memory into another data type? I don’t even need it displayed on screen. How can I do this quickly without burning CPU time? The thread you linked mentioned glBlitFramebuffer(), but I don’t know which function I should use in this case.

  2. http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/
    This example seems to involve texture calculations.
    I’m not sure whether I need a texture there; can you just calculate the linear depth from gl_FragCoord.z?

Is there any mature example I can use as a reference? That Geeks3D page only shows a piece of the shader program…

  1. I’d display (store) the near as white (1), far as black(0).

So now I’ve modified:

glDepthFunc(GL_GEQUAL);   // the depth range is reversed, so the comparison must flip too
glDepthRange(1.0f, 0.0f); // reversed: near maps to 1, far maps to 0

and I clear the depth in the display function:
glClearDepth(0.0f); // with the reversed range, "farthest" is now 0

Then my old fragment shader works correctly, except that the value is not linear.

Am I doing this right?

How would I modify the Geeks3D example to do the same?

Thank you so much.

That would be the easiest way to go. Render the scene, then use glReadPixels to fetch the color (or depth) buffer back to the CPU, then do what you want with it.

glBlitFramebuffer is a function that provides for transferring data from one framebuffer to another. You instead want to get your data back from the framebuffer (and/or a texture) to the CPU.

  2. …This example seems to involve texture calculations. I’m not sure whether I need a texture there; can you just calculate the linear depth from gl_FragCoord.z?

Even easier: for perspective projections specifically (see posts above), -1/gl_FragCoord.w will be eye-space Z. Just scale and shift that by near/far to get in your desired 0…1 value.

But to your point about textures: if you don’t care what the scene’s color buffer contains, and you just want it to hold a linearized version of the depth buffer, then yes, you can skip the intermediate texture (and the multipass) and do it all in a single pass, writing directly to the framebuffer with a simple fragment shader that just pipes your linearized 0…1 value to the color output (with the depth buffer still used behind the scenes for occlusion as normal). So for computing your color value in the frag shader, something like:


// n and f are the near/far clip-plane distances (e.g. passed in as uniforms)
gl_FragColor.rgb = vec3( (1.0/gl_FragCoord.w - n) / (f - n) ); // 0 = near; 1 = far
gl_FragColor.a   = 1.0;

  1. I’d display (store) the near as white (1), far as black(0). so now I modified…

I guess you could do that. But even simpler would just be to take your computed 0…1 value in the shader, negate, and add 1.
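Something like this, reusing the snippet above (n and f are still the assumed near/far values):

float d = (1.0/gl_FragCoord.w - n) / (f - n); // 0 at near, 1 at far (perspective projection)
gl_FragColor = vec4(vec3(1.0 - d), 1.0);      // 1 (white) at near, 0 (black) at far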

[quote=“Dark_Photon”]

yes! how didn’t I come out this…

But I thought glReadPixels() was deprecated and very slow…