Help with eyeZ calculation?

There has been a good bit of discussion about this lately, and it’s been very helpful. I am trying to compute eye-space Z (EyeZ) from a Z-buffer read. This is the math I am using:


vec2 DepthParameter;
DepthParameter.x = FARZ * ONE_OVER_FAR_MINUS_NEAR;         // f / (f - n)
DepthParameter.y = FARZ * NEARZ * ONE_OVER_NEAR_MINUS_FAR; // -(f * n) / (f - n)
// TextureDepth.r is the [0,1] window-space depth read back from the Z buffer
float EyeZ = -DepthParameter.y / ( DepthParameter.x - TextureDepth.r );
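
For what it’s worth, that reduces algebraically to a single expression. A minimal sketch of the same reconstruction, assuming a standard OpenGL perspective projection (glFrustum/gluPerspective) and the default glDepthRange(0, 1); the function name is just a placeholder:

float LinearEyeDepth(float depth, float nearZ, float farZ)
{
    // depth is the [0,1] window-space value from the Z buffer.
    // Returns nearZ at depth 0 and farZ at depth 1.
    return (farZ * nearZ) / (farZ - depth * (farZ - nearZ));
}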

If I output the following, I see what I would expect: a black-to-white depth gradient.


// Map the linear eye-space depth to a grayscale value for display
float ColorZ = EyeZ / (FARZ - NEARZ);
gl_FragColor = vec4(ColorZ, ColorZ, ColorZ, 1.0);
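
For an exact black-at-near, white-at-far ramp, the mapping would be the following; EyeZ / (FARZ - NEARZ) is just a close approximation:

float ColorZ = (EyeZ - NEARZ) / (FARZ - NEARZ);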

I then try to use this for edge detection: I sample adjacent depths, and if the difference in depth is greater than some threshold, it’s a silhouette edge.
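
For concreteness, the test looks roughly like this (a sketch with placeholder uniform names and constants, not my actual shader):

uniform sampler2D u_DepthTex; // placeholder name for the depth texture
uniform vec2 u_TexelSize;     // placeholder: 1.0 / depth texture resolution

const float NEARZ   = 1.0;    // placeholder near/far values
const float FARZ    = 1000.0;
const float EPSILON = 5.0;    // placeholder fixed eye-space threshold

float EyeDepthAt(vec2 uv)
{
    float d = texture2D(u_DepthTex, uv).r;               // [0,1] window depth
    return (FARZ * NEARZ) / (FARZ - d * (FARZ - NEARZ)); // linear eye depth
}

void main()
{
    vec2  uv     = gl_TexCoord[0].xy;
    float center = EyeDepthAt(uv);
    float right  = EyeDepthAt(uv + vec2(u_TexelSize.x, 0.0));
    float up     = EyeDepthAt(uv + vec2(0.0, u_TexelSize.y));

    // Fixed threshold on the eye-space depth difference; this is the test
    // that starts misbehaving with distance.
    float edge = max(abs(center - right), abs(center - up)) > EPSILON ? 1.0 : 0.0;
    gl_FragColor = vec4(edge, edge, edge, 1.0);
}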

My issue is that my EyeZ doesn’t seem to be linear: the difference between adjacent samples grows as the surface gets farther from the near clipping plane. Any thoughts?

Thanks,

You should take perspective foreshortening into account. I drew you a small picture illustrating the effect.
As you can see, the difference in eye-space depth between neighboring pixels increases as you move the object toward the far plane. This is not the case for an orthographic projection.
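
To put a rough number on it: adjacent pixels look along rays separated by a small fixed angle delta (roughly the field of view divided by the viewport resolution). Where those rays hit a surface at eye-space depth z, they are about z * delta apart in eye space, so on a surface whose depth changes at rate s along that direction the per-pixel depth difference is roughly

    dz ≈ s * z * delta

It grows linearly with z even though EyeZ itself is linear, which is why a fixed epsilon that works near the camera flags everything as an edge far away.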

This should work: gl_ProjectionMatrix[3].z/(Z * -2.0 + 1.0 - gl_ProjectionMatrix[2].z)
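
For reference, with a standard glFrustum/gluPerspective matrix those entries are (GLSL matrices are column-major, so gl_ProjectionMatrix[2] and [3] are the third and fourth columns):

// gl_ProjectionMatrix[2].z == -(far + near) / (far - near)
// gl_ProjectionMatrix[3].z == -2.0 * far * near / (far - near)
// With Z the [0,1] window-space depth, the expression works out to the same
// linearization as above, except that it returns the signed eye-space z
// (negative in front of the camera) rather than the positive distance.
float eyeZ = gl_ProjectionMatrix[3].z / (Z * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);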

Interesting, Sunray. I tried that equation (I actually derived it back in ’99 when I did the Unreal OpenGL work), and it gives the same results as the equation above, with the same issue of non-linear Z. That is, I test two Z values that are fairly close together against a fixed epsilon (5-10 units in my case); far away the test always comes back positive, and the edges only test positive when I get closer.

NiCo, how would you suggest I deal with foreshortening? Use 2 ranges lerp’d over the depth range?

In light of your application, I doubt there’s a way to handle this correctly. IMHO, the best you can do is test the difference between eyeZ coordinates like you’re doing already. This way, at least the difference in neighboring pixels for fronto-parallel surfaces should remain constant when translating your scene along the camera axis.