Is there a way to compute the eye space coordinate from a clip space coordinate and depth value in GLSL (gluUnproject in GLSL, so to speak)? How please?
Do you really mean clip-space and not window-space? Because the transform from eye-space to clip-space is just a matrix (the perspective matrix). Therefore, the transform back would be a transformation by the inverse of that matrix.
Window-space is more complex and requires that you provide the shader with the viewport transform.
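The clip-to-eye leg really is just a matrix inverse, and it can be checked numerically. A quick sketch in Python (NumPy; `perspective` mirrors the matrix gluPerspective builds, column-vector convention as in the GL spec, all other names are illustrative):

```python
import numpy as np

def perspective(fovy_deg, aspect, near, far):
    # The matrix gluPerspective builds (column-vector convention).
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

P = perspective(60.0, 4.0 / 3.0, 1.0, 5000.0)

eye = np.array([2.0, -1.0, -10.0, 1.0])   # a point in front of the camera
clip = P @ eye                            # eye -> clip: just the matrix
ndc = clip / clip[3]                      # perspective divide

# Clip/NDC -> eye: multiply by the inverse and divide by w again.
back = np.linalg.inv(P) @ np.array([ndc[0], ndc[1], ndc[2], 1.0])
back /= back[3]

assert np.allclose(back, eye)
```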
Yeah, window (screen) space. What I want is to compute the eye space coordinate of a pixel in the frame buffer from that pixel's depth value.
* http://www.opengl.org/discussion_boa...935#Post277935
* http://www.opengl.org/discussion_boa...473#Post276473
Thx. Is there a viewport transformation matrix? Where do I retrieve it?
Oh, sorry. You did ask about the full eye-space position of the pixel, not just its Z coordinate. Here:
http://www.opengl.org/discussion_boa...242#Post288242
See the routine at the bottom of that post. There are all sorts of ways to skin this cat.
...and on that note, here are a few related posts you might find interesting that describe exactly that:
* http://mynameismjp.wordpress.com/200...on-from-depth/
* http://mynameismjp.wordpress.com/200...pth-continued/
That routine I pointed you to presumes glViewport( 0, 0, width, height ) -- where widthInv = 1/width and heightInv = 1/height.
Ok, thank you, I understood all that after pondering on the code for a while.
One thing that doesn't work well for me is your EyeZ formula. It works better for me this way:
#define EyeZ(_z) (zFar / (zFar - zNear)) / ((zFar / zNear) - (_z))
@DarkPhoton,
I've been wanting for some time to remove the 32-bit eye space XYZ position vector from my deferred renderer's G-buffer and replace it with maths to reconstruct Zeye from the depth texture instead. However, I have never found a post containing everything I need to do this, and when I have attempted it the results were wrong.
What I'd like to do is convert from depth texture Z to NDC Z (along with constructing NDC X and Y), then convert from NDC to EYE space.
I note from the reference you gave here that you have calculated Zeye from the depth texture and projection matrix. However, I ran through your algebra and, whilst I'm no wizard at it, I did spot that your Zeye comes out wrong (at the point where you converted from -Zndc to Zndc).
You end up with:

Code:
float z_eye = gl_ProjectionMatrix[3].z / (z_viewport * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);

...but I ended up with:

Code:
float z_eye = -gl_ProjectionMatrix[3].z / ((z_viewport * -2.0) + 1.0 - gl_ProjectionMatrix[2].z);
I had no problem with your parallel projection, however.
My working out of each term, step by step:

Code:
z_ndc = z_clip / w_clip
z_ndc = [ z_eye*gl_ProjectionMatrix[2].z + gl_ProjectionMatrix[3].z ] / -z_eye
z_ndc = [ z_eye*gl_ProjectionMatrix[2].z ] / -z_eye + gl_ProjectionMatrix[3].z / -z_eye;  // separating out the terms
z_ndc = -gl_ProjectionMatrix[2].z + gl_ProjectionMatrix[3].z / -z_eye;                    // cancelling out z_eye
z_ndc + gl_ProjectionMatrix[2].z = gl_ProjectionMatrix[3].z / -z_eye;                     // re-arranging
(z_ndc + gl_ProjectionMatrix[2].z) * -z_eye = gl_ProjectionMatrix[3].z;                   // re-arranging z_eye
-z_eye = gl_ProjectionMatrix[3].z / (z_ndc + gl_ProjectionMatrix[2].z)                    // re-arranging z_eye to LHS
z_eye = -1 * [ gl_ProjectionMatrix[3].z / (z_ndc + gl_ProjectionMatrix[2].z) ]            // removing -ve term from z_eye
z_eye = -gl_ProjectionMatrix[3].z / (-z_ndc - gl_ProjectionMatrix[2].z)                   // removing -ve term from z_eye
float z_eye = -gl_ProjectionMatrix[3].z / ((-z_viewport * 2.0) + 1.0 - gl_ProjectionMatrix[2].z);  // substitute z_ndc = z_viewport * 2.0 - 1.0
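The intermediate relation in the working above, z_eye = -gl_ProjectionMatrix[3].z / (z_ndc + gl_ProjectionMatrix[2].z), holds exactly and is easy to sanity-check numerically. A small Python check, with C and D standing for gl_ProjectionMatrix[2].z and gl_ProjectionMatrix[3].z (the clip planes are made-up values):

```python
near, far = 1.0, 5000.0

C = -(far + near) / (far - near)       # gl_ProjectionMatrix[2].z
D = -2.0 * far * near / (far - near)   # gl_ProjectionMatrix[3].z

for z_eye in (-1.0, -10.0, -2500.0, -5000.0):
    # Forward: eye -> NDC (for a perspective matrix, w_clip == -z_eye).
    z_ndc = (C * z_eye + D) / -z_eye
    # Back: the re-arranged relation from the working above.
    z_back = -D / (z_ndc + C)
    assert abs(z_back - z_eye) < 1e-6
```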
So it seems quite easy to obtain the NDC space position:

Code:
ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;
z_ndc = (z_viewport * 2.0) - 1.0;  // z_viewport is the depth texture sample value, (0..1) range

and the conversion to EYE space:

Code:
z_eye = -gl_ProjectionMatrix[3].z / (z_viewport * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);
but the X and Y EYE space conversions trouble me, because I can't figure out what RIGHT and TOP are. (I assume near is the near clip value, typically 0.5 for example when used with gluPerspective.)
Code:
eye.x = (-ndc.x * eye.z) * right/near;
eye.y = (-ndc.y * eye.z) * top/near;
Also, is there a way to remove right/near and top/near and use a value picked from the Projection matrix instead? I'd rather not have to supply a uniform to pass in those two values, and it seems a shame to do so when everything else can be calculated from the depth texture, projection matrix and viewport dimensions.
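For a symmetric frustum (which gluPerspective always produces), those ratios are already encoded in the matrix: gl_ProjectionMatrix[0].x equals near/right and gl_ProjectionMatrix[1].y equals near/top, so right/near and top/near are just their reciprocals. A quick Python check of that identity (frustum parameters are made up):

```python
import math

# gluPerspective builds P[0][0] = f/aspect and P[1][1] = f, with f = 1/tan(fovy/2).
fovy_deg, aspect, near = 60.0, 4.0 / 3.0, 1.0
f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
p00, p11 = f / aspect, f

# The frustum extents that fovy/aspect/near imply:
top = near * math.tan(math.radians(fovy_deg) / 2.0)
right = top * aspect

# right/near and top/near fall straight out of the matrix:
assert abs(1.0 / p00 - right / near) < 1e-9
assert abs(1.0 / p11 - top / near) < 1e-9
```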
I am a step further, but shadow maps still do not work quite right for me.
gl_TextureMatrix [2] contains light projection * light modelview * inverse (camera modelview). With that and the following shader code:
Code:
uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define ZRANGE (ZFAR - ZNEAR)
#define EyeZ(screenZ) (ZFAR / ((screenZ) * ZRANGE - ZFAR))

void main ()
{
    float colorDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;  // texture2D takes a vec2, so swizzle .xy
    vec4 ndc;
    ndc.z = EyeZ (colorDepth);
    ndc.xy = (gl_TexCoord [0].xy - vec2 (0.5, 0.5)) * 2.0 * -ndc.z;
    ndc.w = 1.0;
    vec4 ls = gl_TextureMatrix [2] * ndc;
    float shadowDepth = texture2DProj (shadowMap, ls).r;
    float light = 0.25 + ((colorDepth < shadowDepth + 0.0005) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}
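As an aside, the EyeZ macro in the shader above only matches the projection-matrix form because ZNEAR is 1.0; the general numerator is ZFAR * ZNEAR. A small Python check against the matrix terms (C and D stand for gl_ProjectionMatrix[2].z and gl_ProjectionMatrix[3].z):

```python
near, far = 1.0, 5000.0   # ZNEAR / ZFAR from the shader above
zrange = far - near

def eye_z_macro(screen_z):
    # Mirrors the shader's #define EyeZ (valid only because near == 1.0;
    # the general form has far * near in the numerator).
    return far / (screen_z * zrange - far)

C = -(far + near) / (far - near)       # gl_ProjectionMatrix[2].z
D = -2.0 * far * near / (far - near)   # gl_ProjectionMatrix[3].z

for screen_z in (0.0, 0.25, 0.5, 0.99, 1.0):
    z_ndc = 2.0 * screen_z - 1.0
    z_eye = -D / (z_ndc + C)           # inverting the projection's Z mapping
    assert abs(eye_z_macro(screen_z) - z_eye) < 1e-6
```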
The shadow map projection doesn't work right. Depending on camera orientation, the shadow moves around a bit. When the camera moves close to the floor, the shadow depth values get larger until the shadow disappears. What also happens is that the shadow is projected onto faces behind the light (i.e. in the reverse direction). What's the reason for all of that?
I had also expected that I would have to apply the inverse of the camera's projection matrix to ndc (ndc are projected coordinates, right? If so, I thought I'd have to unproject, untranslate and unrotate from the camera view, then rotate, translate and project in the light view to access the proper shadow map value). However, when I unproject ndc with the inverse camera projection, shadow mapping doesn't work at all any more.
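One thing that can be checked in isolation: since gl_TextureMatrix[2] is lightProj * lightModelview * inverse(cameraModelview), its expected input is a camera eye-space position, not a projected/NDC one. A small Python check with made-up matrices (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def rigid():
    # A random rigid transform standing in for a modelview matrix.
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    m = np.eye(4)
    m[:3, :3] = q
    m[:3, 3] = rng.normal(size=3)
    return m

def frustum_matrix(near, far):
    # Minimal symmetric perspective matrix standing in for the light projection.
    return np.array([
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

cam_mv = rigid()
light_mv = rigid()
light_proj = frustum_matrix(1.0, 5000.0)

world = np.array([1.0, 2.0, -3.0, 1.0])
eye = cam_mv @ world                      # camera eye-space position

tex_matrix = light_proj @ light_mv @ np.linalg.inv(cam_mv)

# Feeding eye-space coordinates through the combined matrix lands in the
# same light clip-space as transforming the world position directly:
assert np.allclose(tex_matrix @ eye, light_proj @ light_mv @ world)
```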
Images:
Rockets don't cast shadows (shadow depth too large):
Camera pointing forward (btw, where's that shadow artifact coming from?):
Camera pointed up a bit (same position) -> shadow looks different:
The args of glFrustum you'd otherwise pass it, defining your view frustum.

Originally Posted by BionicBytes
I assume near is near clip value, typically 0.5 for example when used with gluPerspective
Right, also the args of glFrustum. Specifically, they are the negatives of eye-space Z.

Originally Posted by BionicBytes
Also, is there a way to remove right/near and top/near and use a value picked from the Projection matrix instead?
Probably. Give it a go.
Don't forget that you can just interpolate a view vector across your surface and use that to reconstruct the position too.
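That view-vector approach can be sketched numerically too: interpolate a per-pixel ray, then scale it so its Z matches the linear eye-space depth. A minimal Python sketch (names illustrative):

```python
import numpy as np

# A per-pixel view ray; its direction is all that matters here.
ray = np.array([0.3, -0.2, -1.0])

# Suppose the fragment's true eye-space position lies along that ray:
true_pos = ray * 12.5                 # i.e. z_eye == -12.5

# Reconstruction from linear eye-space depth alone: scale the interpolated
# ray so its z component matches the depth read back from the G-buffer.
z_eye = true_pos[2]
recon = ray * (z_eye / ray[2])

assert np.allclose(recon, true_pos)
```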
More posts by Matt Pettineo on this:
* http://mynameismjp.wordpress.com/201...-from-depth-3/
* http://mynameismjp.wordpress.com/201...th-glsl-style/