View Full Version : Compute eye space coord from window space coord?

karx11erx

04-14-2011, 01:10 PM

Is there a way to compute the eye space coordinate from a clip space coordinate and depth value in GLSL (gluUnproject in GLSL, so to speak)? How please?

Alfonse Reinheart

04-14-2011, 01:45 PM

Is there a way to compute the eye space coordinate from a clip space coordinate and depth value in GLSL (gluUnproject in GLSL, so to speak)? How please?

Do you really mean clip-space and not window-space? Because the transform from eye-space to clip-space is just a matrix (the perspective matrix). Therefore, the transform back would be a transformation by the inverse of that matrix.

Window-space is more complex and requires that you provide the shader with the viewport transform.
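That inverse can be sanity-checked outside GLSL. Below is a pure-Python sketch of the full window-to-eye round trip, assuming a standard symmetric glFrustum-style matrix and an arbitrary viewport (every number here is made up for illustration):

```python
import math

# Hypothetical numbers: a symmetric glFrustum(-r, r, -t, t, n, f) and an
# 800x600 viewport. Nothing below depends on these particular values.
n, f, r, t = 1.0, 100.0, 0.5, 0.5
W, H = 800, 600

def project(xe, ye, ze):
    """Eye space -> window space, mirroring the fixed-function pipeline."""
    xc, yc = (n / r) * xe, (n / t) * ye               # eye -> clip ...
    zc = (-(f + n) / (f - n)) * ze - 2.0 * f * n / (f - n)
    wc = -ze                                          # ... w_clip = -z_eye
    xn, yn, zn = xc / wc, yc / wc, zc / wc            # clip -> NDC
    # NDC -> window, for glViewport(0, 0, W, H) and depth range [0, 1].
    return ((xn * 0.5 + 0.5) * W, (yn * 0.5 + 0.5) * H, zn * 0.5 + 0.5)

def unproject(xw, yw, zw):
    """Window space -> eye space (what gluUnProject computes)."""
    xn, yn, zn = xw / W * 2.0 - 1.0, yw / H * 2.0 - 1.0, zw * 2.0 - 1.0
    C = -(f + n) / (f - n)          # the two z-row terms of the matrix
    D = -2.0 * f * n / (f - n)
    ze = D / (-zn - C)              # solve zn = (C*ze + D) / -ze for ze
    return (xn * -ze * r / n, yn * -ze * t / n, ze)

# Round trip: a point in front of the camera survives both directions.
eye = (0.3, -0.2, -10.0)
back = unproject(*project(*eye))
assert all(math.isclose(a, b, rel_tol=1e-9) for a, b in zip(eye, back))
```

The same algebra is what the GLSL snippets later in the thread implement; window space just adds the viewport scale and bias on top of the clip-space inverse.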

karx11erx

04-14-2011, 03:15 PM

Yeah, window (screen) space. What I want is to compute the eye space coordinate of a pixel in the frame buffer from that pixel's depth value.

Dark Photon

04-14-2011, 05:20 PM

what I want is to compute the eye space coordinate of a pixel in the frame buffer from that pixel's depth value.

* http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=277935#Post277935

* http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=276473#Post276473

karx11erx

04-14-2011, 05:28 PM

Thx. Is there a viewport transformation matrix? Where do I retrieve it?

Dark Photon

04-14-2011, 05:50 PM

Oh, sorry. You did ask about the full eye-space position of the pixel, not just its Z coordinate. Here:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=288242#Post288242

See the routine at the bottom of that post. There are all sorts of ways to skin this cat.

...and on that note, here are a few related posts you might find interesting that describe exactly that:

* http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/

* http://mynameismjp.wordpress.com/2009/05/05/reconstructing-position-from-depth-continued/

Thx. Is there a viewport transformation matrix? Where do I retrieve it?

That routine I pointed you to presumes glViewport( 0, 0, width, height) -- where widthInv = 1/width and heightInv = 1/height.

karx11erx

04-14-2011, 06:34 PM

Ok, thank you; I understood all that after pondering the code for a while.

One thing that doesn't work well for me is your EyeZ formula. It works better for me this way:

#define EyeZ(_z) (zFar / (zFar - zNear)) / ((zFar / zNear) - (_z))

BionicBytes

04-15-2011, 03:41 AM

@DarkPhoton,

I've been wanting for some time to remove the 32-bit eye space XYZ position vector from my deferred renderer's G-Buffer and replace it with maths that reconstructs Zeye from the depth texture instead. However, I have never found a post containing everything I need to do this, and when I have attempted it the results were wrong.

What I'd like to do is convert from depth texture Z to NDC Z (along with constructing NDC X and Y), then convert from NDC to eye space.

I note from the reference you gave here (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=288242#Post288242)

that you have calculated Zeye from the depth texture and projection matrix. However, I ran through your algebra, and whilst I'm no wizard at it, I did spot that your Zeye comes out wrong (at the point where you converted from -Zndc to Zndc).

You end up with

float z_eye = gl_ProjectionMatrix[3].z/(z_viewport * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);

...but I ended up with

float z_eye = -gl_ProjectionMatrix[3].z/ ( (z_viewport * -2.0) + 1.0 - gl_ProjectionMatrix[2].z);

I had no problem with your parallel projection version, however.

My working out of each term, step by step:

z_ndc = z_clip / w_clip

z_ndc = [ z_eye*gl_ProjectionMatrix[2].z + gl_ProjectionMatrix[3].z ] / -z_eye

z_ndc = [ z_eye*gl_ProjectionMatrix[2].z] / -z_eye + gl_ProjectionMatrix[3].z / -z_eye; //separating out the terms

z_ndc = -gl_ProjectionMatrix[2].z + gl_ProjectionMatrix[3].z / -z_eye; //cancelling out z_eye

z_ndc + gl_ProjectionMatrix[2].z = gl_ProjectionMatrix[3].z / -z_eye; //re arranging

(z_ndc + gl_ProjectionMatrix[2].z) * -z_eye = gl_ProjectionMatrix[3].z; //re arranging z_eye

-z_eye = gl_ProjectionMatrix[3].z / (z_ndc + gl_ProjectionMatrix[2].z) //re arranging z_eye to LHS

z_eye = -1 * [ gl_ProjectionMatrix[3].z / (z_ndc + gl_ProjectionMatrix[2].z) ] //removing -ve term from z_eye

z_eye = -gl_ProjectionMatrix[3].z / (-z_ndc - gl_ProjectionMatrix[2].z) //removing -ve term from z_eye

float z_eye = -gl_ProjectionMatrix[3].z/((-z_viewport * 2.0) + 1.0 - gl_ProjectionMatrix[2].z); //substitute z_ndc = z_viewport * 2.0 - 1.0

So it seems quite easy to obtain NDC space position:

ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;

ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;

z_ndc = (z_viewport * 2.0) - 1.0; //z_viewport is the depth texture sample value (0..1) range

and the conversion to EYE space

z_eye = -gl_ProjectionMatrix[3].z/(z_viewport * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);

but the X and Y EYE space conversions trouble me because I can't figure out what RIGHT and TOP are. (I assume near is near clip value, typically 0.5 for example when used with gluPerspective)

eye.x = (-ndc.x * eye.z) * right/near;

eye.y = (-ndc.y * eye.z) * top/near;

Also, is there a way to remove right/near and top/near and use a value picked from the projection matrix instead? I'd rather not have to supply a uniform to pass in those two values, and it seems a shame to do so when everything else can be calculated from the depth texture, projection matrix and viewport dimensions.

karx11erx

04-15-2011, 06:06 AM

I am a step further, but shadow maps still do not work quite right for me.

gl_TextureMatrix [2] contains light projection * light modelview * inverse (camera modelview). With that and the following shader code:

uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define ZRANGE (ZFAR - ZNEAR)
#define EyeZ(screenZ) (ZFAR / ((screenZ) * ZRANGE - ZFAR))

void main()
{
    float colorDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec4 ndc;
    ndc.z = EyeZ (colorDepth);
    ndc.xy = (gl_TexCoord [0].xy - vec2 (0.5, 0.5)) * 2.0 * -ndc.z;
    ndc.w = 1.0;
    vec4 ls = gl_TextureMatrix [2] * ndc;
    float shadowDepth = texture2DProj (shadowMap, ls).r;
    float light = 0.25 + ((colorDepth < shadowDepth + 0.0005) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

The shadow map projection doesn't work right. Depending on camera orientation, the shadow moves around a bit. When the camera moves close to the floor, the shadow depth values get larger until the shadow disappears. The shadow is also projected onto faces behind the light (i.e. in the reverse direction). What's the reason for all of this?

I had also expected that I would have to apply the inverse of the camera's projection matrix to ndc (ndc holds projected coordinates, right? If so, I thought I'd have to unproject, untranslate and unrotate from the camera view, then rotate, translate and project in the light view to access the proper shadow map value). When I unproject ndc with the inverse camera projection, however, shadow mapping stops working entirely.

Images:

Rockets don't cast shadows (shadow depth too large):

http://www.descent2.de/images/temp/shadowmap3.jpg

Camera pointing forward (btw, where's that shadow artifact coming from?):

http://www.descent2.de/images/temp/shadowmap5.jpg

Camera pointed up a bit (same position) -> shadow looks different:

http://www.descent2.de/images/temp/shadowmap4.jpg

Dark Photon

04-15-2011, 09:15 AM

...but the X and Y EYE space conversions trouble me because I can't figure out what RIGHT and TOP are.

The args of glFrustum (http://www.opengl.org/sdk/docs/man/xhtml/glFrustum.xml) that you'd otherwise pass in to define your view frustum.

(I assume near is near clip value, typically 0.5 for example when used with gluPerspective)

Right, also the args of glFrustum. Specifically, they are the negatives of eye-space Z.

Also, is there a way to remove right/near and top/near and use a value picked from the Projection matrix instead?

Probably. Give it a go.

Don't forget that you can just interpolate a view vector across your surface and use that to reconstruct the position too.

More posts by Matt Pettineo on this:

* http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/

* http://mynameismjp.wordpress.com/2011/01/08/position-from-depth-glsl-style/
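On the "pick it from the projection matrix" question: for a symmetric frustum, the matrix diagonal already stores near/right and near/top, so right/near and top/near are just the reciprocals of gl_ProjectionMatrix[0].x and gl_ProjectionMatrix[1].y. A quick pure-Python sanity check (frustum numbers are arbitrary):

```python
# Arbitrary symmetric glFrustum arguments.
n, f, r, t = 1.0, 50.0, 0.4, 0.3

P00 = n / r   # what gl_ProjectionMatrix[0].x holds for a symmetric frustum
P11 = n / t   # what gl_ProjectionMatrix[1].y holds

# right/near and top/near fall out as reciprocals -- no extra uniforms needed.
assert abs(1.0 / P00 - r / n) < 1e-12
assert abs(1.0 / P11 - t / n) < 1e-12

# And the X reconstruction works with them: project x_eye, then recover it.
x_eye, z_eye = 2.0, -7.0
x_ndc = (P00 * x_eye) / -z_eye          # clip.x / clip.w, with w_clip = -z_eye
assert abs(x_ndc * -z_eye / P00 - x_eye) < 1e-12
```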

karx11erx

04-15-2011, 09:38 AM

No clues for me? :-/

Dark Photon

04-15-2011, 09:46 AM

You end up with

float z_eye = gl_ProjectionMatrix[3].z/(z_viewport * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);

...but I ended up with

float z_eye = -gl_ProjectionMatrix[3].z/ ( (z_viewport * -2.0) + 1.0 - gl_ProjectionMatrix[2].z);

Plug some numbers and you'll see that yours isn't correct. For instance, near=1, far=9, z_viewport = 0. You should get z_eye = -near = -1. With yours you don't. You get +1 instead.
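Dark Photon's spot check is easy to reproduce in a few lines of Python, using the two z-row terms of the standard perspective matrix with near = 1 and far = 9:

```python
n, f = 1.0, 9.0
C = -(f + n) / (f - n)        # gl_ProjectionMatrix[2].z
D = -2.0 * f * n / (f - n)    # gl_ProjectionMatrix[3].z

def z_eye_dark_photon(z_viewport):
    return D / (z_viewport * -2.0 + 1.0 - C)

def z_eye_bionic(z_viewport):
    return -D / (z_viewport * -2.0 + 1.0 - C)

# At z_viewport = 0 (the near plane) eye-space Z must be -near = -1.
assert abs(z_eye_dark_photon(0.0) - (-1.0)) < 1e-12
assert abs(z_eye_bionic(0.0) - (+1.0)) < 1e-12   # wrong sign, as noted
# At z_viewport = 1 (the far plane) it must be -far = -9.
assert abs(z_eye_dark_photon(1.0) - (-9.0)) < 1e-12
```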

karx11erx

04-15-2011, 10:06 AM

If you look at the latest shader code I posted, you will see that I am using your formula. (Btw, I had also noticed the sign error in my formula and fixed it.)

I also tried the formula you are quoting here, but it doesn't change a thing:

uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;
uniform mat4 projection;
uniform vec2 screenScale; // 1.0 / window width, 1.0 / window height

void main()
{
    float colorDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec4 eye;
    eye.z = -projection [3].z / (colorDepth * -2.0 + 1.0 - projection [2].z);
    eye.xy = (gl_FragCoord.xy * screenScale - vec2 (0.5, 0.5)) * -2.0 * eye.z;
    eye.w = 1.0;
    vec4 ls = gl_TextureMatrix [2] * eye;
    float shadowDepth = texture2DProj (shadowMap, ls).r;
    float light = 0.25 + ((colorDepth < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

I have the impression that it has something to do with the projection matrix. The projection matrix isn't viewer dependent, and I am always using one and the same projection (it even gets set anew every frame). I just can't determine what is causing my problems, so I am asking for clues here.

One thing I am not sure about is what the projection does. After the eye coordinate has been constructed, is it identical to what the modelview transformation + projection would produce, or to what only the modelview transformation would produce? In other words: Do I need to unproject the reconstructed eye coordinate (by multiplying with the inverse of the projection matrix), or not? I think I need to, but I have the suspicion that the eye coordinate reconstruction contains a few steps of unprojecting it.

I had also been looking at an explanation of the various transformation steps OpenGL performs, from the modelview transformation through projection to computing window coordinates for a vertex, and tried to reverse that, but either I have misunderstood something or the document I read contains errors.

Kelvin

04-15-2011, 09:09 PM

One thing I am not sure about is what the projection does. After the eye coordinate has been constructed, is it identical to what the modelview transformation + projection would produce, or to what only the modelview transformation would produce?

Based purely on reading your question (not your code), it is the latter (eye coordinates have only the modelview matrix applied).

See "Figure 3-2: Stages of Vertex Transformation" in this online version of an old "Red Book (http://fly.cc.fer.hr/~unreal/theredbook/chapter03.html)" for the sequence of transformations in the fixed function pipeline. The shaders are more flexible, but the basic concepts still apply.

karx11erx

04-16-2011, 02:16 AM

I actually didn't put my question quite right: it should have been "One thing I am not sure about is what the reconstruction does" (not "projection"). Since I was referring to the reconstruction (not your fault for missing that, as I didn't put it right), the code might need to be examined after all. Btw, do I need to divide ls by ls.w after computing it (gl_TextureMatrix [2] * eye)?

Background is that I tried to do the final steps of computing window coordinates after applying modelview and projection myself to access the shadow map and see what happens:

So what I did was:

- reconstruct eye from window coord

- apply inverse camera modelview

- apply light modelview

- apply light projection

- compute window xy [xy = xy / (-2.0 * z) + vec2 (0.5, 0.5)] (basically reversing the eye construction step from above)

- access shadow map with that xy.

So the question aimed at whether applying the projection already computes window xy (from all I know and have read, it does not).

Those manually computed window coords didn't work though, and using texture2DProj doesn't work quite right either.

I checked that I computed the matrix inverse properly to make sure the problem wasn't rooted there.
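For what it's worth, the chain of steps listed above is numerically consistent when run outside the shader. Here's a pure-Python sketch with made-up translation-only modelview matrices and a symmetric frustum (so matrix-upload and precision issues can't interfere):

```python
# Frustum shared by camera and light (arbitrary symmetric values).
n, f, r, t = 1.0, 100.0, 1.0, 1.0
C = -(f + n) / (f - n)
D = -2.0 * f * n / (f - n)
P = [[n / r, 0, 0, 0], [0, n / t, 0, 0], [0, 0, C, D], [0, 0, -1, 0]]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(4))

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def to_window(clip):
    """clip -> NDC -> normalized window coords (xy and depth in 0..1)."""
    xn, yn, zn = (c / clip[3] for c in clip[:3])
    return (xn * 0.5 + 0.5, yn * 0.5 + 0.5, zn * 0.5 + 0.5)

cam_mv = translate(0.0, 0.0, -15.0)     # camera modelview (pure translation)
cam_mv_inv = translate(0.0, 0.0, 15.0)  # its inverse
light_mv = translate(0.5, -0.3, -12.0)  # light modelview

world = (0.2, 0.1, 0.0, 1.0)

# Forward pass: what the depth buffer would hold for this point.
xw, yw, depth = to_window(matvec(P, matvec(cam_mv, world)))

# Step 1: reconstruct the camera eye-space position from (xw, yw, depth).
zn = depth * 2.0 - 1.0
ze = D / (-zn - C)
eye = ((xw * 2.0 - 1.0) * -ze * r / n, (yw * 2.0 - 1.0) * -ze * t / n, ze, 1.0)

# Steps 2-4: inverse camera modelview, light modelview, light projection.
light_clip = matvec(P, matvec(light_mv, matvec(cam_mv_inv, eye)))

# Step 5: window xy in the light view must match projecting world directly.
direct = to_window(matvec(P, matvec(light_mv, world)))
recon = to_window(light_clip)
assert all(abs(a - b) < 1e-9 for a, b in zip(recon, direct))
```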

karx11erx

04-16-2011, 04:45 AM

Ok, with some help from a friendly guy on StackOverflow.com I finally got this to work (in principle). Here's the fragment shader code for projecting a shadow map into a frame buffer as a post process:

uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;
uniform mat4 projectionInverse;

void main()
{
    float colorDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec4 screenPos = (vec4 (gl_TexCoord [0].xy, colorDepth, 1.0) - vec4 (0.5, 0.5, 0.5, 0.5)) * 2.0;
    vec4 eyePos = projectionInverse * screenPos;
    eyePos /= eyePos.w;
    vec4 lightPos = gl_TextureMatrix [2] * eyePos;
    float shadowDepth = texture2DProj (shadowMap, lightPos).r;
    float light = 0.25 + ((colorDepth < shadowDepth + 0.0005) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

A few problems remain, like the shadow disappearing depending on viewer position, and some artifacts.

What does texture2DProj return if the projected coordinate is outside the window space? Do I need to take care of that myself?

What would be the shader code to fully emulate/replace texture2DProj (by texture2D with the properly computed window space coordinates)?

Alfonse Reinheart

04-16-2011, 05:17 AM

A few problems remain, like the shadow disappearing depending on viewer position, and some artifacts.

That's not surprising since that code doesn't work. I'll go step by step:

vec4 screenPos = (vec4 (gl_TexCoord [0].xy, colorDepth, 1.0) - vec4 (0.5, 0.5, 0.5, 0.5)) * 2.0;

So, we're in a fragment shader, and gl_TexCoord[0].xy holds a 0-to-1 value. The zero value should be the lower-left of the screen, and 1 should be the top right. "colorDepth" is the value from the depth buffer. I have no idea why you call it colorDepth, since it has nothing to do with colors, but that's not the issue.

Given those two things, this equation will compute the Normalized Device Coordinate space position. It should not be called "screenPos;" this value has nothing to do with the screen at this point.

FYI: screen space is the space relative to your monitor. Screen space positions change when you move a window around. Nothing in OpenGL uses screen space, and you'd be hard pressed to actually compute screen space coordinates unless you use windows-system dependent code.

Next:

vec4 eyePos = projectionInverse * screenPos;

And this is where your code loses the plot, so to speak.

This (http://www.arcsynthesis.org/gltut/Illumination/Tut10%20Distant%20Points%20of%20Light.html#d0e10075) (or this (http://www.arcsynthesis.org/gltest/Illumination/Tut10%20Distant%20Points%20of%20Light.html#d0e10118) for non-Firefox users) is a representation of the transform from eye-space (there, called "camera space") to window space. Notice that the first step is the multiplication of the eye-space position by the projection matrix.

This means that if you want to invert this operation, then this must be the last step you do. You must first convert your normalized device coordinate (NDC) position (which is what "screenPos" has) into clip-space before you can multiply it by the inverse projection matrix. And that's where you have your real problem.

The page I linked you to shows how to reverse-transform gl_FragCoord into eye-space. But gl_FragCoord has 4 components: X, Y, Z and W. The X, Y and Z are the window-space position. The W is the reciprocal of the clip-space W value. Why?

Because the difference between NDC space and clip-space is that W value. You need that W value in order to transform from NDC space to clip-space.

And your problem is that you don't have it. You have XYZ in NDC space, but without the original clip-space W value, you can't do anything.

Now, you could take the easy way out and just store it somewhere. But that's no fun. The correct answer is to compute it based on your projection matrix and the NDC space Z value.

To do this, you need to unravel your projection matrix. I don't know what your projection matrix is, but let's assume it is the standard glFrustum/gluPerspective matrix.

The projection matrix computes the clip-space Z (Zclip) by applying this (http://www.arcsynthesis.org/gltut/Positioning/Tut04%20Perspective%20Projection.html#d0e3807) (or this (http://www.arcsynthesis.org/gltest/Positioning/Tut04%20Perspective%20Projection.html#d0e3847)) equation to it. The clip-space W is just the negation of the eye-space Z (Zeye).

And we know that the NDC-space Z (Zndc) is just the Zclip/-Zeye. Well, we need to find Zeye (which is the clip-space W we need), and we have Zndc. One equation, two unknowns.

However, thanks to the above equation, we can express Zclip entirely in terms of Zeye. So if you substitute the equation for Zclip in there, then we can solve for Zeye. You don't even really need the equation per se; just pick the values from the (non-inverted) projection matrix. The matrix just stores coefficients. Solve for Zeye, and you're done.

Once you have Zeye, you know that Wclip is -Zeye. And now that you have Wclip, you can convert the NDC position to clip-space by multiplying it by Wclip. Once there, you can transform the clip-space position through your inverse projection matrix to produce the eye-space position you need.
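Alfonse's recipe (solve for Zeye from Zndc, set Wclip = -Zeye, multiply the NDC position through by Wclip, then apply the inverse projection) can be sketched in pure Python. The frustum values are arbitrary, and the inverse projection is written out analytically rather than numerically inverted:

```python
# Arbitrary symmetric frustum; C and D are the two z-row terms of the
# standard perspective matrix (gl_ProjectionMatrix[2].z and [3].z).
n, f, r, t = 1.0, 100.0, 0.5, 0.4
C = -(f + n) / (f - n)
D = -2.0 * f * n / (f - n)

def eye_from_ndc(x_ndc, y_ndc, z_ndc):
    z_eye = D / (-z_ndc - C)     # step 1: Zeye from Zndc alone
    w_clip = -z_eye              # step 2: Wclip is the negated eye-space Z
    clip = (x_ndc * w_clip, y_ndc * w_clip, z_ndc * w_clip, w_clip)
    # Step 3: apply the inverse projection (written out analytically; for
    # exact inputs the resulting w is 1, so no extra divide is needed).
    return (clip[0] * r / n, clip[1] * t / n, -clip[3])

# Round trip: project an eye-space point forward, then reconstruct it.
xe, ye, ze = 1.5, -0.75, -20.0
clip = ((n / r) * xe, (n / t) * ye, C * ze + D, -ze)
ndc = tuple(c / clip[3] for c in clip[:3])
assert all(abs(a - b) < 1e-9 for a, b in zip(eye_from_ndc(*ndc), (xe, ye, ze)))
```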


karx11erx

04-16-2011, 07:25 AM

The last code I have posted does work (at least to some extent): As long as the shadow is visible, it stays at the right spot. That's more than I had before. The shadow might disappear because of the W problem you have pointed out, but I am not quite convinced that this is the problem. I admit I am not exactly a 3D math wizard though (or I might not have needed to ask for help here).

The artifacts may be caused by accessing the shadow map with coordinates outside of it (i.e. outside of the light's screen space). I guess the artifacts are a result of the texture being created with GL_CLAMP. This conclusion is the result of the observation that they only appear when the shadow reaches the confines of the light's frustum.

I called it "colorDepth" because it's the depth value corresponding to the frame's color buffer, but thank you for taking your time to point out to me that this was a stupid idea.

Thank you also for explaining to me how idiotically misleading the term "screen space" is when I should have used "window space" (btw, doesn't this thread's title clearly say that?).

I can't say how glad I am that the GLSL compiler understood my intentions despite all these blatant faults of mine.

So this is the relevant code?

vec3 CalcCameraSpacePosition()

{

vec3 ndcPos;

ndcPos.xy = ((gl_FragCoord.xy / windowSize.xy) * 2.0) - 1.0;

ndcPos.z = (2.0 * gl_FragCoord.z - depthRange.x - depthRange.y) / (depthRange.y - depthRange.x);

vec4 clipPos;

clipPos.w = 1.0f / gl_FragCoord.w;

clipPos.xyz = ndcPos.xyz * clipPos.w;

return vec3(clipToCameraMatrix * clipPos);

}

Since I am simply rendering a fullscreen quad, my "gl_TexCoord [0]" should be the same as your "gl_FragCoord.xy / windowSize.xy".

I don't have gl_FragCoord.z or gl_FragCoord.w. gl_FragCoord.z would be my "windowZ" (from the depth buffer)?

I can solve the equation, but wouldn't know how to do this, since I don't know which values I'd actually have to pick:

You don't even really need the equation per-se; just pick the values from the (non-inverted) projection matrix. The matrix just stores coefficients. Solve for Zeye, and you're done.

And isn't v.xyzw == vec4 (v.xyz / v.w, 1.0) for all purposes of transformation and projection?

Unless I completely screwed up, here's the equation's solution:

A = ZNear + ZFar

B = ZNear - ZFar

C = 2 * ZNear * ZFar

clip_w = -eye_z

clip_z = (eye_z * A) / B + C / B = (eye_z * A + C) / B

ndc_z = clip_z / clip_w

=> ndc_z = ((eye_z * A + C) / B) / -eye_z

=> -eye_z * ndc_z = (eye_z * A + C) / B

=> -eye_z * ndc_z * B = eye_z * A + C

D = ndc_z * B

=> -eye_z * D = eye_z * A + C

=> 0 = eye_z * A + eye_z * D + C

=> 0 = eye_z * (A + D) + C

=> -C = eye_z * (A + D)

=> eye_z = -C / (A + D)

=> eye_z = -2 * ZNear * ZFar / (ZNear + ZFar + ndc_z * (ZNear - ZFar))
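
As a numeric sanity check of this derivation (assuming the standard glFrustum/gluPerspective depth row, as above), one can verify on the CPU that the solved eye_z really inverts the forward depth mapping:

```python
# Forward map: eye_z -> clip_z/clip_w -> ndc_z, using the A, B, C above.
# Backward map: the solved eye_z = -C / (A + ndc_z * B).
zn, zf = 1.0, 5000.0
A, B, C = zn + zf, zn - zf, 2.0 * zn * zf

def ndc_z_from_eye(eye_z):
    clip_z = (eye_z * A + C) / B      # third row of the projection matrix
    clip_w = -eye_z                   # fourth row
    return clip_z / clip_w

def eye_z_from_ndc(ndc_z):
    return -C / (A + ndc_z * B)       # the solution derived above

for z in (-1.0, -10.0, -500.0, -4999.0):
    assert abs(eye_z_from_ndc(ndc_z_from_eye(z)) - z) < 1e-6 * abs(z)
```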

Here's the shader (I hope the variable names are to your taste):

uniform sampler2D sceneColor;

uniform sampler2D sceneDepth;

uniform sampler2D shadowMap;

uniform mat4 projectionInverse;

#define ZNEAR 1.0

#define ZFAR 5000.0

#define A (ZNEAR + ZFAR)

#define B (ZNEAR - ZFAR)

#define C (2.0 * ZNEAR * ZFAR)

#define D (ndcPos.z * B)

#define ZEYE (-C / (A + D))

void main()

{

float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;

vec3 ndcPos = vec3 (2.0 * gl_TexCoord [0].xy - 1.0, (2.0 * fragDepth - ZNEAR - ZFAR) / (ZFAR - ZNEAR));

vec4 clipPos;

clipPos.w = -ZEYE;

clipPos.xyz = ndcPos * clipPos.w;

vec4 eyePos = projectionInverse * clipPos;

vec4 lightPos = gl_TextureMatrix [2] * eyePos;

float shadowDepth = texture2DProj (shadowMap, lightPos).r;

float light = 0.25 + ((fragDepth < shadowDepth) ? 0.75 : 0.0);

gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);

}

It doesn't work (no shadows).


Alfonse Reinheart

04-16-2011, 10:02 AM

So this is the relevant code?

I'm not sure how you got that code; I said that you had computed the NDC space position correctly (even though the variable that held it was misnamed).

It's the clip-space computation (the part where you did the division) that you got wrong.

I can solve the equation, but wouldn't know how to do this, since I don't know which values I'd actually have to pick:

It's vector/matrix multiplication; it's just shorthand for a linear system of equations. Do the multiplication by hand and see which values affect the Zclip output. Then pick those values out of the projection matrix.

And isn't v.xyzw == vec4 (v.xyz / v.w, 1.0) for all purposes of transformation and projection?

What's "v"? If "v" is the clip-space position, you're correct. The problem is that you don't have the clip-space position yet.

And the inverse of that operation is

v.xyzw = vec4(v.xyz * v.w, v.w);

You need v.w to perform the inverse operation. And you will note that, in the operation as you stated it, v.w is lost.
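
A minimal illustration of that point (plain Python, made-up values): after the perspective divide, two different clip-space points collapse onto the same xyz, so the divide is only invertible if w is carried along separately (which is exactly what gl_FragCoord.w provides, as 1/w):

```python
# The perspective divide throws away w; the inverse needs it back.
def divide(v):
    # (x, y, z, w) -> (x/w, y/w, z/w, 1)
    x, y, z, w = v
    return (x / w, y / w, z / w, 1.0)

def undivide(xyz, w):
    # The inverse operation; w must come from somewhere else.
    x, y, z = xyz
    return (x * w, y * w, z * w, w)

a = (2.0, 4.0, 6.0, 2.0)
b = (4.0, 8.0, 12.0, 4.0)
assert divide(a)[:3] == divide(b)[:3]        # both give (1, 2, 3): w is lost
assert undivide(divide(a)[:3], a[3]) == a    # recoverable only with the old w
```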

I hope the variable names are to your taste

It's not about being "to [my] taste"; it's about being accurate and self-documenting. It's more for your convenience than anything. I can't count the number of times that correct variable names have helped me figure out what some 6-month-old code was doing, or that bad variable names have obfuscated the intent of code.


karx11erx

04-16-2011, 10:15 AM

I got that code from one of the pages you had linked to.

What I wanted to say is that your comments about the variable names I have chosen were completely inappropriate and uncalled for. I understood the names well, and from the very simple code they were used in it was quite clear what they were.

You aren't seriously proposing that I am gonna start to manually do matrix computations to find out which coefficients to use, do you?

Please stop playing the teacher here, because that is how you are coming across.

Anyway, I coded that fragment shader above according to your comments, and - Tada! - no shadows at all anymore.

It was exactly that step from ndc to clip space coordinates that I was unsure about. I had read a lot of stuff about what computations OpenGL does when, and I had noticed that I should know about that w, but all my attempts to apply that had failed.

Btw, funny enough the shader you criticized as being wrong produced shadows for me.

Edit:

I said that you had computed the NDC space position correctly.

I should probably rest for a while and do something completely different ... :(

The code works quite well now, thanks for the pointers about w. There's still something a little fishy (shadow disappears when viewer moves out) though.


Alfonse Reinheart

04-16-2011, 10:52 AM

I got that code from one of the pages you had linked to.

Oh right. Sorry, I'm apparently still half-asleep.

What I wanted to say is that your comments about the variable names I have chosen were completely inappropriate and uncalled for. I understood the names well, and from the very simple code they were used in it was quite clear what they were.

What is inappropriate about pointing out that they're wrong? It doesn't matter if you understand them today; it's still wrong.

If you have this code:

vec4 normal = gl_Position;

It is certainly syntactically correct. But it is both misleading and confusing for anyone trying to read it. You may happen to understand it, but it's still wrong.

You aren't seriously proposing that I am gonna start to manually do matrix computations to find out which coefficients to use, do you?

Well, that's the brute force way. The more elegant way is to actually look at the matrix and see from inspection which values are used to compute Zclip and which ones are not. A matrix multiplication is just a linear system of equations for computing the output values.

Or, if you just want someone to give you the answer, assuming you're using the standard glFrustum/gluPerspective projection matrices, it's the last two columns of the third row of the matrix (assuming standard mathematical matrix conventions). In OpenGL terms, it's "Zclip = projmat[2][2] * Zeye + projmat[2][3]".
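
A small sketch of that coefficient-picking (Python; assumes the standard gluPerspective layout in mathematical row/column order, and the helper names are illustrative): read the two depth coefficients out of the non-inverted matrix, then solve Zndc = (m22 * Zeye + m23) / -Zeye for Zeye.

```python
# Pick the depth coefficients from the (non-inverted) projection matrix
# and use them to recover eye-space Z from NDC Z.
import math

def perspective(fovy_deg, aspect, n, f):
    fc = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[fc / aspect, 0, 0, 0],
            [0, fc, 0, 0],
            [0, 0, (n + f) / (n - f), 2.0 * n * f / (n - f)],
            [0, 0, -1, 0]]

P = perspective(90.0, 1.0, 1.0, 5000.0)
m22, m23 = P[2][2], P[2][3]           # Zclip = m22 * Zeye + m23

def eye_z(ndc_z):
    # Zndc = Zclip / -Zeye  =>  Zeye * (m22 + Zndc) = -m23
    return -m23 / (m22 + ndc_z)

# Forward-project a depth and check the coefficients recover it.
ze = -25.0
ndc = (m22 * ze + m23) / -ze
assert abs(eye_z(ndc) - ze) < 1e-9
```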


Dark Photon

04-16-2011, 12:47 PM

No clues for me? :-/

Sorry. Just ran out of time and had to quit for the day.

I guess the artifacts are a result of the texture being created with GL_CLAMP.

Probably a good bet. GL_CLAMP samples from the "border color" when you get close to the edge, which is nonsensical for a depth map (AFAIK). Maybe you can set a border color of 100% white and have the shadow comparisons always fail past the edges with this (as that might be interpreted as the light space far-clip value). Don't know.

But if you're in doubt as to whether this is causing you problems, I'd be tempted to try GL_CLAMP_TO_EDGE -- this ignores the border color and merely clamps to the edges of the texture. Then if that's it, and you need something more elegant, you can add it.


ZbuffeR

04-16-2011, 12:54 PM

karx11erx, no need to become angry: when you ask people to help and read your code, and to do it for free, it does mean you will receive feedback, good or bad.

Sorry I can't actually help you, but AFAIK everything posted by Alfonse makes sense. Something "almost working" may well be a dead end, and a better "not yet working" solution can actually be nearer the correct solution. Maybe there is something before the fragment shader that messes with .w?


karx11erx

04-16-2011, 02:42 PM

Ok, thanks Alfonse and Dark_Photon.

I am still having a problem though: The depth values from the shadow map seem to be a tad too large. The effect is that when I move the player ship close to a wall its shadow is projected on, the shadow disappears.

Window dimensions and projection settings are identical for light view and camera view.

Any ideas what could be wrong here?

ZBuffer,

I had already noticed the mistake I had made.

Sure, Alfonse is a very knowledgeable person and has given me the most valuable (and in fact the only correct) information about the subject covered here, and I appreciate his help. It would have been more of a pleasure though if he had not given comments about variable names that I find rather pointless and a bit smartassed, when the topic is a completely different one. If you look at the variable I had named "colorDepth", you will see that just because Alfonse didn't understand why I had called it that it still made sense (it's the depth value associated with the scene's color buffer). This also is neither "production" nor complex code. The tone also plays a role here. Saying "btw, it might be a good idea naming that variable fragDepth, since ..." would have made a big difference.

Great skill and knowledge are no excuse for a lack of good manners. So, yes, this is a public place, and everybody can comment and give feedback, including me.


Dark Photon

04-16-2011, 04:15 PM

Window dimensions and projection settings are identical for light view and camera view.

Any ideas what could be wrong here?

Just off hand, that sounds odd. Shadow map window (viewport) dimensions should match the shadow map res, while camera view window (viewport) dimensions should match the window you're rendering into, and they're often not exactly the same.

And that the projection settings match sounds odd too, unless the FOV you are using for your light just happens to be exactly the same as the FOV for your camera, and they both happen to be symmetric perspective. Also, the camera and light will be in different positions pointing in different directions, so that alone will give you different projections for camera frustum vs. light frustum.

I'll rescan your notes above to see if I see anything on a second go-round.


karx11erx

04-16-2011, 05:19 PM

Afaik the projection doesn't depend on view direction. The modelview takes care of that. The projection just clips the transformed vertices, and prepares ndc calculation.

I have different dimensions and FOV for camera and shadow maps (shadow maps have bigger FOV and square frustum), but to make sure everything is identical as far as possible, I gave the shadow maps the same window dimensions and FOV as the camera view.

I cannot tell why shadow map depth values are too large though.


Dark Photon

04-16-2011, 05:21 PM

I am still having a problem though: The depth values from the shadow map seem to be a tad too large. The effect is that when I move the player ship close to a wall its shadow is projected on, the shadow disappears.

You can tell a lot from "how" it disappears. Does it pop off? Does it look like the object is gradually being "sliced away" to nothing? If the former, then you've probably stopped culling into your shadow map draw pass. If the latter, then you've probably computed your shadow light-space near and/or far clip planes wrong when rendering the shadow map, or have your transforms wrong when applying them.

You can more easily see what you're doing wrong if you implement a debug mode where you render the shadow map onto the window, with black = near value and white = far value. For instance:

http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/

Look at the picture but ignore the shader math -- think when I last traced through it it wasn't right. You can use what you already do know to compute eye-space Z from the shadow map and render that.
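
A CPU-side sketch of that debug mapping (Python; the znear/zfar of 1.0 and 5000.0 come from earlier in the thread, the rest is illustrative): convert a window-space depth sample to eye-space Z with the formula already derived above, then remap linearly so the near plane shows black and the far plane white.

```python
# Map a window-space depth sample to a linear gray value for debug display:
# 0.0 (black) at the near plane, 1.0 (white) at the far plane.
znear, zfar = 1.0, 5000.0
A, B, C = znear + zfar, znear - zfar, 2.0 * znear * zfar

def depth_to_gray(window_z):
    ndc_z = 2.0 * window_z - 1.0              # assumes glDepthRange(0, 1)
    eye_z = -C / (A + ndc_z * B)              # same eye_z solution as above
    return (-eye_z - znear) / (zfar - znear)  # linear remap to [0, 1]

assert abs(depth_to_gray(0.0)) < 1e-9         # depth 0.0 -> near -> black
assert abs(depth_to_gray(1.0) - 1.0) < 1e-9   # depth 1.0 -> far  -> white
```

Note how non-linear the raw depth buffer is: with these planes, a depth sample of 0.5 still lands within a few units of the near plane, which is why the raw buffer looks almost uniformly white and the linearized view is far more readable.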


Dark Photon

04-16-2011, 05:27 PM

Afaik the projection doesn't depend on view direction. The modelview takes care of that. The projection just clips the transformed vertices, and prepares ndc calculation.

Oops. Yeah, you're right of course. Was thinking about the whole frustum, not just the projection. My bad.

In addition to the FOV and symmetric perspective needing to be the same, you'd also have to have the same near and far clip in your light and eye frustums to have the same projection, and I wouldn't think that's a given. The behavior you're describing sounds like your light-space near plane might be "slicing away" the shadow caster to where it fails to land in the shadow map. Rendering that debug view of the shadow map will show you much more clearly what's going on.


karx11erx

04-16-2011, 05:27 PM

The latter happens. It looks like the shadow by and by moves through the solid geometry and disappears behind it (so to speak) as the shadow caster approaches it.

Z near and Z far never change in my application. I've got them fixed at 1.0 and 5000.0.

Z near and Z far are mapped to 0.0 and 1.0 respectively during depth calculation, so the depth buffer contents are automatically somewhere between 0.0 and 1.0. I have been rendering the shadow map already, but you cannot tell whether it's a bit off. (Or I have completely failed to understand you.)

I wouldn't know how to screw up the transforms. After rendering, I simply read the OpenGL matrices. To invert them, I am using the inversion function code from MESA. Of course, if something's wrong there, it would explain the problems. Is there a way to have OpenGL invert the matrices and then read them? Hm ... Google might be my friend here ... no, it isn't.

If there are numerical differences between floating point handling on a GPU and the Intel/AMD FPUs I'd be in trouble.

Does OpenGL natively store the matrices as double or float?


Dark Photon

04-16-2011, 05:35 PM

By the way, sounds like above you were a little sketchy on the space transforms involved. If still a bit sketchy, at this URL:

http://www.paulsprojects.net/tutorials/smt/smt.html

there is a good diagram of where you're starting, where you're going, and how you get there:

http://www.paulsprojects.net/tutorials/smt/spaces.jpg

If still problems, post the latest version of your shader code. It's not clear whether you've modified since you last posted.


Dark Photon

04-16-2011, 05:39 PM

The latter happens. It looks like the shadow by and by moves through the solid geometry and disappears behind it (so to speak) as the shadow caster approaches it.

Z near and Z far never change in my application. I've got them fixed at 1.0 and 5000.0.

And does your shadow caster ever get within 1 unit of distance of the light source? (If so, ...ouch.)


karx11erx

04-16-2011, 05:42 PM

uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;
uniform mat4 projectionInverse;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE (-C / (A + D))

void main()
{
    float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec3 ndcPos = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
    vec4 clipPos;
    clipPos.w = -ZEYE;
    clipPos.xyz = ndcPos * clipPos.w;
    vec4 lightClipPos = gl_TextureMatrix [2] * eyePos;
    float shadowDepth = texture2DProj (shadowMap, lightClipPos).r;
    float light = 0.25 + ((fragDepth < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

gl_TextureMatrix [2] contains light projection * light model view * inverse camera model view * inverse camera projection. That's how I directly get from camera to light clip coordinates.

I am rendering scene and shadow map as fullscreen quads, hence the usage of gl_TexCoord [0].

Dark Photon

04-16-2011, 06:34 PM

...

void main()
{
    float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec3 ndcPos = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
    vec4 clipPos;
    clipPos.w = -ZEYE;
    clipPos.xyz = ndcPos * clipPos.w;
    // ------ and magic happens ------
    vec4 lightClipPos = gl_TextureMatrix [2] * eyePos;
    float shadowDepth = texture2DProj (shadowMap, lightClipPos).r;
    float light = 0.25 + ((fragDepth < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

gl_TextureMatrix [2] contains light projection * light model view * inverse camera model view * inverse camera projection. That's how I directly get from camera to light clip coordinates.

Ok, that shader shouldn't even compile. See where I've inserted "and magic happens". We compute clipPos, but then the next line uses eyePos. However, from your gl_TextureMatrix[2] description, "camera clip coordinates" (clipPos) is actually what you want here.

Also, just to clarify terminology, this transform should be:

gl_TextureMatrix [2] = NDC-to-window-space-matrix * light projection * light viewing * inverse camera viewing * inverse camera projection

There are no object coordinates involved here, and thus no modeling transforms. Peeling off the transforms in reverse order, here are the spaces we start at and bounce through with each successive transform:

camera CLIP-SPACE coordinates ->
camera EYE-SPACE coordinates ->
WORLD-SPACE coordinates ->
light EYE-SPACE coordinates ->
light CLIP-SPACE coordinates ->
light WINDOW-SPACE coordinates

And after being explicit about that, I think I see one problem. You didn't say you included the (-1..1) -> (0..1) "NDC-to-window-space" matrix in your gl_TextureMatrix[2], so with the texture2DProj (which does the .w divide to take your clip coords to NDC coords), you'd be looking up into the shadow map with -1..1 NDC texcoords. That's not right. You need to look up with 0..1 window-space texcoords. Since you said you're seeing reasonable shadows except when you move too close, I have to assume that you included this matrix but didn't mention it (?)

Note that this matrix scales not only X and Y but Z as well, to position your depths in the 0..1 range for proper shadow map comparisons. If you're missing this, I could see where you might see some strange depth comparison results possibly explaining what you're seeing.

See "scale and bias matrix" here:

http://en.wikipedia.org/wiki/Shadow_mapping

for what I'm talking about with the "NDC-to-window-space" matrix.

Dark Photon

04-16-2011, 06:59 PM

...

void main()
{
    float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec3 ndcPos = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
    ...
    vec4 lightClipPos = gl_TextureMatrix [2] * eyePos;
    float shadowDepth = texture2DProj (shadowMap, lightClipPos).r;
    float light = 0.25 + ((fragDepth < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

Also, another bug. And the bigger one. This dovetails from my last comment, which comes from not being very deliberate about what space you're in.

In the "fragDepth < shadowDepth" line, you're comparing depth values in two different spaces!!! That's a big problem. fragDepth is a "camera WINDOW-SPACE" depth value (0..1). And shadowDepth is a "light WINDOW-SPACE" depth value (0..1). These aren't the same space, so this comparison is nonsensical.

What you need to do is add that "scale and bias" matrix (i.e. NDC-to-window-space" matrix) to your transform chain. After multiplying by gl_TextureMatrix[2], this'll give you a "light WINDOW-SPACE" x, y, and z (depth) value. And instead of doing "fragDepth < shadowDepth" as your test, you do "lightWinPos.z < shadowDepth". That is:

vec4 lightWinPos = gl_TextureMatrix [2] * clipPos;
float shadowDepth = texture2DProj (shadowMap, lightWinPos.xyw).r;
float light = 0.25 + ((lightWinPos.z < shadowDepth) ? 0.75 : 0.0);

It is for exactly this reason (easy to get confused) that when I'm passing positions and normals around, I always use the convention pos_<space> or normal_<space> for the variable identifiers so I can keep it straight what space they're in. For instance:

pos_win      (implicitly camera frame relative)
pos_clip     (" " ")
pos_eye      (" " ")
pos_lt_eye   (now light frame relative)
pos_lt_clip  (" " ")
pos_lt_win   (" " ")

Much harder to trip up this way, and even if you do, much easier to spot errors when tracing the code.

karx11erx

04-17-2011, 12:26 AM

It was 2:30 am for me when I posted this, and I've got a particularly bad case of influenza.

uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;
uniform mat4 projectionInverse;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE (-C / (A + D))

void main()
{
    float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec3 ndcPos = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
    vec4 cameraClipPos;
    cameraClipPos.w = -ZEYE;
    cameraClipPos.xyz = ndcPos * cameraClipPos.w;
    vec4 lightClipPos = gl_TextureMatrix [2] * cameraClipPos;
    float shadowDepth = texture2DProj (shadowMap, lightClipPos).r;
    float light = 0.25 + ((fragDepth < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

Here is the magic:

gl_TextureMatrix [2] contains bias * light projection * light model view * inverse camera model view * inverse camera projection. That's how I directly get from camera to light clip coordinates.

I forgot to mention the bias matrix in my erroneous post, but it is involved.

vec4 lightWinPos = gl_TextureMatrix [2] * clipPos;

As Alfonse has pointed out, that variable name is misleading, since after the multiplication with gl_TextureMatrix [2] you have the light's clip coordinate, not the window coordinate.

karx11erx

04-17-2011, 02:38 PM

I was wrong about lightWinPos. Since the bias matrix has been applied, gl_TextureMatrix [2] * cameraClipPos does indeed yield the light window space coordinate.

Dark Photon

04-17-2011, 03:20 PM

It was 2:30 am for me when I posted this, and I've got a particular bad case of influenza

Ouch! Sorry to hear that. :(

Here is the magic: gl_TextureMatrix [2] contains bias * light projection * light model view * inverse camera model view * inverse camera projection. That's how I directly get from camera to light clip coordinates.

I forgot to mention the bias matrix in my erroneous post, but it is involved.

Ok. As you mostly said in your second post above, the bias matrix actually takes 4D light CLIP-SPACE to a 4D light WINDOW-SPACE. Then when you do the perspective divide, you're in a 3D light WINDOW-SPACE.

Which also highlights another important point: instead of your shadow comparison being "(lightWinPos.z < shadowDepth)", it probably needs to be:

(lightWinPos.z/lightWinPos.w < shadowDepth)

so you're comparing 3D light WINDOW-SPACE depth to 3D light WINDOW-SPACE depth.

Or, to do the same thing in a slightly different form which gets rid of the potential divide-by-zero and ugly denormalized numbers creeping into your math:

(lightWinPos.z < lightWinPos.w * shadowDepth)

Intuitively that makes sense.

Pardon my not being crystal clear here and iteratively working with you toward the solution as I haven't actually implemented point light source shadows, just directional light source shadows (where there is no perspective involved in the light projection).

karx11erx

04-17-2011, 03:55 PM

I just mentioned that to explain why I am making such stupid mistakes. It's hard to think when you're tired and your head hurts. :)

Thank you very much for your help so far.

I am not comparing lightWinPos.z with shadowDepth, but rather the depth value from the corresponding scene buffer fragment. The big question for me is why that doesn't seem to work right.

Edit:

Your comments have led me on the right track. This shader does the trick (bias not in gl_TextureMatrix [2]):

uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform sampler2D shadowMap;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define A 5001.0 //(ZNEAR + ZFAR)
#define B 4999.0 //(ZNEAR - ZFAR)
#define C 10000.0 //(2.0 * ZNEAR * ZFAR)
#define D (cameraNDC.z * B)
#define ZEYE -10000.0 / (5001.0 + cameraNDC.z * 4999.0) //-(C / (A + D))

void main() {
    float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec3 cameraNDC = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;
    vec4 cameraClipPos;
    cameraClipPos.w = -ZEYE;
    cameraClipPos.xyz = cameraNDC * cameraClipPos.w;
    vec4 lightClipPos = gl_TextureMatrix [2] * cameraClipPos;
    vec3 lightNDC = (lightClipPos.xyz / lightClipPos.w) * 0.5 + 0.5;
    float shadowDepth = texture2D (shadowMap, lightNDC).r;
    float light = 0.25 + ((lightNDC < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

My suspicion is that texture2DProj (which I had used before) hasn't worked right (probably because I haven't setup texture generation beforehand?)

Dark Photon

04-17-2011, 06:38 PM

Your comments have led me on the right track. This shader does the trick (bias not in gl_TextureMatrix [2]):

...

vec4 lightClipPos = gl_TextureMatrix [2] * cameraClipPos;
vec3 lightNDC = (lightClipPos.xyz / lightClipPos.w) * 0.5 + 0.5;
float shadowDepth = texture2D (shadowMap, lightNDC).r;
float light = 0.25 + ((lightNDC < shadowDepth) ? 0.75 : 0.0);
gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}

Good deal! That's looking pretty good! And it addresses most of my concerns. Ignoring optimizations, just a couple of things you might consider tweaking for readability and correctness:

1) "texture2D( shadowMap, lightNDC )" I'd use lightNDC.xy to be explicit. Frankly, I'm not sure how it compiles as-is, since texture2D only accepts a vec2...

2) "lightNDC". That variable actually contains light WINDOW-SPACE coords, since you roll the 0.5/0.5 bias into it. So I might name it lightWin.

3) "(lightNDC < shadowDepth)". I'd use "(lightNDC.z < shadowDepth)" to be explicit. And frankly I'm a bit surprised that it even compiles without that.

4) "(lightClipPos.xyz / lightClipPos.w)". This does take you to light NDC-SPACE. But what about the case where the fragment position is at z=0 in light EYE-SPACE? That is, it's in the plane of the light source? There dividing by lightClipPos.w will give you a divide by zero, introducing a nasty denormalized number in your shader and/or causing all hell to break loose with your math. Probably a good idea to protect against that. See my previous post for one method for doing this.

5) You might consider turning on depth comparisons for your texture, using a sampler2DShadow instead of sampler2D, and doing the lookup with a shadow2D (or shadow2DProj) and let the hardware do the depth comparison for you! With this you can get PCF filtering of your shadow lookups for free on some hardware merely by setting LINEAR filtering on the depth texture!

My suspicion is that texture2DProj (which I had used before) hasn't worked right (probably because I haven't setup texture generation beforehand?)

I don't think so. texture2DProj has nothing to do with texgen (texture coordinate generation), and in fact when you plug in your own shaders you've effectively disabled the built-in texgen.

Functionally, texture2DProj is simple: texture2DProj = texture2D, but with texcoord.xyz divided by texcoord.w internally (if you pass it a vec4 texcoord) before the texture lookup (and comparison, if enabled) is performed.

I "think" the only reason why texture2DProj exists is because there at least used to be (maybe still is) dedicated hardware in the GPU to do the divide for you if you wanted it, so it "might" be a little faster to use texture2DProj than to use texture2D and do your own divide in the shader.

karx11erx

04-18-2011, 02:20 AM

I was using lightNDC.z in the comparison. The GLSL compiler automatically casts non-vec2 parameters to texture2D to vec2 and issues a warning about it. I only see the warnings when I actually have errors in a shader and examine the compiler output.

I think that I will still get in hot water when z == 0.0 because I have to divide lightClipPos.xy by it to access the shadow depth.

I think lightWinPos should have xy scaled with the actual viewport dimensions. So my ndcPos is somewhere between true NDC and window coordinates, isn't it?

shadow2DProj compares against the depth from the frame buffer, doesn't it? And that doesn't work for me.

When I am where I want to get with my shadow mapping, I will blur the shadow maps with a Gaussian blur shader which I expect to look better than PCF generated ones.

BionicBytes

04-18-2011, 02:46 AM

shadow2DProj compares against the depth from the frame buffer, doesn't it? And that doesn't work for me.

Not necessarily as you have stated it.

The orange book states:

shadow2Dproj computes the texture coord.z / coord.w and compares the third texture coord component (.z) with the value read from the bound depth sampler you provided.

Therefore it's important you have the correct 3rd component for the shadow texture coord and a proper depth texture bound.

karx11erx

04-18-2011, 03:32 AM

I have bound the current scene's color and depth buffers and the shadow map (need the color buffer because the shader outputs scene + shadow).

So shadow2DProj compares scene depth to shadow depth for me, doesn't it?

BionicBytes

04-18-2011, 05:16 AM

I have bound the current scene's color and depth buffers and the shadow map (need the color buffer because the shader outputs scene + shadow).

Is that 3 bound textures then: color + depth buffer + shadow map?

Usually when rendering a shadow, you set up the camera and projection from the light's point of view and render the scene into an FBO which only contains a depth attachment. The only time you would have a colour attachment on this FBO is when you are using an alternative shadow generation technique where you don't just need gl_Position.z stored - but some custom values instead, e.g. VSM or SAVSM, in which case the colour attachment is a 32-bit two-channel float.

What you then call the 'shadowmap' is up to you: for hardware PCF shadowmapping the FBO depth buffer is the shadowmap; for VSM the colour attachment is the shadowmap.

So shadow2DProj compares scene depth to shadow depth for me, doesn't it?

Be careful with what you intend here - as I tried to point out previously. The texture coord you supply is important because the texcoord.z will be compared with the bound texture sampler value. If you don't want this behaviour then switch to texture2DProj instead. The advantage of shadow2DProj is the h/w-assisted z/w divide and the h/w-assisted PCF filtering and compare.

Back to your question - I don't know. It depends upon your texture coordinate (z component) and which of the 3 textures you bound as the 'shadow map'. I can tell you that shadow2DProj is expecting a depth_component texture to be bound and declared as sampler2DShadow, and so is not a suitable instruction for VSM shadow mapping, because you need to bind the RG32F colour attachment instead.

Therefore you should be able to answer the question yourself:

1. Have I bound the light FBO depth buffer texture (a depth_component format )

2. Have I declared the sampler as sampler2DShadow

3. Have I enabled GL_LINEAR filtering (for free h/w PCF filtering)

4. Have I set the texture parameters:

TEXTURE_COMPARE_MODE := GL_COMPARE_R_TO_TEXTURE;

TEXTURE_COMPARE_FUNC := GL_LEQUAL;

DEPTH_TEXTURE_MODE := GL_LUMINANCE;

5. Have I set the clamp modes to CLAMP_TO_EDGE or CLAMP_TO_BORDER (and set the border colour to white) to prevent shadow out of bound errors

6. Am I generating the shadow texture coordinates correctly so that the Proj will divide the z/w for me and my z component will be compared to the depth written in the depth texture.

You see, on note 6, if you need to play around with the depth value coming out of the depth texture (to convert it in any way - e.g. to NDC space), then shadow2DProj or shadow2D is not going to work. You'd need to use texture2DProj instead.
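Items 1-5 of that checklist might be set up roughly as follows (GL 2.x-era C, a sketch only, not from the thread; shadowTex is a hypothetical handle to an already-created GL_DEPTH_COMPONENT texture):

```c
/* Hypothetical setup for hardware depth compare + free PCF on a depth
   texture; shadowTex is assumed to exist already. */
glBindTexture (GL_TEXTURE_2D, shadowTex);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  /* free h/w PCF */
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
```

In the shader, the texture would then be declared as sampler2DShadow and sampled with shadow2D/shadow2DProj.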

Is that 3 bound textures then: color + depth buffer + shadow map?

Usually when rendering a shadow, you setup the camera and projection from the lights point of view and render the scene into an FBO which only contains a depth attachment. The only time you would have a colour attachment to this FBO is when you are using an alternative shadow generation technique where you don't just need gl_Position.z stored - but some custom values instead, eg VSM or SAVSM in which case the colour attachment is a 32-bit two channel float.

What you then call the 'shadowmap' is up to you: for hardware PCF shadowmapping the FBO depth buffer is the shadowmap; for VSM the colour attachment is the shadowmap.

So shadow2DProj compares scene depth to shadow depth for me, doesn't it?

Becareful with what you intend here - as I tried to point out previously. The texture coord you supply is inportant because the texcoord.z will be compared with the bound texture sampler value. If you don't want this behaviour then switch to texture2DProj instead. The advantage of shadow2DProj is the h/w assisted z/w divide and the h/w assisted PCF filtering and compare.

Back to your question - I don't know. It depends upon your texure coordinate (z component) and which of the 3 textures you bound as the 'shadow map'. I can tell you that shadow2DProj is expecting a depth_component texture to be bound and decalared as sampler2DShadow and so is not a suitable instruction for VSM shadowmapping because you need to bind the RG32F colour attachment instead.

Therefore you should be able to answer the question yourself:

1. Have I bound the light FBO depth buffer texture (a depth_component format)?

2. Have I declared the sampler as sampler2DShadow?

3. Have I enabled GL_LINEAR filtering (for free h/w PCF filtering)?

4. Have I set the texture parameters:

TEXTURE_COMPARE_MODE := GL_COMPARE_R_TO_TEXTURE;

TEXTURE_COMPARE_FUNC := GL_LEQUAL;

DEPTH_TEXTURE_MODE := GL_LUMINANCE;

5. Have I set the clamp modes to CLAMP_TO_EDGE or CLAMP_TO_BORDER (and set the border colour to white) to prevent out-of-bounds shadow errors?

6. Am I generating the shadow texture coordinates correctly, so that the Proj will divide z/w for me and my z component will be compared to the depth written in the depth texture?

You see, on note 6: if you need to play around with the depth value coming out of the depth texture (to convert it in any way, e.g. to NDC space), then shadow2DProj or shadow2D is not going to work. You'd need to use texture2DProj instead.
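For clarity, what shadow2DProj does per sample (minus the PCF filtering) can be emulated in plain C++. This is just an illustrative sketch: shadowCompare is a hypothetical helper, with texZ/texW standing in for the texture coordinate's z and w components and storedDepth for the fetched depth-texture value.

```cpp
#include <algorithm>
#include <cassert>

// Emulates the GL_COMPARE_R_TO_TEXTURE / GL_LEQUAL compare that shadow2DProj
// performs for a single sample: divide by w, clamp the reference depth, then
// return 1.0 (lit) when the reference depth is <= the stored depth.
float shadowCompare (float texZ, float texW, float storedDepth)
{
    float refDepth = texZ / texW;                    // the 'Proj' divide
    refDepth = std::min (std::max (refDepth, 0.0f), 1.0f);
    return (refDepth <= storedDepth) ? 1.0f : 0.0f;  // GL_LEQUAL compare
}
```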

karx11erx

04-18-2011, 05:32 AM

BionicBytes,

I don't know whether you have read the entire thread, but I am trying to render a shadow into a frame buffer as a post process. So I render the shadow map as depth only in one render pass, then render the scene w/o shadow, then apply the shadow map to the scene - a kind of deferred shadowing.

The post process shader blends the shadow map into the scene and returns the darkened or lit scene fragments depending on whether they are in shadow or not.

BionicBytes

04-18-2011, 06:33 AM

I have implemented deferred shadowing into my engine to complement the deferred lighting.

In this way not only is the lighting decoupled from the geometry, but the shadow generation techniques (VSM, PCF, SAVSM, CVSM, etc) are decoupled from the lighting shaders.

After the various shadow maps have been created for each scene light (using VSM, PCF, etc.) a 2D post process is used to create a shadow mask - a 4-channel RGBA8 texture which gathers up to 4 scene lights' shadow contributions and is then accessed during the lighting phase. It is only during this post process that the shadow comparisons take place; the results of the comparisons are written to a colour texture (aka the shadow mask) as 'shadow occlusion values'. This texture can be blurred safely, unlike shadow maps.

During the lighting phase, the shadow mask texture (an RGBA8 colour) is then bound and accessed in the various lighting shaders, and the beauty is that I only ever need to access the RGBA8 shadow mask texture and therefore only need one variant of the lighting shader, no matter which technique was used to generate the shadows in the first place.
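A sketch of the shadow-mask packing just described, with packShadowMask/unpackShadowMask as hypothetical helpers: each of up to 4 lights owns one RGBA8 channel holding its occlusion value.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Pack up to 4 lights' occlusion values [0..1] into one RGBA8 texel,
// one channel per light, with round-to-nearest quantisation.
std::array<uint8_t, 4> packShadowMask (const float occlusion [4])
{
    std::array<uint8_t, 4> texel;
    for (int i = 0; i < 4; i++)
        texel [i] = uint8_t (occlusion [i] * 255.0f + 0.5f);
    return texel;
}

// Read back the occlusion value for one light from its channel.
float unpackShadowMask (const std::array<uint8_t, 4>& texel, int lightIndex)
{
    return texel [lightIndex] / 255.0f;
}
```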

Now...I have been reading this thread with great interest. I may actually have been slow off the mark, but I did not realise you were creating a deferred shadow system (although you did say something about a post-process which I did not cotton on to). Does your system match what I am doing (which came from Crysis and other games)?

The reason why I ask all of this is that in the deferred system I store eye-space vertex positions of the geometry in the G-buffer (rather than having to reconstruct from scene depth). When rendering the scene from the light's POV the gl_Vertex will get transformed into eye-space. Therefore to calculate the shadow map texture coordinates you need: scale bias * light projection matrix * light view matrix * inv scene camera matrix * gl_Vertex of the G-buffer

I use the following calculation to generate a matrix to pass to the shadow compare shader (the one creating the post-process shadow mask)

Procedure setShadowMatrix (var projection,view: TMatrix);

const offset: GLMatrixf = (0.5,0,0,0, 0,0.5,0,0, 0,0,0.5,0, 0.5,0.5,0.5,1);

begin

glLoadMatrixf (@offset[0]); //convert clip space to texture space

glMultMatrixf (@projection.glmatrixf[0]); //light's projection

glMultMatrixf (@view.glmatrixf[0]); //light's camera

glMultMatrixf (@CameraMatrix_inv.glmatrixf[0]); //scene inv camera

glGetFloatv(GL_TEXTURE_MATRIX, @shadowmatrix.glmatrixf[0]);

end;

Hence what I just said above: scale bias * lightProjection matrix * light view matrix * Inv scene camera matrix

The idea here is to end up in eye space because of the next piece below:

//--------shadowing apply: texture compare GLSL shader snippet----------------------------------------------

//Shadow Texturematrix[0]=scale_bias * light project matrix * light camera view * scene camera view_inverse

shadowCoord = gl_TextureMatrix[0] * vec4 (ecEyeVertex.xyz, 1.0); //ecEyeVertex.w must be 1.0 or projected shadows not correct

shadowCoordPostW = shadowCoord / shadowCoord.w; //only need this when sampler is not shadow2D variant

The idea in the shadow compare is to compare the z of the original scene (the eye-space position as stored in the G-buffer) against the light's z value (in the shadow map texture). The trick is to ensure the computed shadow texture coordinates contain the original scene's vertex at any one pixel. Since my G-buffer stores eye-space vertex positions, I needed to undo the original eye-space camera transformation (hence the multiply by the inverse camera matrix) to obtain object-space gl_Vertex.xyz for the original scene.

This is accomplished with: gl_TextureMatrix[0] * vec4 (ecEyeVertex.xyz, 1.0);

So now I have: Shadow Texture coords=scale_bias * light project matrix * light camera view * scene camera view_inverse * ecEyeVertex.xyz

So I have obtained the position of the original scene vertex as projected by the LIGHT's camera (light eye-space), converted to texture coordinates.

This is ready to be compared to the light's depth texture using the shadow2DProj command.

So, to be explicit: the texture coordinates now contain the standard scene vertex - but transformed by the light's camera -

and the shadow map contains the scene depth - also transformed by the light's camera.

Both of these are in texture-space [0..1] range, due to the scale bias * lightProjection matrix transforms, and because both are in the same space the comparison is valid.
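The role of the scale-bias ('offset') matrix in that remap is easy to verify numerically. Here is a small plain-C++ sketch mirroring the column-major GL matrix layout (Vec4 and mulMV are ad-hoc helpers, not part of my engine): the matrix maps every NDC component from [-1,1] to texture space [0,1].

```cpp
#include <cassert>

// Minimal column-major 4x4 helpers (same memory layout as the GL matrix stack).
struct Vec4 { float x, y, z, w; };

Vec4 mulMV (const float m [16], Vec4 v)
{
    return Vec4 {
        m [0] * v.x + m [4] * v.y + m [8]  * v.z + m [12] * v.w,
        m [1] * v.x + m [5] * v.y + m [9]  * v.z + m [13] * v.w,
        m [2] * v.x + m [6] * v.y + m [10] * v.z + m [14] * v.w,
        m [3] * v.x + m [7] * v.y + m [11] * v.z + m [15] * v.w };
}

// The 'offset' constant from the Pascal snippet, column-major:
// scales by 0.5 and translates by 0.5 on x, y and z.
const float scaleBias [16] = { 0.5f, 0, 0, 0,   0, 0.5f, 0, 0,
                               0, 0, 0.5f, 0,   0.5f, 0.5f, 0.5f, 1 };
```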

OK, so why the long post?

Well, I think you may have tried to shortcut the process by going directly into clip space (just my opinion). You have also tried to compute the eye-space position of the vertex from NDC. The problem is that each step along the way needs to be verified and checked. Since you generally can't debug GLSL, it's impossible to check each step - hence some of the problems.

I have tried to explain what I do, and in doing so help you with yours, even if I am using eye-space for everything and the convenience of the deferred G-buffer. When I first started all of this I was convinced that OpenGL fixed functionality was nuts doing everything in eye space, and that I would be better off using whatever space I wanted. But, more and more, eye-space is very convenient for all sorts of reasons. Perhaps I am suggesting you do things in eye-space throughout - that WILL simplify all your calculations and comparisons.

I would like to see you get this working with the least amount of effort and time (even if that means eye-space for now). Later on you can show us all just how to do this in NDC or clip space and show us why that's better (even if it's just a convenience for you).

karx11erx

04-18-2011, 08:03 AM

Thanks for your input. I am doing this similarly, I just compute the vertex in the shader for now. I have been considering storing camera-space coordinates in a second render target when rendering my scene, but that is getting complicated because I am already applying a few shaders when rendering the scene; I haven't spent much thought yet on how to add a shader that would only fill an extra render target with the coordinates, or how to expand all the other shaders appropriately (and I want to keep the number of render passes as low as possible).

Shadow map blending actually works quite well now. The next step will be to render the shadow maps to an empty color buffer using the scene depths to properly discard fragments, blur the resulting color buffer (which should contain the shadows as RGB), and then just slap that texture on the scene - just to have soft, blurred shadows.

My far goal is to implement deferred lighting with shadow maps. I will need to change shadow map handling for that a bit, but until then the above route is the one I have chosen to go.

The biggest problem I am facing right now is that rendering a shadow map with a FOV of 180 deg doesn't show the 180 deg view of the scene the shadow light source should see. I think I will switch to singular (for directed lights) and dual (for point lights) paraboloid shadow maps, since these give you a 360 deg mapping but save you the 2 - 4 extra shadow map renders required for cubic shadow maps (http://wiki.delphigl.com/index.php/GLSL_Licht_und_Schatten#Beschleunigtes_Rendern; German - sorry). Another limitation I will add is that only moving lights will cast shadows: in my application these are the only light sources I have no lightmaps for, so the app has to do the full lighting for these - hooking up shadow maps there seems logical. These lights will also create moving shadows, which should add a lot of dynamics to the game. Since the game has a lot of light sources, only the lights closest to the player will cast shadows, and a shadow will be lighter the further away it is from the light source. That should avoid overly hard effects with shadows suddenly popping in and out of the scene.
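The distance-based fade I have in mind matches the light = sqrt(min(dist, range) / range) term used in my shader below: a shadowed fragment is fully darkened right at the light and fades to unshadowed at the range limit. As a plain C++ sketch (shadowFade is just an illustrative helper):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Light factor applied to fragments that are in shadow: 0.0 = fully dark
// shadow (at the light), rising to 1.0 = no darkening at lightRange and
// beyond, so shadows fade out smoothly instead of popping.
float shadowFade (float lightDist, float lightRange)
{
    return std::sqrt (std::min (lightDist, lightRange) / lightRange);
}
```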

BionicBytes

04-18-2011, 08:46 AM

So your shadow swimming has been fixed?

You are able to use Shadow2DProj if you so wish?

Can you post your final code to:

1. Produce the shadow matrix texture coords

2. GLSL shader to reconstruct EYE space from scene depth texture

3. GLSL shader to perform shadow map lookup/comparison using eye, NDC or clip space - please advise which space it's working in.

What do you mean by a 180 degree FOV? You wouldn't put that into gluPerspective, would you? It just seems a rather large value.

karx11erx

04-18-2011, 09:15 AM

Swimming fixed: Yes

shadow2DProj working: No

You may need to ponder a little on the implementation details of the following code, but it shouldn't be too hard to understand.

OpenGL matrix implementation:

#ifndef _OGLMATRIX_H

#define _OGLMATRIX_H

#include <string.h>

#include "glew.h"

class COGLMatrix {

private:

double m_data [16];

GLfloat m_dataf [16];

public:

inline COGLMatrix& operator= (const COGLMatrix& other) {

memcpy (m_data, other.m_data, sizeof (m_data));

return *this;

}

inline COGLMatrix& operator= (const double other [16]) {

memcpy (m_data, other, sizeof (m_data));

return *this;

}

COGLMatrix Inverse (void);

COGLMatrix& Get (GLuint nMatrix, bool bInverse = false) {

glGetDoublev (nMatrix, (GLdouble*) m_data);

if (bInverse)

*this = Inverse ();

return *this;

}

void Set (void) { glLoadMatrixd ((GLdouble*) m_data); }

void Mul (void) { glMultMatrixd ((GLdouble*) m_data); }

double& operator[] (int i) { return m_data [i]; }

GLfloat* ToFloat (void) {

for (int i = 0; i < 16; i++)

m_dataf [i] = GLfloat (m_data [i]);

return m_dataf;

}

COGLMatrix& operator* (double factor) {

for (int i = 0; i < 16; i++)

m_data [i] *= factor;

return *this;

}

double Det (COGLMatrix& other) { return m_data [0] * other [0] + m_data [1] * other [4] + m_data [2] * other [8] + m_data [3] * other [12]; }

};

#endif //_OGLMATRIX_H

Inverse function:

COGLMatrix COGLMatrix::Inverse (void)

{

COGLMatrix im;

im [0] = m_data [5] * m_data [10] * m_data [15] - m_data [5] * m_data [11] * m_data [14] - m_data [9] * m_data [6] * m_data [15] + m_data [9] * m_data [7] * m_data [14] + m_data [13] * m_data [6] * m_data [11] - m_data [13] * m_data [7] * m_data [10];

im [4] = -m_data [4] * m_data [10] * m_data [15] + m_data [4] * m_data [11] * m_data [14] + m_data [8] * m_data [6] * m_data [15] - m_data [8] * m_data [7] * m_data [14] - m_data [12] * m_data [6] * m_data [11] + m_data [12] * m_data [7] * m_data [10];

im [8] = m_data [4] * m_data [9] * m_data [15] - m_data [4] * m_data [11] * m_data [13] - m_data [8] * m_data [5] * m_data [15] + m_data [8] * m_data [7] * m_data [13] + m_data [12] * m_data [5] * m_data [11] - m_data [12] * m_data [7] * m_data [9];

im [12] = -m_data [4] * m_data [9] * m_data [14] + m_data [4] * m_data [10] * m_data [13] + m_data [8] * m_data [5] * m_data [14] - m_data [8] * m_data [6] * m_data [13] - m_data [12] * m_data [5] * m_data [10] + m_data [12] * m_data [6] * m_data [9];

im [1] = -m_data [1] * m_data [10] * m_data [15] + m_data [1] * m_data [11] * m_data [14] + m_data [9] * m_data [2] * m_data [15] - m_data [9] * m_data [3] * m_data [14] - m_data [13] * m_data [2] * m_data [11] + m_data [13] * m_data [3] * m_data [10];

im [5] = m_data [0] * m_data [10] * m_data [15] - m_data [0] * m_data [11] * m_data [14] - m_data [8] * m_data [2] * m_data [15] + m_data [8] * m_data [3] * m_data [14] + m_data [12] * m_data [2] * m_data [11] - m_data [12] * m_data [3] * m_data [10];

im [9] = -m_data [0] * m_data [9] * m_data [15] + m_data [0] * m_data [11] * m_data [13] + m_data [8] * m_data [1] * m_data [15] - m_data [8] * m_data [3] * m_data [13] - m_data [12] * m_data [1] * m_data [11] + m_data [12] * m_data [3] * m_data [9];

im [13] = m_data [0] * m_data [9] * m_data [14] - m_data [0] * m_data [10] * m_data [13] - m_data [8] * m_data [1] * m_data [14] + m_data [8] * m_data [2] * m_data [13] + m_data [12] * m_data [1] * m_data [10] - m_data [12] * m_data [2] * m_data [9];

im [2] = m_data [1] * m_data [6] * m_data [15] - m_data [1] * m_data [7] * m_data [14] - m_data [5] * m_data [2] * m_data [15] + m_data [5] * m_data [3] * m_data [14] + m_data [13] * m_data [2] * m_data [7] - m_data [13] * m_data [3] * m_data [6];

im [6] = -m_data [0] * m_data [6] * m_data [15] + m_data [0] * m_data [7] * m_data [14] + m_data [4] * m_data [2] * m_data [15] - m_data [4] * m_data [3] * m_data [14] - m_data [12] * m_data [2] * m_data [7] + m_data [12] * m_data [3] * m_data [6];

im [10] = m_data [0] * m_data [5] * m_data [15] - m_data [0] * m_data [7] * m_data [13] - m_data [4] * m_data [1] * m_data [15] + m_data [4] * m_data [3] * m_data [13] + m_data [12] * m_data [1] * m_data [7] - m_data [12] * m_data [3] * m_data [5];

im [14] = -m_data [0] * m_data [5] * m_data [14] + m_data [0] * m_data [6] * m_data [13] + m_data [4] * m_data [1] * m_data [14] - m_data [4] * m_data [2] * m_data [13] - m_data [12] * m_data [1] * m_data [6] + m_data [12] * m_data [2] * m_data [5];

im [3] = -m_data [1] * m_data [6] * m_data [11] + m_data [1] * m_data [7] * m_data [10] + m_data [5] * m_data [2] * m_data [11] - m_data [5] * m_data [3] * m_data [10] - m_data [9] * m_data [2] * m_data [7] + m_data [9] * m_data [3] * m_data [6];

im [7] = m_data [0] * m_data [6] * m_data [11] - m_data [0] * m_data [7] * m_data [10] - m_data [4] * m_data [2] * m_data [11] + m_data [4] * m_data [3] * m_data [10] + m_data [8] * m_data [2] * m_data [7] - m_data [8] * m_data [3] * m_data [6];

im [11] = -m_data [0] * m_data [5] * m_data [11] + m_data [0] * m_data [7] * m_data [9] + m_data [4] * m_data [1] * m_data [11] - m_data [4] * m_data [3] * m_data [9] - m_data [8] * m_data [1] * m_data [7] + m_data [8] * m_data [3] * m_data [5];

im [15] = m_data [0] * m_data [5] * m_data [10] - m_data [0] * m_data [6] * m_data [9] - m_data [4] * m_data [1] * m_data [10] + m_data [4] * m_data [2] * m_data [9] + m_data [8] * m_data [1] * m_data [6] - m_data [8] * m_data [2] * m_data [5];

double det = Det (im);

if (det == 0.0)

return *this;

det = 1.0 / det;

for (int i = 0; i < 16; i++)

im [i] *= det;

return im;

}
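For what it's worth, an inverse like this is easy to unit-test by multiplying the matrix by its computed inverse and comparing against identity, or against an independent reference implementation. A compact Gauss-Jordan inverse with partial pivoting could serve as that reference (invertGJ is a hypothetical helper, not part of the class above; matrices are plain double[16] in column-major GL order):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Independent reference inverse (Gauss-Jordan with partial pivoting).
// Returns false for (near-)singular input; 'out' then stays unspecified.
bool invertGJ (const double* m, double* out)
{
    double a [4][8];
    // Build the augmented matrix [M | I]; rows/cols transposed from
    // column-major storage into conventional row-major indexing.
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++) {
            a [r][c] = m [c * 4 + r];
            a [r][c + 4] = (r == c) ? 1.0 : 0.0;
        }
    for (int col = 0; col < 4; col++) {
        int pivot = col;                         // pick largest pivot
        for (int r = col + 1; r < 4; r++)
            if (std::fabs (a [r][col]) > std::fabs (a [pivot][col]))
                pivot = r;
        if (std::fabs (a [pivot][col]) < 1e-12)
            return false;                        // singular
        for (int c = 0; c < 8; c++)
            std::swap (a [col][c], a [pivot][c]);
        double inv = 1.0 / a [col][col];         // normalise pivot row
        for (int c = 0; c < 8; c++)
            a [col][c] *= inv;
        for (int r = 0; r < 4; r++)              // eliminate column
            if (r != col) {
                double f = a [r][col];
                for (int c = 0; c < 8; c++)
                    a [r][c] -= f * a [col][c];
            }
    }
    for (int r = 0; r < 4; r++)                  // right half is M^-1
        for (int c = 0; c < 4; c++)
            out [c * 4 + r] = a [r][c + 4];
    return true;
}
```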

Shadow matrix texture coords:

// The following code is called after the modelview and projection matrices have been filled with the proper values. modelView and projection are instances of a simple class for handling OpenGL matrices in the application. Not using the bias matrix - this is handled by the shader.

static void ComputeShadowTransformation (int nLight)

{

modelView.Get (GL_MODELVIEW_MATRIX); // load the modelview matrix

projection.Get (GL_PROJECTION_MATRIX); // load the projection matrix

glActiveTexture (GL_TEXTURE1 + nLight);

projection.Set ();

modelView.Mul ();

lightManager.ShadowTransformation (nLight).Get (GL_TEXTURE_MATRIX);

}

Compute inverse modelview * inverse projection. Inverse code from MESA source code.

ogl.SetupTransform ();

lightManager.ShadowTransformation (-1).Get (GL_MODELVIEW_MATRIX, true); // inverse

lightManager.ShadowTransformation (-2).Get (GL_PROJECTION_MATRIX, true);

ogl.ResetTransform ();

glPushMatrix ();

lightManager.ShadowTransformation (-1).Set ();

lightManager.ShadowTransformation (-2).Mul ();

lightManager.ShadowTransformation (-3).Get (GL_MODELVIEW_MATRIX, false); // inverse (modelview * projection)

glPopMatrix ();

Fragment shader. The shader also makes the shadow lighter depending on the distance of the geometry to the light. The shader does what the bias matrix would do; I can't make it work otherwise.

uniform sampler2D sceneColor;

uniform sampler2D sceneDepth;

uniform sampler2D shadowMap;

uniform mat4 modelviewProjInverse;

uniform vec3 lightPos;

uniform float lightRange;

#define ZNEAR 1.0

#define ZFAR 5000.0

#define A 5001.0 //(ZNEAR + ZFAR)

#define B -4999.0 //(ZNEAR - ZFAR)

#define C 10000.0 //(2.0 * ZNEAR * ZFAR)

#define D (cameraNDC.z * B)

#define ZEYE -10000.0 / (5001.0 - cameraNDC.z * 4999.0) //-(C / (A + D))

void main()

{

float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;

vec3 cameraNDC = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;

vec4 cameraClipPos;

cameraClipPos.w = -ZEYE;

cameraClipPos.xyz = cameraNDC * cameraClipPos.w;

vec4 lightClipPos = gl_TextureMatrix [2] * cameraClipPos;

float w = abs (lightClipPos.w);

// avoid divides by too small w and clip the shadow texture access to avoid artifacts

float shadowDepth =

((w < 0.00001) || (abs (lightClipPos.x) > w) || (abs (lightClipPos.y) > w))

? 2.0

: texture2D (shadowMap, lightClipPos.xy / (lightClipPos.w * 2.0) + 0.5).r;

float light = 1.0;

if (lightClipPos.z >= (lightClipPos.w * 2.0) * (shadowDepth - 0.5)) {

vec4 worldPos = modelviewProjInverse * cameraClipPos;

float lightDist = length (lightPos - worldPos.xyz);

light = sqrt (min (lightDist, lightRange) / lightRange);

}

gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);

}
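Since ZNEAR/ZFAR are baked into those defines, it's worth sanity-checking the eye-space z reconstruction outside GLSL. A minimal C++ sketch under the standard OpenGL perspective conventions (note that B = ZNEAR - ZFAR is negative, so the ZEYE denominator is 5001.0 - cameraNDC.z * 4999.0; eyeToNdcZ/ndcToEyeZ are illustrative helpers):

```cpp
#include <cassert>
#include <cmath>

// With the standard OpenGL perspective matrix:
//   z_ndc = (A + C / z_eye) / (ZFAR - ZNEAR)
// and inverting gives the shader's ZEYE:
//   z_eye = -C / (A + z_ndc * B)    where A = f+n, B = n-f, C = 2fn
const double ZNEAR = 1.0, ZFAR = 5000.0;
const double A = ZFAR + ZNEAR;        //  5001
const double B = ZNEAR - ZFAR;        // -4999 (note the sign)
const double C = 2.0 * ZFAR * ZNEAR;  //  10000

double eyeToNdcZ (double zEye) { return (A + C / zEye) / (ZFAR - ZNEAR); }

double ndcToEyeZ (double zNdc) { return -C / (A + zNdc * B); }
```

At zNdc = -1 this yields -ZNEAR and at zNdc = +1 it yields -ZFAR, and the round trip through eyeToNdcZ recovers the original eye-space z.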

I wanted to render the shadow map with a 180 deg FOV to have it cover the actual half sphere illuminated by the corresponding light source. Didn't work though (that would just have been too easy - heh!).

shadow2DProj working: No

You may need to ponder a little on the implementation details of the following code, but it shouldn't be too hard to understand.

OpenGL matrix implementation:

#ifndef _OGLMATRIX_H

#define _OGLMATRIX_H

#include <string.h>

#include "glew.h"

class COGLMatrix {

private:

double m_data [16];

GLfloat m_dataf [16];

public:

inline COGLMatrix& operator= (const COGLMatrix& other) {

memcpy (m_data, other.m_data, sizeof (m_data));

return *this;

}

inline COGLMatrix& operator= (const double other [16]) {

memcpy (m_data, other, sizeof (m_data));

return *this;

}

COGLMatrix Inverse (void);

COGLMatrix& Get (GLuint nMatrix, double bInverse = false) {

glGetDoublev (nMatrix, (GLdouble*) m_data);

if (bInverse)

*this = Inverse ();

return *this;

}

void Set (void) { glLoadMatrixd ((GLdouble*) m_data); }

void Mul (void) { glMultMatrixd ((GLdouble*) m_data); }

double& operator[] (int i) { return m_data [i]; }

GLfloat* ToFloat (void) {

for (int i = 0; i < 16; i++)

m_dataf [i] = GLfloat (m_data [i]);

return m_dataf;

}

COGLMatrix& operator* (double factor) {

for (int i = 0; i < 16; i++)

m_data [i] *= factor;

return *this;

}

double Det (COGLMatrix& other) { return m_data [0] * other [0] + m_data [1] * other [4] + m_data [2] * other [8] + m_data [3] * other [12]; }

};

#endif //_OGLMATRIX_H

Inverse function:

COGLMatrix COGLMatrix::Inverse (void)

{

COGLMatrix im;

im [0] = m_data [5] * m_data [10] * m_data [15] - m_data [5] * m_data [11] * m_data [14] - m_data [9] * m_data [6] * m_data [15] + m_data [9] * m_data [7] * m_data [14] + m_data [13] * m_data [6] * m_data [11] - m_data [13] * m_data [7] * m_data [10];

im [4] = -m_data [4] * m_data [10] * m_data [15] + m_data [4] * m_data [11] * m_data [14] + m_data [8] * m_data [6] * m_data [15] - m_data [8] * m_data [7] * m_data [14] - m_data [12] * m_data [6] * m_data [11] + m_data [12] * m_data [7] * m_data [10];

im [8] = m_data [4] * m_data [9] * m_data [15] - m_data [4] * m_data [11] * m_data [13] - m_data [8] * m_data [5] * m_data [15] + m_data [8] * m_data [7] * m_data [13] + m_data [12] * m_data [5] * m_data [11] - m_data [12] * m_data [7] * m_data [9];

im [12] = -m_data [4] * m_data [9] * m_data [14] + m_data [4] * m_data [10] * m_data [13] + m_data [8] * m_data [5] * m_data [14] - m_data [8] * m_data [6] * m_data [13] - m_data [12] * m_data [5] * m_data [10] + m_data [12] * m_data [6] * m_data [9];

im [1] = -m_data [1] * m_data [10] * m_data [15] + m_data [1] * m_data [11] * m_data [14] + m_data [9] * m_data [2] * m_data [15] - m_data [9] * m_data [3] * m_data [14] - m_data [13] * m_data [2] * m_data [11] + m_data [13] * m_data [3] * m_data [10];

im [5] = m_data [0] * m_data [10] * m_data [15] - m_data [0] * m_data [11] * m_data [14] - m_data [8] * m_data [2] * m_data [15] + m_data [8] * m_data [3] * m_data [14] + m_data [12] * m_data [2] * m_data [11] - m_data [12] * m_data [3] * m_data [10];

im [9] = -m_data [0] * m_data [9] * m_data [15] + m_data [0] * m_data [11] * m_data [13] + m_data [8] * m_data [1] * m_data [15] - m_data [8] * m_data [3] * m_data [13] - m_data [12] * m_data [1] * m_data [11] + m_data [12] * m_data [3] * m_data [9];

im [13] = m_data [0] * m_data [9] * m_data [14] - m_data [0] * m_data [10] * m_data [13] - m_data [8] * m_data [1] * m_data [14] + m_data [8] * m_data [2] * m_data [13] + m_data [12] * m_data [1] * m_data [10] - m_data [12] * m_data [2] * m_data [9];

im [2] = m_data [1] * m_data [6] * m_data [15] - m_data [1] * m_data [7] * m_data [14] - m_data [5] * m_data [2] * m_data [15] + m_data [5] * m_data [3] * m_data [14] + m_data [13] * m_data [2] * m_data [7] - m_data [13] * m_data [3] * m_data [6];

im [6] = -m_data [0] * m_data [6] * m_data [15] + m_data [0] * m_data [7] * m_data [14] + m_data [4] * m_data [2] * m_data [15] - m_data [4] * m_data [3] * m_data [14] - m_data [12] * m_data [2] * m_data [7] + m_data [12] * m_data [3] * m_data [6];

im [10] = m_data [0] * m_data [5] * m_data [15] - m_data [0] * m_data [7] * m_data [13] - m_data [4] * m_data [1] * m_data [15] + m_data [4] * m_data [3] * m_data [13] + m_data [12] * m_data [1] * m_data [7] - m_data [12] * m_data [3] * m_data [5];

im [14] = -m_data [0] * m_data [5] * m_data [14] + m_data [0] * m_data [6] * m_data [13] + m_data [4] * m_data [1] * m_data [14] - m_data [4] * m_data [2] * m_data [13] - m_data [12] * m_data [1] * m_data [6] + m_data [12] * m_data [2] * m_data [5];

im [3] = -m_data [1] * m_data [6] * m_data [11] + m_data [1] * m_data [7] * m_data [10] + m_data [5] * m_data [2] * m_data [11] - m_data [5] * m_data [3] * m_data [10] - m_data [9] * m_data [2] * m_data [7] + m_data [9] * m_data [3] * m_data [6];

im [7] = m_data [0] * m_data [6] * m_data [11] - m_data [0] * m_data [7] * m_data [10] - m_data [4] * m_data [2] * m_data [11] + m_data [4] * m_data [3] * m_data [10] + m_data [8] * m_data [2] * m_data [7] - m_data [8] * m_data [3] * m_data [6];

im [11] = -m_data [0] * m_data [5] * m_data [11] + m_data [0] * m_data [7] * m_data [9] + m_data [4] * m_data [1] * m_data [11] - m_data [4] * m_data [3] * m_data [9] - m_data [8] * m_data [1] * m_data [7] + m_data [8] * m_data [3] * m_data [5];

im [15] = m_data [0] * m_data [5] * m_data [10] - m_data [0] * m_data [6] * m_data [9] - m_data [4] * m_data [1] * m_data [10] + m_data [4] * m_data [2] * m_data [9] + m_data [8] * m_data [1] * m_data [6] - m_data [8] * m_data [2] * m_data [5];

double det = Det (im);

if (det == 0.0)

return *this;

det = 1.0 / det;

for (int i = 0; i < 16; i++)

im [i] *= det;

return im;

}
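A cofactor expansion like the one above is easy to get wrong by one index or sign. A quick sanity check (a hedged CPU-side sketch, not the thread's actual matrix class — `Invert4x4` and `Mul4x4` are illustrative names) is to multiply the matrix by its computed inverse and confirm the product is the identity:

```cpp
#include <cassert>
#include <cmath>
#include <cstring>
#include <utility>

// Column-major 4x4, as OpenGL stores matrices: element (row r, col c) = m[c*4 + r].
// Gauss-Jordan elimination with partial pivoting; returns false if singular.
// Equivalent in effect to the adjugate/cofactor version above.
bool Invert4x4(const double m[16], double out[16]) {
    double a[16], inv[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    std::memcpy(a, m, sizeof a);
    for (int col = 0; col < 4; ++col) {
        int pivot = col;  // pick the largest pivot in this column
        for (int r = col + 1; r < 4; ++r)
            if (std::fabs(a[col*4 + r]) > std::fabs(a[col*4 + pivot])) pivot = r;
        if (std::fabs(a[col*4 + pivot]) < 1e-12) return false;
        if (pivot != col)
            for (int c = 0; c < 4; ++c) {
                std::swap(a[c*4 + col], a[c*4 + pivot]);
                std::swap(inv[c*4 + col], inv[c*4 + pivot]);
            }
        double d = a[col*4 + col];  // normalize the pivot row
        for (int c = 0; c < 4; ++c) { a[c*4 + col] /= d; inv[c*4 + col] /= d; }
        for (int r = 0; r < 4; ++r) {  // eliminate the column from other rows
            if (r == col) continue;
            double f = a[col*4 + r];
            for (int c = 0; c < 4; ++c) {
                a[c*4 + r] -= f * a[c*4 + col];
                inv[c*4 + r] -= f * inv[c*4 + col];
            }
        }
    }
    std::memcpy(out, inv, sizeof inv);
    return true;
}

// out = a * b, all column-major.
void Mul4x4(const double a[16], const double b[16], double out[16]) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k) s += a[k*4 + r] * b[c*4 + k];
            out[c*4 + r] = s;
        }
}
```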

Shadow matrix texture coords:

// The following code is called after modelview and projection matrices have been stuffed with the proper values. modelView and projection are instances of a simple class to handle OpenGL matrices in the application. Not using bias - this is handled by the shader.

static void ComputeShadowTransformation (int nLight)

{

modelView.Get (GL_MODELVIEW_MATRIX); // load the modelview matrix

projection.Get (GL_PROJECTION_MATRIX); // load the projection matrix

glActiveTexture (GL_TEXTURE1 + nLight);

projection.Set ();

modelView.Mul ();

lightManager.ShadowTransformation (nLight).Get (GL_TEXTURE_MATRIX);

}

Compute inverse modelview * inverse projection. Inverse code from MESA source code.

ogl.SetupTransform ();

lightManager.ShadowTransformation (-1).Get (GL_MODELVIEW_MATRIX, true); // inverse

lightManager.ShadowTransformation (-2).Get (GL_PROJECTION_MATRIX, true);

ogl.ResetTransform ();

glPushMatrix ();

lightManager.ShadowTransformation (-1).Set ();

lightManager.ShadowTransformation (-2).Mul ();

lightManager.ShadowTransformation (-3).Get (GL_MODELVIEW_MATRIX, false); // inverse (projection * modelview)

glPopMatrix ();
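The matrix-stack trick works because inversion reverses composition order: inverse(projection * modelview) = inverse(modelview) * inverse(projection), which is exactly the product loaded above. A minimal numeric sketch of that identity, using hypothetical 2x2 stand-ins for the GL matrices (the names `M2`, `mul2`, `inv2` are illustrative, not from the thread):

```cpp
#include <cassert>
#include <cmath>

// Tiny row-major 2x2 matrices: [a b; c d] -- just enough arithmetic
// to demonstrate (P * MV)^-1 == MV^-1 * P^-1 numerically.
struct M2 { double a, b, c, d; };

M2 mul2(M2 x, M2 y) {
    return { x.a * y.a + x.b * y.c,  x.a * y.b + x.b * y.d,
             x.c * y.a + x.d * y.c,  x.c * y.b + x.d * y.d };
}

M2 inv2(M2 x) {
    double det = x.a * x.d - x.b * x.c;  // assumed nonzero for this sketch
    return { x.d / det, -x.b / det, -x.c / det, x.a / det };
}
```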

Fragment shader. The shader also makes the shadow lighter depending on distance of geometry to light. The shader does what the bias matrix would do. I can't make it work otherwise.

uniform sampler2D sceneColor;

uniform sampler2D sceneDepth;

uniform sampler2D shadowMap;

uniform mat4 modelviewProjInverse;

uniform vec3 lightPos;

uniform float lightRange;

#define ZNEAR 1.0

#define ZFAR 5000.0

#define A 5001.0 //(ZNEAR + ZFAR)

#define B 4999.0 //(ZNEAR - ZFAR)

#define C 10000.0 //(2.0 * ZNEAR * ZFAR)

#define D (cameraNDC.z * B)

#define ZEYE -10000.0 / (5001.0 + cameraNDC.z * 4999.0) //-(C / (A + D))

void main()

{

float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;

vec3 cameraNDC = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;

vec4 cameraClipPos;

cameraClipPos.w = -ZEYE;

cameraClipPos.xyz = cameraNDC * cameraClipPos.w;

vec4 lightClipPos = gl_TextureMatrix [2] * cameraClipPos;

float w = abs (lightClipPos.w);

// avoid divides by too small w and clip the shadow texture access to avoid artifacts

float shadowDepth =

((w < 0.00001) || (abs (lightClipPos.x) > w) || (abs (lightClipPos.y) > w))

? 2.0

: texture2D (shadowMap, lightClipPos.xy / (lightClipPos.w * 2.0) + 0.5).r;

float light = 1.0;

if (lightClipPos.z >= (lightClipPos.w * 2.0) * (shadowDepth - 0.5)) {

vec4 worldPos = modelviewProjInverse * cameraClipPos;

float lightDist = length (lightPos - worldPos.xyz);

light = sqrt (min (lightDist, lightRange) / lightRange);

}

gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);

}
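The core of the shader's unproject step can be sketched on the CPU as well. This is a hedged sketch assuming the standard GL depth convention (z_ndc = -1 at the near plane, +1 at the far plane) and the thread's ZNEAR/ZFAR values; the shader above folds the same constants into its ZEYE define, with signs depending on how its projection matrix was set up. `EyeZ` and `NDCToClip` are illustrative names:

```cpp
#include <cassert>
#include <cmath>

const double ZNEAR = 1.0, ZFAR = 5000.0;

// Eye-space Z from NDC depth: inverse of the perspective depth mapping
// z_ndc = ((f+n)*z_eye + 2fn) / ((f-n)*z_eye), solved for z_eye.
double EyeZ(double zNDC) {
    return -2.0 * ZFAR * ZNEAR / ((ZFAR + ZNEAR) - zNDC * (ZFAR - ZNEAR));
}

// Reconstruct the clip-space position the rasterizer divided away:
// for a standard perspective matrix w_clip = -z_eye, and clip = ndc * w_clip.
void NDCToClip(const double ndc[3], double clip[4]) {
    clip[3] = -EyeZ(ndc[2]);
    for (int i = 0; i < 3; ++i) clip[i] = ndc[i] * clip[3];
}
```

From that clip-space position, multiplying by the inverse projection (or inverse modelview-projection) recovers the eye-space (or world-space) point, which is the gluUnProject-in-GLSL answer the thread started from.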

I wanted to render the shadow map with a 180 deg FOV to have it cover the actual half sphere illuminated by the corresponding light source. Didn't work though (that would just have been too easy - heh!).

Dark Photon

04-19-2011, 04:58 AM

I think lightWinPos should have xy scaled with the actual viewport dimensions. So my ndcPOS is somewhere between true NDC and window coordinates, isn't it?

Don't think so. Your light window coordinates are 0..1, 0..1, and you've got the *0.5+0.5 in there to take your NDC coords to that space.

shadow2DProj compares against the depth from the frame buffer, doesn't it? And that doesn't work for me.

By itself? No (not AFAIK). But in GLSL 1.2 and earlier, that's one required piece out of four to do hardware depth comparisons.

In GLSL 1.2 and earlier, you have to do all of these things:

1. use a shadow2D* texture access function (such as shadow2DProj) to sample the depth texture,
2. use a sampler2DShadow sampler in the shader for the depth texture,
3. bind a depth texture to it in your app, and
4. set the depth compare attributes on the depth texture before invoking your shader.

texture2DProj merely does the texcoord.xyz/.w step and can be used totally independently of depth textures and depth compare. While shadow2DProj does that plus implies doing a depth compare too (with the extra .z texcoord component you passed in), assuming you've done all the other things above.

In GLSL 1.3 and later, they realized that it was pointless to have the explosion in texture sampling function names based on texture type, shadow/non-shadow, etc. so #1 in the above list simplifies in GLSL 1.3+ to just calling the "texture" texture sampling functions (e.g. texture, textureProj, etc. -- all overloaded by sampler type).

As an example of how to set up depth compare on the texture:

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER , GL_NEAREST );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER , GL_NEAREST );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S , GL_CLAMP_TO_EDGE );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T , GL_CLAMP_TO_EDGE );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL );

glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE , GL_INTENSITY );
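The difference between the two sampling functions can be sketched as scalar math. This is a hedged illustration of the semantics described above, not real driver code; `SampleDepth` is a hypothetical stand-in for the depth-texture fetch:

```cpp
#include <cassert>

// Hypothetical depth-texture fetch: pretend the map stores 0.5 everywhere.
double SampleDepth(double s, double t) {
    (void)s; (void)t;
    return 0.5;
}

// What shadow2DProj adds on top of texture2DProj: both divide the
// coordinate by .w first; shadow2DProj then compares the projected .z
// against the stored depth (GL_LEQUAL here, matching the setup above)
// and returns 0.0 (shadowed) or 1.0 (lit).
double Shadow2DProj(double s, double t, double z, double w) {
    s /= w; t /= w; z /= w;                        // the textureProj divide
    return (z <= SampleDepth(s, t)) ? 1.0 : 0.0;   // the depth compare
}
```

(With GL_LINEAR filtering on the depth texture, real hardware additionally averages the per-sample compare results, giving percentage-closer filtering rather than a hard 0/1.)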


karx11erx

04-19-2011, 07:18 AM

By itself? No (not AFAIK). But in GLSL 1.2 and earlier, that's one required piece out of four to do hardware depth comparisons.

In GLSL 1.2 and earlier, you have to do all of these things:

1. use a shadow2D* texture access function (such as shadow2DProj) to sample the depth texture,
2. use a sampler2DShadow sampler in the shader for the depth texture,
3. bind a depth texture to it in your app, and
4. set the depth compare attributes on the depth texture before invoking your shader.

texture2DProj merely does the texcoord.xyz/.w step and can be used totally independently of depth textures and depth compare. While shadow2DProj does that plus implies doing a depth compare too (with the extra .z texcoord component you passed in), assuming you've done all the other things above.

[...]

As an example of how to set up depth compare on the texture:

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER , GL_NEAREST );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER , GL_NEAREST );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S , GL_CLAMP_TO_EDGE );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T , GL_CLAMP_TO_EDGE );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL );

glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE , GL_INTENSITY );

Thanks. I know that, but my shader doesn't even work when using texture2DProj instead of computing everything manually, and it also doesn't work right when comparing the depth value from the scene with the corresponding depth value from the shadow map. The only thing that works is to compute the shadow map depth value of the related scene fragment and compare that to the shadow map depth value for the corresponding shadow map position stored in the shadow map (in other words: Compute eye position of fragment in scene, compute light window position from that, compare that light window position's depth value with the depth value in the shadow map).

This is certainly due to an oversight or misunderstanding on my side, but I haven't yet figured out where or why.

Anyway, what is texture2DProj good for when it only divides by w and doesn't also do the scaling and translation? After all,

vec3 lightWinPos = lightClipPos.xyz / lightClipPos.w * 2.0 + 0.5;

isn't it? So since I couldn't even get this to work when using texture2DProj, I didn't even bother trying shadow2DProj, since it is just based on texture2DProj and does something on top of it (depth value lookup and comparison). Now of course OpenGL is not the problem here, but rather my limited or wrong understanding of this function, so some enlightenment would be more than welcome. :)

Another question: Does multiplication with the bias matrix just do the scaling and translation, or also the w divide?


Dark Photon

04-19-2011, 02:22 PM

Anyway, what is texture2DProj good for when it only divides by w and doesn't also do the scaling and translation? After all,

vec3 lightWinPos = lightClipPos.xyz / lightClipPos.w * 2.0 + 0.5;

isn't it?

Well, it's actually *0.5+0.5.

And if you slip that *0.5+0.5 in your shadow matrix (as you did -- the "bias" matrix), then you can effectively do it first and defer the perspective divide until the very last operation before texture sampling (either doing it yourself or letting texture2DProj/shadow2DProj do it for you). This is common.


karx11erx

04-19-2011, 02:33 PM

Ack, that's because I forgot the brackets around (w * 2.0). They are present in my code though.

vec3 lightWinPos = lightClipPos.xyz / (lightClipPos.w * 2.0) + 0.5;


Dark Photon

04-19-2011, 02:38 PM

Another question: Does multiplication with the bias matrix just do the scaling and translation, or also the w divide?

Just the scale and translation. Effectively, it takes you from 4D clip space (in-frustum is -w <= x,y,z <= w) to 4D window space (in-frustum is 0 <= x',y',z' <= w').

If you look at the bias matrix you can convince yourself that it does exactly that. What it does to X, for example, is: x' = x/2 + w/2. Right? And how do we transform the above inequality from -w <= x <= w (CLIP SPACE) to 0 <= x <= w (WINDOW SPACE)? First, we divide by 2, yielding -w/2 <= x/2 <= w/2. Then we add w/2, yielding 0 <= x/2 + w/2 <= w.

So x' = x/2 + w/2 and w' = w.

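That explanation translates directly into code. A small sketch (illustrative names, column-major layout as glMultMatrix expects) that builds the bias matrix and checks the corner cases — a point on the left clip plane (x = -w) lands at x' = 0, and x = +w lands at x' = w':

```cpp
#include <cassert>
#include <cmath>

// The "bias" matrix, column-major: x' = x/2 + w/2 (likewise y, z), w' = w.
const double bias[16] = {
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0,
};

// out = m * v for a column-major 4x4 and a column vector.
void Transform(const double m[16], const double v[4], double out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = m[r] * v[0] + m[4 + r] * v[1] + m[8 + r] * v[2] + m[12 + r] * v[3];
}
```

Note there is no divide anywhere in the matrix product: the result is still homogeneous, which is why the perspective divide can be deferred to shadow2DProj.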

karx11erx

04-19-2011, 04:11 PM

Thanks. Knowing all that I have been able to make the shader work using shadow2DProj.

Here's the fragment shader code:

uniform sampler2D sceneColor;

uniform sampler2D sceneDepth;

uniform sampler2DShadow shadowMap;

uniform mat4 modelviewProjInverse;

uniform vec3 lightPos;

uniform float lightRange;

#define ZNEAR 1.0

#define ZFAR 5000.0

#define A 5001.0 //(ZNEAR + ZFAR)

#define B 4999.0 //(ZNEAR - ZFAR)

#define C 10000.0 //(2.0 * ZNEAR * ZFAR)

#define D (cameraNDC.z * B)

#define ZEYE -10000.0 / (5001.0 + cameraNDC.z * 4999.0) //-(C / (A + D))

void main()

{

float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;

vec3 cameraNDC = (vec3 (gl_TexCoord [0].xy, fragDepth) - 0.5) * 2.0;

vec4 cameraClipPos;

cameraClipPos.w = -ZEYE;

cameraClipPos.xyz = cameraNDC * cameraClipPos.w;

vec4 lightWinPos = gl_TextureMatrix [2] * cameraClipPos;

float w = abs (lightWinPos.w);

float lit = ((w < 0.00001) || (abs (lightWinPos.x) > w) || (abs (lightWinPos.y) > w)) ? 1.0 : shadow2DProj (shadowMap, lightWinPos).r;

float light;

if (lit == 1.0)

light = 1.0;

else {

vec4 worldPos = modelviewProjInverse * cameraClipPos;

float lightDist = length (lightPos - worldPos.xyz);

light = sqrt (min (lightDist, lightRange) / lightRange);

}

gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);

}

I hope everybody is satisfied with my choice of variable names. :D
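The distance-based lightening in the shadowed branch above is worth a note. A CPU-side sketch of the same formula (`ShadowLight` is an illustrative name, not from the shader): shadowed fragments get sqrt(dist/range), with the distance clamped to the light's range, so shadows fade out toward the edge of the light's reach instead of staying pitch black.

```cpp
#include <cassert>
#include <cmath>

// Mirrors the shader's: light = sqrt(min(lightDist, lightRange) / lightRange).
// Returns the brightness factor applied to a shadowed fragment.
double ShadowLight(double lightDist, double lightRange) {
    double d = (lightDist < lightRange) ? lightDist : lightRange;  // min()
    return std::sqrt(d / lightRange);
}
```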


Dark Photon

04-19-2011, 06:36 PM

Thanks. Knowing all that I have been able to make the shader work using shadow2DProj.

Excellent! Congrats!


karx11erx

04-20-2011, 01:41 AM

Thanks to everybody who has helped me with understanding this. :)

Powered by vBulletin® Version 4.2.5 Copyright © 2018 vBulletin Solutions Inc. All rights reserved.