A few issues with Shadow mapping & Deferred Shading

My goal is captured well by this exchange on Stack Overflow:

A user tried shadow mapping with a deferred renderer, calculated the shadow coordinates as if it were a forward renderer, and got this reply:

The process is actually the opposite. You don't convert texture coordinates to clip space; you convert coordinates from clip space to texture space. For this to work you need to pass the light's camera (view) and projection matrices to the fragment shader (and pass the position from the vertex shader to the fragment shader, at least in OpenGL ES 2.0; I don't know about OpenGL 3.3).

Multiply position by camera and projection and you’ll get the position in the light’s view.
Divide xyz by w and you’ll get the position in light view’s clip space.
Multiply x and y by 0.5 and add 0.5 and you’ll have the uv coordinates of the shadow map.
Now you can read the depth from the shadow map at uv and compare it to your pixel’s depth.
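
To make sure I understand those steps, this is the same math written out with GLM on the CPU (just a sketch with made-up names; in the real shader this happens per fragment):

#include <glm/glm.hpp>

// CPU-side sketch of the quoted steps. worldPos would come from the G-buffer
// position texture; lightView/lightProj are the matrices the shadow map was
// rendered with.
glm::vec3 shadowMapLookup(const glm::vec3& worldPos,
                          const glm::mat4& lightView,
                          const glm::mat4& lightProj)
{
    glm::vec4 lightClip = lightProj * lightView * glm::vec4(worldPos, 1.0f); // into the light's clip space
    glm::vec3 ndc = glm::vec3(lightClip) / lightClip.w;                      // divide xyz by w
    glm::vec2 uv = glm::vec2(ndc) * 0.5f + 0.5f;                             // shadow map uv
    float compareDepth = ndc.z * 0.5f + 0.5f;                                // depth to compare against the map
    return glm::vec3(uv, compareDepth);                                      // (u, v, depth)
}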

Since I am still a novice OpenGL programmer, I found this tutorial that did exactly that, but with the caveat that it uses deprecated GL functionality.
(http://www.codinglabs.net/tutorial_opengl_deferred_rendering_shadow_mapping.aspx)
I tried my best to convert it; the shadow mapping works, but the shadows are not projected into the world correctly.

These were my assumptions, though they could be wrong (hopefully this explains my current issues):

The tutorial refers to GL_MODELVIEW_MATRIX and GL_PROJECTION_MATRIX for both the camera and the light matrices.

I am unsure about the conversion, but I use model/view/projection matrices with GLM and did the following:

The author sets three matrices as uniform variables and uses them to compute the shadow coordinates:

worldToLightViewMatrix = world → light view
lightViewToProjectionMatrix = light view → light projection
worldToCameraViewMatrix = world → camera view

In my case, I send in:
light view (glm::mat4)
light projection (glm::mat4)
camera view (glm::mat4)

I have confirmed that the shadow map generated from the light's PVM (position, projection, etc.) looks correct when I draw the texture to the screen. Also, even though the shadows jump about when I move the camera, I can see the expected shape in the shadow.

I have confirmed that the camera (my viewport) is correct.

What I am unsure of is what the deprecated GL_MODELVIEW_MATRIX returns from glGetFloatv compared to the matrices I am expecting.
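
For reference, this is roughly how I build the three matrices I upload (a simplified sketch with placeholder values; as far as I understand, the deprecated GL_MODELVIEW_MATRIX the tutorial reads back corresponds to view * model):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Placeholder setup, just to show how the uniforms are built.
glm::vec3 lightPos(10.0f, 10.0f, 10.0f);
glm::vec3 cameraPos(0.0f, 2.0f, 5.0f);

glm::mat4 lightView  = glm::lookAt(lightPos,  glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 lightProj  = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 100.0f);
glm::mat4 cameraView = glm::lookAt(cameraPos, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));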

Some highlights of my implementation:

struct Shadow
{
	sampler2D shadowMap;
	mat4 view;        // View of light
	mat4 cameraView;  // View of camera
	mat4 projection;  // Projection of light
};
uniform Shadow shadow;

Returns 0 or 1 (0 = shadow). The matrices in the Shadow struct are the values I had to convert on the C++ side when setting the uniforms:

float readShadowMap(vec3 eyeDir)
{
	mat4 cameraViewToWorldMatrix = inverse(shadow.cameraView);
	mat4 cameraViewToProjectedLightSpace = shadow.projection * shadow.view * cameraViewToWorldMatrix;
	vec4 projectedEyeDir = cameraViewToProjectedLightSpace * vec4(eyeDir, 1.0);
	projectedEyeDir = projectedEyeDir / projectedEyeDir.w;

	// NDC xy -> shadow map texture coordinates (0..1)
	vec2 textureCoordinates = projectedEyeDir.xy * vec2(0.5) + vec2(0.5);

	const float bias = 0.0001;
	float depthValue = texture(shadow.shadowMap, textureCoordinates).r - bias;
	if ((projectedEyeDir.z * 0.5 + 0.5) < depthValue) {
		return 1.0; // lit
	}
	return 0.0; // in shadow
}

My version

       
uniform vec3 gEyeWorldPos;

vec3 position = vec3( texture( PositionTex, texCoord ) );
vec3 vte = position - gEyeWorldPos;	
float shadow = readShadowMap(vte);

Original version

uniform vec3 cameraPosition;


vec4 position = texture2D( tPosition, gl_TexCoord[0].xy );
vec3 eyeDir = position.xyz - cameraPosition;
float shadow = readShadowMap(eyeDir);

Any input or suggestions would be helpful.

With shadow mapping, you should have two model-view-projection matrices (if you have a separate model-view matrix and projection matrix, just concatenate them). One is the camera transformation, the other is the light transformation (i.e. the transformation which was used when rendering the shadow map).

The vertex shader should transform the incoming vertex coordinates by each transformation. The result of transforming the vertex by the camera transformation should be stored in gl_Position. The result of transforming the vertex by the light transformation should be converted to a vec3 (by dividing by w) then converted from the -1…+1 range used for normalised device coordinates to the 0…1 range used for texture coordinates and depth values, then stored in an output variable.
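
If it helps, the -1…+1 to 0…1 conversion can also be baked into the light matrix on the client instead of being done per vertex; a rough GLM sketch (names are placeholders):

#include <glm/glm.hpp>

// Bakes the -1..+1 -> 0..1 remap into the light transform, so the value the
// vertex shader outputs is already in texture/depth range after the divide by w.
// Note: GLM's mat4 constructor takes the matrix column by column.
glm::mat4 makeShadowMatrix(const glm::mat4& lightProj, const glm::mat4& lightView,
                           const glm::mat4& model)
{
    const glm::mat4 bias(
        0.5f, 0.0f, 0.0f, 0.0f,
        0.0f, 0.5f, 0.0f, 0.0f,
        0.0f, 0.0f, 0.5f, 0.0f,
        0.5f, 0.5f, 0.5f, 1.0f);
    return bias * lightProj * lightView * model;
}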

You can find example shader code in this post.

Thanks for the response. I noticed that I had a typo in my first paragraph where I used the words "forward renderer", which might have been misleading (sorry!). I've fixed that.

Unless I am totally mistaken, your answer covers forward rendering. I want to compute the shadow coordinates in the lighting pass (after the geometry pass). The steps you describe are no longer possible there (unless I want to store the shadow map in the G-buffer, which is a big no-no).

Well, you could perform the shadow calculation in the first pass and just store the lit/shadow flag in the G-buffer. That would avoid having to perform a transformation per fragment.

Otherwise, you need to convert the fragment coordinates to camera coordinates, then to light coordinates. E.g.


uniform mat4 cameraToLight;
uniform sampler2D depthBuffer;
uniform sampler2D shadowMap;
uniform vec2 viewportOrigin;
uniform vec2 viewportSize;

void main()
{
    vec2 xy = (gl_FragCoord.xy - viewportOrigin) / viewportSize;  // 0..1
    float z = texture(depthBuffer, xy).r; // 0..1
    vec3 ndc_cam = 2 * vec3(xy,z) - 1; // normalised device coordinates, camera space
    vec4 clip_cam = vec4(ndc_cam,1); // clip coordinates, camera space
    vec4 clip_light = cameraToLight * clip_cam; // clip coordinates, light space
    vec3 ndc_light = clip_light.xyz / clip_light.w; // normalised device coordinates, light space
    vec3 texco = (ndc_light + 1) / 2; // shadow map texture coordinates (0..1)
    bool shadow = texture(shadowMap, texco.xy).r < texco.z;
    ...
}

The cameraToLight matrix should be constructed in the client as

cameraToLight = light_mvp * inverse(camera_mvp)
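
With GLM that could look roughly like this (a sketch; the function name is mine, and cameraMVP/lightMVP stand for whatever full transforms you used for the camera pass and the shadow-map pass):

#include <glm/glm.hpp>

// camera clip space -> world space -> light clip space
glm::mat4 makeCameraToLight(const glm::mat4& cameraMVP, const glm::mat4& lightMVP)
{
    return lightMVP * glm::inverse(cameraMVP);
}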