My goal is described perfectly in a Stack Overflow exchange:
A user tried shadow mapping with a deferred renderer, calculated the shadow coordinates as if it were a forward renderer, and got this reply:
The process is actually the opposite. You don't convert texture coordinates to clip space; you convert coordinates from clip space to texture space. For this to work you need to pass the light's view and projection matrices to the fragment shader (and pass the position from the vertex shader to the fragment shader, at least in OpenGL ES 2.0; I don't know about OpenGL 3.3).
Multiply the position by the light's view and projection matrices and you'll get the position in the light's view.
Divide xyz by w and you'll get the position in the light view's clip space.
Multiply x and y by 0.5 and add 0.5 and you'll have the UV coordinates of the shadow map.
Now you can read the depth from the shadow map at that UV and compare it to your pixel's depth.
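To make sure I understand the quoted steps, here is a minimal Python sketch of that chain (world position -> light clip space -> shadow-map UV). The light matrices below are made-up placeholders (identity view, a simple perspective projection), not values from my scene:

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mat(a, b):
    """Multiply two 4x4 row-major matrices (a * b)."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

# Hypothetical light view: identity (light at the origin looking down -Z).
light_view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Simple symmetric perspective projection (near = 1, far = 100).
n, f = 1.0, 100.0
light_proj = [[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, -(f + n) / (f - n), -2.0 * f * n / (f - n)],
              [0, 0, -1, 0]]

world_pos = [0.25, -0.5, -2.0, 1.0]  # a point in front of the light

# Step 1: multiply by the light's view and projection -> clip space.
clip = mat_vec(mat_mat(light_proj, light_view), world_pos)
# Step 2: divide by w -> normalized device coordinates.
ndc = [c / clip[3] for c in clip]
# Step 3: * 0.5 + 0.5 -> shadow-map UV and comparable depth.
uv = (ndc[0] * 0.5 + 0.5, ndc[1] * 0.5 + 0.5)
depth = ndc[2] * 0.5 + 0.5
```

The `depth` value is what gets compared against the value stored in the shadow map at `uv`.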
Since I am still a novice OpenGL programmer, I found this tutorial that does exactly that, with the caveat that it uses deprecated GL functionality:
(http://www.codinglabs.net/tutorial_opengl_deferred_rendering_shadow_mapping.aspx)
I tried my best to convert it; shadow mapping works, but the shadows are not projected into the world correctly.
These were my assumptions, though they could be wrong (hopefully they explain my current issues):
The tutorial reads GL_MODELVIEW_MATRIX and GL_PROJECTION_MATRIX for the camera and the light. I am unsure about the conversion, but I use Model-View-Projection matrices with GLM and did the following.
The author sets three matrices as uniform variables and computes the shadow coordinate from them:
worldToLightViewMatrix (world -> light view)
lightViewToProjectionMatrix (light view -> projection)
worldToCameraViewMatrix (world -> camera view)
In my case, I send in:
light view (glm::mat4)
light projection (glm::mat4)
camera view (glm::mat4)
I have confirmed that the shadow map is generated from the light's PVM; the position, projection, etc. look correct when I draw the texture to screen. Also, even though the shadows jump about when I move the camera, I can see the expected shapes in the shadow.
I have confirmed that the camera (my viewport) is correct.
What I am unsure of is what the deprecated GL_MODELVIEW_MATRIX returns from glGetFloatv, compared to the matrices I am sending in.
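As I understand it, glGetFloatv(GL_MODELVIEW_MATRIX) simply returns whatever the fixed-function lookAt/translate/rotate calls built (in column-major order), and glm::lookAt constructs the same matrix as gluLookAt. A small Python sketch of the gluLookAt construction, to show what that deprecated query should contain (the eye/center/up values are made up for illustration):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def look_at(eye, center, up):
    """Row-major view matrix built with the same math as gluLookAt / glm::lookAt."""
    fwd = normalize([c - e for c, e in zip(center, eye)])
    side = normalize(cross(fwd, up))
    u = cross(side, fwd)
    return [[ side[0],  side[1],  side[2], -dot(side, eye)],
            [    u[0],     u[1],     u[2], -dot(u, eye)],
            [-fwd[0],  -fwd[1],  -fwd[2],  dot(fwd, eye)],
            [0.0, 0.0, 0.0, 1.0]]

view = look_at(eye=[0, 0, 5], center=[0, 0, 0], up=[0, 1, 0])
# Sanity check: the eye position should land at the origin in view space.
eye_vs = [sum(view[r][c] * [0, 0, 5, 1][c] for c in range(4)) for r in range(4)]
```

So if my glm::lookAt matches what the tutorial's GL_MODELVIEW held, the only remaining difference should be memory layout (GLM is also column-major via glm::value_ptr).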
Some highlights of my implementation:
struct Shadow
{
    sampler2D shadowMap;
    mat4 view;       // view of the light
    mat4 cameraView; // view of the camera
    mat4 projection; // projection of the light
};
uniform Shadow shadow;
readShadowMap returns 0 or 1 (0 = shadow). The matrices in the Shadow struct above are the values I had to convert on the C++ side when setting the uniforms.
float readShadowMap(vec3 eyeDir)
{
    mat4 cameraViewToWorldMatrix = inverse(shadow.cameraView);
    mat4 cameraViewToProjectedLightSpace = shadow.projection * shadow.view * cameraViewToWorldMatrix;
    vec4 projectedEyeDir = cameraViewToProjectedLightSpace * vec4(eyeDir, 1.0);
    projectedEyeDir = projectedEyeDir / projectedEyeDir.w;
    vec2 textureCoordinates = projectedEyeDir.xy * vec2(0.5, 0.5) + vec2(0.5, 0.5);
    const float bias = 0.0001;
    float depthValue = texture2D(shadow.shadowMap, textureCoordinates).r - bias;
    if ((projectedEyeDir.z * 0.5 + 0.5) < depthValue) {
        return 1.0;
    }
    return 0.0;
}
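To convince myself the comparison logic itself is sound, I ported the same chain to a CPU-side Python sketch. This is hypothetical test scaffolding, not code from the tutorial: the camera view is fixed to identity (so its inverse is trivial) and a single stored depth value stands in for the texture fetch:

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def read_shadow_map(eye_dir, light_view_proj, stored_depth, bias=0.0001):
    """Return 1.0 (lit) or 0.0 (shadowed), mirroring the GLSL above.

    eye_dir         -- xyz position in camera space (camera view = identity,
                       so inverse(cameraView) drops out of the chain)
    light_view_proj -- projection * view of the light, as one 4x4 matrix
    stored_depth    -- depth that a shadow-map fetch at uv would return
    """
    p = mat_vec(light_view_proj, list(eye_dir) + [1.0])
    p = [c / p[3] for c in p]            # perspective divide
    # uv = p.xy * 0.5 + 0.5 would index the map; here it is a single value.
    depth_value = stored_depth - bias
    return 1.0 if (p[2] * 0.5 + 0.5) < depth_value else 0.0

# With an identity light matrix, a point at z = -0.5 has depth 0.25:
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
lit = read_shadow_map([0.0, 0.0, -0.5], identity, stored_depth=0.9)
shadowed = read_shadow_map([0.0, 0.0, -0.5], identity, stored_depth=0.1)
```

This behaves as expected on the CPU, which makes me suspect the matrices (or the space of eyeDir) I feed the shader rather than the comparison itself.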
My version:
uniform vec3 gEyeWorldPos;
vec3 position = vec3( texture( PositionTex, texCoord ) );
vec3 vte = position - gEyeWorldPos;
float shadow = readShadowMap(vte);
Original version:
uniform vec3 cameraPosition;
vec4 position = texture2D( tPosition, gl_TexCoord[0].xy );
vec3 eyeDir = position.xyz - cameraPosition;
float shadow = readShadowMap(eyeDir);
Any input or suggestions would be helpful.