Simple Shadow Mapping

Hello!

I have spent the last few days trying to implement simple shadow mapping, unfortunately with no success so far. I have read many threads and looked at various sources. I don’t have any screenshots of the problem because only pure black is shown, but I can show some code.

Shaders:

uniform mat4 texture;
varying vec4 ShadowCoord;
void main() {
	gl_Position = gl_ModelViewMatrix * gl_Vertex;
	ShadowCoord = texture * gl_ModelViewMatrix * gl_Vertex;
	ShadowCoord = ShadowCoord / ShadowCoord.w;
}

uniform sampler2DShadow shadow_buffer;
varying vec4 ShadowCoord;
...
float shadow = shadow2DProj(shadow_buffer, ShadowCoord).r;
gl_FragColor = vec4(vec3(shadow * color), 1.0);

Shadow fbo:

	glActiveTexture(GL_TEXTURE7);
	glEnable(GL_TEXTURE_2D);
	glGenFramebuffers(1, &id);
	glBindFramebuffer(GL_FRAMEBUFFER, id);
	glDrawBuffer(GL_NONE);
	glReadBuffer(GL_NONE);
	glGenTextures(1, &depth);
	glBindTexture(GL_TEXTURE_2D, depth);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
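	// enable hardware depth comparison so the sampler2DShadow lookup returns the compare result instead of the raw depth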
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
	glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, resolution.x, resolution.y, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
	glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth, 0);
	glBindTexture(GL_TEXTURE_2D, 0);

Before rendering from light’s point of view (before()):

	glBindFramebuffer(GL_FRAMEBUFFER, id);
	gluLookAt(5, 5, 5, 4, 1, 4, 0, 1, 0); // tested
	glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); 
	glEnable(GL_CULL_FACE);
	glCullFace(GL_FRONT);

After rendering from light’s point of view (after()):

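	// bias matrix: remaps clip-space coordinates from [-1, 1] to [0, 1] (column-major order, as glGetFloatv/GLSL expect)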
	GLfloat temp[16] = {	
		0.5, 0.0, 0.0, 0.0, 
		0.0, 0.5, 0.0, 0.0,
		0.0, 0.0, 0.5, 0.0,
		0.5, 0.5, 0.5, 1.0};
	bias = mat4(temp);
	glGetFloatv(GL_PROJECTION_MATRIX, temp);
	projection = mat4(temp);
	glGetFloatv(GL_MODELVIEW_MATRIX, temp);
	modelview = mat4(temp);
	glBindFramebuffer(GL_FRAMEBUFFER, 0);
	glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); 
	glActiveTexture(GL_TEXTURE7);
	glBindTexture(GL_TEXTURE_2D, depth);
	glCullFace(GL_BACK);

Texture matrix:

texture = bias * projection * modelview * inv_cam_mv;

Main rendering algorithm:

  • clear scene and reset matrices
  • before()
  • draw stuff from light’s point of view
  • after()
  • clear scene and reset matrices
  • bind fbo with mrt
  • navigate camera
  • get the inverse of the camera’s modelview matrix, send it to the shadow class and multiply it into the texture matrix (see above)
  • draw stuff from the camera’s point of view
  • unbind fbo with mrt
  • bind lighting pass shader and send all needed data via uniforms (including shadow texture id and texture matrix)
  • unbind shader
  • swap buffers

Any suggestions appreciated :slight_smile:

Just in case: I am using Intel 4500 (OGL 2.1/GLSL 1.2) and deferred rendering without shadows works well.

EDIT: Interesting…

gl_FragColor = vec4(vec3((shadow + 1.0) * color), 1.0);

also gives only black.

I wasn’t clearing the shadow FBO after binding it; after adding a clear, I now get some random shadows.
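The clear I added looks roughly like this (the glViewport line is just my guess at matching the shadow map size):

	// at the start of before(), right after glBindFramebuffer(GL_FRAMEBUFFER, id):
	glViewport(0, 0, resolution.x, resolution.y);	// assumption: render at the shadow map resolution
	glClear(GL_DEPTH_BUFFER_BIT);	// depth only, there is no color attachment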

There must be something wrong with the shadow texture matrix, though: when I move close to the object I get a shadow that resembles the actual object, but it moves with the camera.

This is what I have at the moment for the light’s POV pass:

glMatrixMode(GL_TEXTURE);
glActiveTexture(GL_TEXTURE7);
glEnable(GL_TEXTURE_2D);
glLoadIdentity();
glLoadMatrixf(bias);
glMultMatrixf(projection);
glMultMatrixf(modelview);
glMatrixMode(GL_MODELVIEW);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); 
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D, depth);

After setting camera’s POV:

glMatrixMode(GL_TEXTURE);
glActiveTexture(GL_TEXTURE7);
glMultMatrixf(inv_cam_mv);
glMatrixMode(GL_MODELVIEW);

The shaders are simple, like the ones here.

The point of doing a projective lookup (shadow2DProj) is to 1) defer the perspective divide to the frag shader, and 2) do it as part of the texture lookup, in case the GPU offers a performance advantage for doing so.

So you’re doing the perspective divide twice: once in the vertex shader (which is not correct; what if w=0? … ouch … and what does that do to your interpolation? … double ouch; not to mention this doesn’t result in perspective-correct interpolation), and once in shadow2DProj (with w=1 “hopefully”, unless we had a divide-by-0 blow-up from the previous).

Suggest you delete the “/ ShadowCoord.w” in the vertex shader. And if you’re casting shadows from a directional light (as opposed to a point light), you also don’t need a projective lookup at all (i.e. just use shadow2D). Why? Because w=1, since you’re using an orthographic light-space projection.
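For clarity, here’s a minimal sketch of both shaders without the manual divide (same uniform/varying names as yours; the ftransform() line is just a placeholder for whatever you already write to gl_Position):

// vertex shader (sketch)
uniform mat4 texture;
varying vec4 ShadowCoord;
void main() {
	gl_Position = ftransform();
	// keep the full homogeneous coordinate; shadow2DProj divides by w per fragment
	ShadowCoord = texture * gl_ModelViewMatrix * gl_Vertex;
}

// fragment shader (sketch)
uniform sampler2DShadow shadow_buffer;
varying vec4 ShadowCoord;
void main() {
	float shadow = shadow2DProj(shadow_buffer, ShadowCoord).r;
	// for a directional light (orthographic projection, so w == 1) this is equivalent:
	// float shadow = shadow2D(shadow_buffer, ShadowCoord.xyz).r;
	gl_FragColor = vec4(vec3(shadow), 1.0);
}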

Your naming concerns me a bit. Should be:

texture = bias * light_projection * light_viewing * inverse(camera_viewing)

There should be no object “modeling” transforms in there. I would re-verify that your light_viewing and inverse(camera_viewing) transforms don’t have anything else mixed in. Having one of these wrong could explain what you’re seeing with shadows following the camera. Make sure the product of just those two matrices isn’t the Identity :wink: and looks like it has a rotate+translate in it (presuming your camera and light aren’t aiming the same way).
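In code, composing that matrix might look something like this (a sketch using GLM for the mat4/inverse; substitute your own math types, since I don’t know what you’re using):

#include <glm/glm.hpp>

// light_projection / light_view: what you grabbed with glGetFloatv after setting up the light
// camera_view: the camera's viewing matrix only (no modeling transforms mixed in)
glm::mat4 makeShadowTextureMatrix(const glm::mat4& light_projection,
                                  const glm::mat4& light_view,
                                  const glm::mat4& camera_view)
{
	// bias: remaps clip space [-1, 1] to texture space [0, 1]
	const glm::mat4 bias(0.5f, 0.0f, 0.0f, 0.0f,
	                     0.0f, 0.5f, 0.0f, 0.0f,
	                     0.0f, 0.0f, 0.5f, 0.0f,
	                     0.5f, 0.5f, 0.5f, 1.0f);

	// camera eye space -> world -> light eye space -> light clip space -> [0, 1]
	return bias * light_projection * light_view * glm::inverse(camera_view);
}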

Also, make sure you’re being consistent with your transform math. If you’re writing column-major transform-on-the-left notation (Mv - e.g. OpenGL/GLSL), then the above is correct. OTOH, if you’re writing row-major transform-on-the-right (vM), then you’d flip the order.

Just in case: I am using Intel 4500 (OGL 2.1/GLSL 1.2) and deferred rendering without shadows works well.

Cool! Just curious: which technique did you go with? Deferred Shading, Deferred Lighting, or Light Indexed Deferred Rendering?

Interesting… “gl_FragColor = vec4(vec3((shadow + 1.0) * color), 1.0);” gives also only black color.

That’s interesting. And what about if you delete “* color” from that?

Thanks for your answers!

Just curious: which technique did you go with? Deferred Shading, Deferred Lighting, or Light Indexed Deferred Rendering?

I went with deferred shading or deferred lighting; actually, I don’t know what the difference is.

I don’t have the “/ ShadowCoord.w” in the shader anymore, and the product of light_view and inverse_camera_view isn’t the identity matrix; it changes when moving around the scene.

The result is the same: the shadow moves with the camera and disappears when the camera moves farther from the object.

My implementation is almost identical to this one, except for the shaders (I don’t use a PCF filter).

varying vec4 ShadowCoord;
void main() {
	gl_Position = gl_ModelViewMatrix * gl_Vertex;
	gl_TexCoord[0] = gl_MultiTexCoord0;
	ShadowCoord = gl_TextureMatrix[7] * gl_Vertex;
}

uniform sampler2D color_buffer;
uniform sampler2DShadow shadow_buffer;
varying vec4 ShadowCoord;
void main() {
	vec3 color = texture2D(color_buffer, gl_TexCoord[0].st).rgb;
	float shadow = shadow2DProj(shadow_buffer, ShadowCoord).r;
	gl_FragColor = vec4(vec3(shadow * color), 1.0);
}

Also, he doesn’t multiply the texture matrix by the inverse camera matrix. Why not? If I comment out that multiplication, I get a big diagonal shadow across the screen.

Where have I made a mistake? :confused:

The visualized shadow map, rendered from the light’s point of view:

I get a moving shadow like this:

In case it helps: I bind the shadow map and draw a fullscreen quad in the lighting step of the deferred renderer.

Well, with Deferred Shading, you smash your surface/material properties into a buffer, then go back and apply lighting to them to accumulate per-sample color (radiance).

Whereas with Deferred Lighting, you only smash a subset of your surface/material properties into a buffer, then go off and apply lighting to them to accumulate lighting terms, and then redraw your objects again to apply your full materials to those lighting terms to compute per-sample color (radiance).

Nice compare/contrast article on these by Adrian Stone here:

Also he didn’t multiply texture matrix with inverse camera matrix, why?

Because his MODELING matrix is the identity. Essentially, he’s passing down WORLD coordinates to his shader as gl_Vertex. So he can pick right up and apply the light’s VIEWING transform (WORLD SPACE->LIGHT EYE SPACE) without having to back-out to WORLD SPACE first. This is fine for a tiny-world toy demo, but not what you want to do for a real app (where world space can be large).

Well, then you wouldn’t be using gl_Vertex as the input position, as you said you were (as it’d be something pseudo-useless like (0…1, 0…1) – i.e. what you need to rasterize a full-screen quad).

You would instead be reconstructing the fragment position from the G-buffer to get a 3D eye-space position, and then feeding that through your shadow matrix (inverse camera viewing, then light’s viewing, then light’s projection, then bias, applied in that order) to compute your shadow map texcoord.
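A rough sketch of what that lighting-pass fragment shader could look like (the names here are made up; it assumes you store the eye-space position in the G-buffer, otherwise you’d reconstruct it from the stored depth):

uniform sampler2D color_buffer;
uniform sampler2D position_buffer;     // hypothetical: eye-space position stored in the G-buffer
uniform sampler2DShadow shadow_buffer;
uniform mat4 shadow_matrix;            // bias * light_projection * light_viewing * inverse(camera_viewing)
void main() {
	vec3 color = texture2D(color_buffer, gl_TexCoord[0].st).rgb;
	vec4 eye_pos = vec4(texture2D(position_buffer, gl_TexCoord[0].st).xyz, 1.0);
	// camera eye space -> shadow map space (the inverse camera viewing is already baked into shadow_matrix)
	vec4 shadow_coord = shadow_matrix * eye_pos;
	float shadow = shadow2DProj(shadow_buffer, shadow_coord).r;
	gl_FragColor = vec4(shadow * color, 1.0);
}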

Maybe that’s your problem… (?)

Many thanks for your very helpful posts! I did as you wrote and finally got correct shadows with deferred shading.

Excellent! Congrats!