Shadow mapping without hardware support

Hey all.

I’m trying to implement shadow mapping on my old GF2MX (a personal project, just to see if I can), and I’ve run into some trouble.

I understand how the algorithm works, but I’m having trouble getting the distance from the light to each fragment into a texture that I can then run through some register combiner trickery to get the comparison right.

I’m trying to use a vertex shader to compute this distance, and I think I know what I’m doing wrong, but I don’t know how to fix it.

In the VS, I subtract the vertex position from the light’s position, take the length of that vector, and then work out where that distance falls between the near and far planes of the light’s view frustum on a linear scale. That result then maps into an alpha-ramp texture (because the texture lookup clamps it to [0, 1]). The problem, however, is figuring out where each vertex will actually be.
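For reference, the distance computation itself is the easy part. Here’s a minimal sketch in plain C++ of what the vertex program computes (lightPos, nearPlane, and farPlane are assumed inputs):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Light-to-vertex distance remapped to [0, 1] between the light
    // frustum's near and far planes; the alpha-ramp lookup then clamps
    // anything outside that range.
    float normalizedLightDistance(const Vec3& worldVertex, const Vec3& lightPos,
                                  float nearPlane, float farPlane)
    {
        float dx = worldVertex.x - lightPos.x;
        float dy = worldVertex.y - lightPos.y;
        float dz = worldVertex.z - lightPos.z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        return (dist - nearPlane) / (farPlane - nearPlane);
    }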

The modelview matrix contains all the transformations that have been applied to the vertex (scales, rotations, translations, and so on), and those should be applied before computing the distance from it to the light. But it ALSO contains the camera’s transformation, and that should NOT be applied when trying to find the world-space location of the vertex.
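To spell out the relationship I mean (using the usual column-vector convention, with V the camera transform and M the accumulated model transform):

    ModelView = V * M
    eyePos    = ModelView * vertex               (what the pipeline computes)
    worldPos  = M * vertex
              = inverse(V) * ModelView * vertex  (what I actually need)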

I’m not really sure how to actually separate the camera transformation out of the modelview matrix (i.e., how to get that inverse(V) into the vertex program) to find this distance, so I’m asking here.

Any help would be GREATLY appreciated,

  • Dylan Barrie

I may be understanding you wrong, but couldn’t you move the camera to the point of projection, so even with it transformed, you have the right coords?

I’ve only ever done shadow mapping using projected textures in software… to do this I placed the camera at the light source, projected the caster mesh’s verts into eye space, and then converted that into the UV space of the shadow map.
Not sure if that’s of any help, but it definitely works without any fancy hardware.
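Roughly, the projection step looks like this (just a sketch; the Mat4/Vec4 types are assumptions, and lightViewProj stands for the light’s projection matrix times its view matrix, i.e. the camera placed at the light):

    struct Vec4 { float x, y, z, w; };
    struct Mat4 { float m[16]; }; // column-major, like OpenGL

    static Vec4 mul(const Mat4& M, const Vec4& v)
    {
        Vec4 r;
        r.x = M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z  + M.m[12]*v.w;
        r.y = M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z  + M.m[13]*v.w;
        r.z = M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z + M.m[14]*v.w;
        r.w = M.m[3]*v.x + M.m[7]*v.y + M.m[11]*v.z + M.m[15]*v.w;
        return r;
    }

    // Maps a world-space caster vertex into the UV space of the shadow map.
    void vertexToShadowUV(const Mat4& lightViewProj, const Vec4& worldVertex,
                          float& u, float& v)
    {
        Vec4 clip = mul(lightViewProj, worldVertex);
        // Perspective divide, then remap [-1, 1] -> [0, 1] for the lookup.
        u = (clip.x / clip.w) * 0.5f + 0.5f;
        v = (clip.y / clip.w) * 0.5f + 0.5f;
    }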

Originally posted by ManOfSpace:
I may be understanding you wrong, but couldn’t you move the camera to the point of projection, so even with it transformed, you have the right coords?

This works for getting the depth buffer for the light’s view (which I’m doing), but it does not work for getting the depth buffer for the CAMERA view.

I project the light’s depth buffer over the camera’s depth buffer, and then subtract the light’s from the camera’s to find which areas are shadowed.
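In other words, the per-fragment test I’m after boils down to this (sketched as plain C++ for clarity; on the GF2 the subtraction happens in the register combiners, and the bias is a hypothetical fudge factor against self-shadowing acne):

    // A fragment is in shadow when the depth the light recorded is nearer
    // than the fragment's own distance to the light (something occludes it).
    bool isShadowed(float lightDepth, float fragmentDepth, float bias = 0.005f)
    {
        return fragmentDepth - lightDepth > bias;
    }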

When you use a projective texture matrix, you get the correct depth value in the 3rd texture coordinate automatically, so there is no need to calculate it yourself…
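A sketch of that setup (assuming GL_EYE_LINEAR texgen with identity eye planes, which is why the inverse of the camera’s view matrix has to be folded into the texture matrix; the three matrix arguments are hypothetical names for your own column-major GLfloat[16] arrays):

    #include <GL/gl.h>

    // After this, with GL_EYE_LINEAR texgen enabled on s/t/r/q, the r
    // coordinate holds the fragment's depth in the light's clip space,
    // remapped to [0, 1].
    void setupShadowTextureMatrix(const GLfloat* lightProjection,
                                  const GLfloat* lightView,
                                  const GLfloat* inverseCameraView)
    {
        // Remaps clip-space [-1, 1] to [0, 1] (translation in the 4th column).
        static const GLfloat bias[16] = {
            0.5f, 0.0f, 0.0f, 0.0f,
            0.0f, 0.5f, 0.0f, 0.0f,
            0.0f, 0.0f, 0.5f, 0.0f,
            0.5f, 0.5f, 0.5f, 1.0f
        };

        glMatrixMode(GL_TEXTURE);
        glLoadMatrixf(bias);
        glMultMatrixf(lightProjection);   // light's projection
        glMultMatrixf(lightView);         // light's view (camera at the light)
        glMultMatrixf(inverseCameraView); // undo the camera transform texgen bakes in
        glMatrixMode(GL_MODELVIEW);
    }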