Camera-space shadow mapping?

I want to run this algorithm by you guys before I try implementing it, just to make sure that this idea actually works.

One of the problems with shadow maps is that they usually have to be very large in order to generate good shadows (i.e., without visible aliasing problems and so forth). This stems from the fact that the shadow map is rendered relative to the light's position and orientation. It seems to me that this is not necessary.

It seems to me that one can do the following. To generate the shadow map, render the scene from the camera's perspective as normal, but write the data to a texture. The "color" that you are writing is actually the radial distance to the light source. You will also write this value out as the depth from your fragment program, so that fragments are culled based on their radial distance to the light rather than their actual z-depth. BTW, the texture will be initialized to some value that is treated as "infinity".
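To make that concrete, here's a minimal sketch in plain C of the per-fragment math I have in mind for the first pass (the function name and the MAX_LIGHT_RANGE constant are just placeholders I made up for illustration):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Assumed constant used to squeeze the radial distance into [0,1]
   so it can double as a depth value. */
#define MAX_LIGHT_RANGE 100.0f

/* First pass: each fragment outputs its radial distance to the light
   as both the "color" and the depth, so the depth test keeps the
   surface nearest the light at every screen pixel. */
float first_pass_fragment(Vec3 world_pos, Vec3 light_pos)
{
    float dx = world_pos.x - light_pos.x;
    float dy = world_pos.y - light_pos.y;
    float dz = world_pos.z - light_pos.z;
    float radial = sqrtf(dx * dx + dy * dy + dz * dz);
    return radial / MAX_LIGHT_RANGE; /* written as color AND as depth */
}
```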

Once you have done this for every light, do your actual render, binding each light's distance texture to all of the rendering primitives. To check whether a particular fragment is in shadow, compute its radial distance to the light exactly as before and compare it against what is in the texture. If the computed radial distance is less than or equal to the stored radial distance, the fragment is illuminated.
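The per-fragment test in the lighting pass would then boil down to something like this (again just a sketch; the bias is an assumption of mine, there to absorb small precision differences between the two passes):

```c
/* Lighting pass: compare the fragment's recomputed radial distance
   against the value sampled from that light's distance texture.
   The bias is an assumed fudge factor, tuned per scene. */
int fragment_is_lit(float computed_dist, float stored_dist)
{
    const float bias = 1.0f / 1024.0f; /* assumption, not part of the method */
    return computed_dist <= stored_dist + bias;
}
```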

Because all the computations are done in fragment programs, there shouldn't be much of a problem with depth-fighting issues. And if there are, the shadow maps can be rendered from back-facing polygons just as easily as from front-facing ones (assuming all of the objects are closed volumes). Also, your shadow maps are only as large as the screen; there's no longer a need for massive shadow maps. Lastly, luminance (single-channel) floating-point textures can be used on fragment-program hardware to alleviate the issue of precision.
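The back-face variant would amount to flipping the cull face while drawing the distance map, roughly like this (glEnable and glCullFace are the standard GL calls; the callback is a placeholder):

```c
#include <GL/gl.h>

/* Sketch: build the distance map from back faces (valid only for
   closed volumes), so the stored surface sits behind the lit front
   surface and self-shadowing precision problems are hidden. */
void render_distance_map_backfaces(void (*draw_occluders)(void))
{
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);   /* discard front faces; back faces reach the buffer */
    draw_occluders();       /* placeholder for the application's scene draw */
    glCullFace(GL_BACK);    /* restore the usual culling for the main pass */
}
```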

Is there something I've overlooked here? It seems way too easy to do (given the hardware, of course), and it seems like a virtually perfect method for doing shadow maps. If it were really this easy to do camera-space shadow maps, I'd think someone would have figured it out by now.

Interesting idea. It seems like it should work. Here are some quick thoughts I had:

  1. You wouldn't need to render your screen-space shadow into the color buffer at all… only into the depth buffer, right? I think there's hardware for that, and it would save you having to make an off-screen floating-point buffer.

  2. You would need 1 screen-size shadow depth texture per light, I think. Would there be any memory issues with this?


> You wouldn't need to render your screen-space shadow into the color buffer at all… only into the depth buffer, right? I think there's hardware for that, and it would save you having to make an off-screen floating-point buffer.

Well, the problem is that the depth compare, during the actual render of the scene, does not provide the functionality I want. First, if I understand depth textures correctly, you can only have one of them bound at any one time. Also, depth textures cull pixels automatically; they don't report the result to a fragment program that could make an intelligent decision about what to do if the test fails.

Either of these restrictions necessitates (2n)+1 passes for n lights, while doing it as I have described only requires n+1 passes. The 2n comes from having to render a depth texture and then render with a shader for that particular light, once per light. Under my method, all the lighting is done in a single pass (well, to the extent that all the lighting can be done in a single pass).
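For example, with n = 4 lights, the depth-texture route takes (2×4)+1 = 9 passes, while the radial-distance approach takes 4+1 = 5.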

That being said, if the depth texture could be accessed like a regular texture map in the fragment shader, then yes, I wouldn’t need a color texture.

> You would need 1 screen-size shadow depth texture per light, I think. Would there be any memory issues with this?

True, but most shadow-mapping techniques suggest a 1024x1024 texture per light (and even those sometimes don't produce terribly great results). If you can afford that, you can certainly afford a screen-sized texture. And if you can't afford the larger one, you might still be able to afford the smaller one.
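To put rough numbers on it (assuming 32-bit single-channel floats): a 1024x1024 map costs 4 MB per light, while a 1024x768 screen-sized one costs 3 MB.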

I didn’t realize the restrictions on depth textures. Haven’t really used them much.

If you plan to do multiple lights per pass, you’ll need multiple floating point buffers (1 per light). If you do them one at a time and accumulate lighting results, you’ll only need 1 offscreen pbuffer (with depth component). I guess you’d just have to decide how to trade off between memory and number of passes.
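Roughly what I mean, as a sketch (the draw_* callbacks are placeholders for the application's rendering, and the framebuffer is assumed to start out cleared to the ambient term):

```c
#include <GL/gl.h>

/* Sketch: reuse a single offscreen distance buffer, accumulating each
   light's contribution into the framebuffer with additive blending. */
void accumulate_lighting(int num_lights,
                         void (*draw_distance_map)(int light),
                         void (*draw_lit_scene)(int light))
{
    for (int light = 0; light < num_lights; ++light) {
        draw_distance_map(light);      /* refill the one pbuffer */

        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);   /* add this light's contribution */
        draw_lit_scene(light);
        glDisable(GL_BLEND);
    }
}
```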

Anyway, your technique sounds great… it seems to have all the advantages of shadow maps without the disadvantages. There must be something wrong with it.

– Zeno

If I understand your post correctly, then you suggest rendering both passes from the observer’s camera point of view. But the point of the shadow map is to help you resolve visibility with respect to the light source. Rendering the first pass from the observer’s point of view won’t do this, I’m afraid.

Basically, your first pass computes at each pixel the distance to the light source. Then your second pass also computes the same distance to the light source, and when doing the comparison, the depths will be exactly equal, which doesn’t compute shadows!
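In other words, the test degenerates like this (a plain-C sketch; the names are just for illustration):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dist(Vec3 a, Vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return sqrtf(dx * dx + dy * dy + dz * dz);
}

/* Both passes rasterize the same geometry from the same camera, so at
   any pixel the first pass stored the distance of the very fragment
   the second pass is shading.  An occluder between this point and the
   light lands on a *different* pixel, so it never enters the test. */
int degenerate_shadow_test(Vec3 frag_world_pos, Vec3 light_pos)
{
    float stored   = dist(frag_world_pos, light_pos); /* pass 1 wrote this */
    float computed = dist(frag_world_pos, light_pos); /* pass 2 recomputes it */
    return computed <= stored; /* always true: every fragment appears lit */
}
```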

Eric

> Basically, your first pass computes at each pixel the distance to the light source. Then your second pass also computes the same distance to the light source, and when doing the comparison, the depths will be exactly equal, which doesn't compute shadows!

Hmmm… Good point.

I knew it was too easy.

Too bad it didn't work out. Actually, there has been some research done on similar methods; search for "forward shadow mapping" on Google and you should find the paper. It's quite complicated though, involving IBR-style image warping and stuff.

-Ilkka

Korval,

Yes, it’s unfortunate that it’s not just that simple.

There are approaches along these lines that I think will pan out eventually, but good real-time shadows can be a real pain.

Cass