projected render-to-texture shadows?

I read here:
http://www.beyond3d.com/forum/viewtopic.php?t=18053 that the UT2x engine's model shadows were projected render-to-texture.

Basically, it lets you make soft shadows just by blurring the texture (something you can't do with a shadow map, right?), which sounds nice.
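The idea can be sketched on the CPU: render the occluder from the light's point of view into a small mask texture, blur that mask, and project the blurred mask onto the receiving geometry. This is a minimal Python illustration of the blur step, not UT2x's actual code; the texture sizes and radius are made up:

```python
# Sketch of the projected-shadow idea: a hard occluder mask is
# rendered into a texture, then blurred before being projected.

def box_blur(tex, radius=1):
    """Box blur over a 2D grid of floats in [0, 1], clamped at edges."""
    h, w = len(tex), len(tex[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += tex[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

# A hard-edged one-texel "occluder" rendered into a 5x5 shadow texture...
shadow_tex = [[0.0] * 5 for _ in range(5)]
shadow_tex[2][2] = 1.0

# ...becomes a soft gradient after one blur pass; projecting this
# texture onto receivers gives a cheap soft-edged shadow.
soft = box_blur(shadow_tex, radius=1)
```

On the GPU this would be a fullscreen blur pass over the shadow texture followed by projective texturing onto the receivers.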

Is this a popular method? Does it have any major drawbacks (apart from not being able to do self-shadowing)?

wizzo

Basically, it lets you make soft shadows just by blurring the texture (something you can't do with a shadow map, right?)

You can blur a shadow map’s edges for soft shadows. Nvidia has a paper and demo on it. I implemented the paper using the nv40 profile in Cg, taking 64 randomly jittered samples only at shadow-map edges, and it’s quite nice. Also, according to the guy at NV who did this demo, John Carmack was very interested in this technique of making soft shadow maps and will be doing something very similar in id’s next engine. :slight_smile:
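The key point is that with a shadow map you filter the *comparison results*, not the depths themselves (percentage-closer filtering). A rough CPU sketch of jittered PCF, with made-up texture access and radius parameters rather than the demo's actual Cg code:

```python
import random

def sample_depth(shadow_map, u, v):
    """Nearest-texel lookup in a depth map stored as a 2D list."""
    h, w = len(shadow_map), len(shadow_map[0])
    x = min(w - 1, max(0, int(u * w)))
    y = min(h - 1, max(0, int(v * h)))
    return shadow_map[y][x]

def jittered_pcf(shadow_map, u, v, frag_depth, samples=64, radius=0.05):
    """Fraction of jittered depth comparisons that pass (i.e. are lit).

    Averaging the pass/fail results, rather than averaging depths and
    comparing once, is what makes this a usable soft-shadow factor.
    """
    lit = 0
    for _ in range(samples):
        du = random.uniform(-radius, radius)
        dv = random.uniform(-radius, radius)
        if frag_depth <= sample_depth(shadow_map, u + du, v + dv):
            lit += 1
    return lit / samples
```

On nv40-class hardware the same loop runs per fragment, with the jitter offsets typically precomputed into a small lookup texture.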

-SirKnight

Alternatively, if you have unique texturing (à la lightmapping), you can render your shadow map into a lightmap, blur that lightmap, and at run time re-apply it over the geometry - a kind of “real-time lightmapping”. That’s what NVidia is using in their Dawn demo.

Y.

Here is the demo/paper SirKnight mentioned:

http://download.developer.nvidia.com/dev…le_soft_shadows

This technique seems to give fine results. I don’t have a computer able to run the demo right now, so I can’t speak to the performance, but it definitely looks interesting from the white paper.

Thanks for your answers - I’ll check it all out later.

wizzo

While the technique presented by nvidia is pretty interesting (especially when you consider using branching on the result of a few samples before taking a bunch of them), the performance is definitely not excellent and even pretty poor when shadows occupy a large portion of the screen.
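The branching optimization mentioned above can be sketched like this: take a handful of probe samples first, and only pay for the full sample set when the probes disagree (i.e. the pixel lies in a penumbra). A hedged Python illustration with invented parameters, not the demo's shader:

```python
import random

def depth_at(shadow_map, u, v):
    """Nearest-texel depth lookup with clamped coordinates."""
    h, w = len(shadow_map), len(shadow_map[0])
    x = min(w - 1, max(0, int(u * w)))
    y = min(h - 1, max(0, int(v * h)))
    return shadow_map[y][x]

def adaptive_pcf(shadow_map, u, v, frag_depth, probe=4, full=64, radius=0.05):
    """Early-out PCF: only penumbra pixels take the full sample set."""
    def lit(du, dv):
        return frag_depth <= depth_at(shadow_map, u + du, v + dv)

    probes = [lit(random.uniform(-radius, radius),
                  random.uniform(-radius, radius)) for _ in range(probe)]
    if all(probes):
        return 1.0          # fully lit: skip the expensive loop
    if not any(probes):
        return 0.0          # fully shadowed: skip it too
    hits = sum(lit(random.uniform(-radius, radius),
                   random.uniform(-radius, radius)) for _ in range(full))
    return hits / full
```

This is also why performance degrades when shadows fill the screen: more pixels land in penumbrae, so fewer take the early-out path.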

From his E3 address, Carmack did mention he was looking into a similar technique, but for very small numbers of samples (like ~2-8 if I remember correctly). Not sure how he’s looking to pull that off, except maybe for very small penumbras.

Originally posted by paradox-mtl:
From his E3 address, Carmack did mention he was looking into a similar technique, but for very small numbers of samples (like ~2-8 if I remember correctly). Not sure how he’s looking to pull that off, except maybe for very small penumbras.
The guy who wrote this NV demo told me that Carmack is changing what he was originally doing to be more like the NV demo, as Carmack really liked it. I’m sure he will also use more samples than what he said in the video, because even 8 is nowhere near good enough. 16 is really a bare minimum. Even then I don’t like the results of 16, so 32 is the minimum I go with right now.

-SirKnight

I took a quick look at the paper, but to me - if I understood it correctly - it doesn’t look that good.
Soft shadows should get softer as the distance to the occluder increases (and stay sharp when the occluder is near), and the darkness of the shadow should also decrease with distance to the occluder. But all that happens in the demo is that the edges of the shadows are blurred uniformly.
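The relationship being described follows from similar triangles with an area light: the penumbra widens in proportion to how far the receiver is behind the occluder. A small sketch of this standard planar approximation (the function name and units are my own):

```python
def penumbra_width(light_size, d_occluder, d_receiver):
    """Planar similar-triangles approximation of penumbra width:
    an area light of a given size, an occluder at d_occluder from
    the light, and a receiver at d_receiver behind it."""
    return light_size * (d_receiver - d_occluder) / d_occluder

# A receiver touching the occluder gets a sharp shadow...
assert penumbra_width(1.0, 2.0, 2.0) == 0.0
# ...and the penumbra grows as the receiver moves away.
assert penumbra_width(1.0, 2.0, 4.0) > 0.0
```

A uniform edge blur ignores this distance term entirely, which is the objection being raised here.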

There are other techniques that IMHO look better (Brabec’s “Single Sample Soft Shadows using Depth Maps” or Arvo’s “Approximate Soft Shadows with an Image-Space Flood-Fill Algorithm”) - at least in the good cases…