
Dynamic soft shadows



Ysaneya
09-04-2002, 08:29 AM
A few days ago, I had an idea for displaying per-pixel soft shadows in real-time. I spent a few hours coding it, and here is a first result:

http://www.3ddev.9f.com/smoothShadow1.jpg

Granted, the scene is _extremely_ simple, but this implementation, which works as an extension to the shadow volumes algorithm, is only fill-rate limited. In addition, the fill-rate overhead for the soft shadows is independent of the geometry complexity, which means it _should_ still run fast on pretty complex scenes.

You'll also notice a few interesting effects; for example, the greater the distance to the light, the softer the shadow becomes, while it stays pretty sharp near the light, as in reality.

The technique doesn't require any fancy stuff except simple pixel shaders. I've had it working on a Radeon 8500, and it could be implemented on a GF3/GF4. Possibly even on a GF1/GF2 with the help of register combiners, by accepting lower quality and speed.

Now I'm going to implement a more complex scene to see if there are any artifacts or unforeseen problems with it.. I'm pretty sure speed will be ok, but you never know.. I'll try to post a demo in a few days (maybe with some code) if I see it's usable.

Y.

PH
09-04-2002, 08:45 AM
Why no explanation of the technique?

Is it a combination of shadow volumes with a projected soft-shadow texture (as shown in an ATI demo)?

Humus
09-04-2002, 08:50 AM
Looks great :)
Looking forward to seeing a deeper description of the technique.

ToolTech
09-04-2002, 08:57 AM
Do you do it as I do in my engine? I add a number of shadow volumes and blend them. Each volume gets a higher value in the stencil buffer, and therefore you can very quickly add soft blended quads over the entire screen using the different stencil layers. Shadow edges farther away drift apart from each other, which gives a soft shadow far from the occluder, while near the occluder the edges almost coincide, which gives sharp edges...
http://www.tooltech-software.com/images/umbra.jpg perhaps not a good picture, but anyway..

Ysaneya
09-04-2002, 09:00 AM
No explanation because I want to make sure it works in a "real" scene before starting to explain it.. 500 tris on screen is not very impressive, I'm sure you'll agree.

Which ATI demo are you talking about? It has nothing to do with softened shadow maps, if that's what you mean..

Y.

Ysaneya
09-04-2002, 09:05 AM
ToolTech: I don't think we do it the same way. I generate the shadow volume only once (which is why I said it doesn't depend much on the scene complexity), but I then do some image-based operations with some per-vertex parameters to control the sharpness (hint! hint!).

Y.

PH
09-04-2002, 09:10 AM
This is what I was thinking of,
http://www.ati.com/developer/sdk/RadeonSDK/Html/Samples/Direct3D/RadeonSoftShadow.html

combined with shadow volumes for self shadowing.

Does your method allow for soft self-shadowing ?

PH
09-04-2002, 09:11 AM
Originally posted by Ysaneya:
but i then do some image-based operations with some per-vertex parameters to control the sharpness (hint! hint!).


Ok, so what about NVIDIA's soft shadow volume demo ( in their Cg browser ) ? Is that related ?

ToolTech
09-04-2002, 09:12 AM
I use the same volume. And then I use the vertex weight method to add a skew matrix for the points at the far side. This way I can interpolate between the near, sharp volume side and the far volume side using vertex weights 0-1.

SirKnight
09-04-2002, 09:14 AM
Originally posted by Ysaneya:
but i then do some image-based operations with some per-vertex parameters to control the sharpness (hint! hint!).


Hmmmm. That sounds kind of like a theory I have been working on to make stencil shadow volumes soft, with the shadow being softest at the 'top' and sharper at the 'bottom'. Just like in that screenshot. It would be kinda cool if we had both come up with the same technique. Of course, my theory needs some more work, it's not quite at the same point as your method there...yet. ;)

-SirKnight

ToolTech
09-04-2002, 09:15 AM
My tech demo in another thread shows the first stage of the shadow algo using only sharp shadows (self shadowing). I will hopefully get up a demo soon that shows the self-shadowing soft shadow version. I wanted some feedback on the first demo before I released the second..

Zeno
09-04-2002, 09:18 AM
Very nice :) I think you should describe your technique even if it turns out not to work. Perhaps one of us would be able to help work around any shortcomings....

-- Zeno

Ysaneya
09-04-2002, 11:05 AM
PH: no, that's not it. I read that it jitters / displaces the light many times to accumulate the shadow into the shadow map, which requires drawing the geometry many times. I don't need that :) And I think my method will work well with self-shadowing because, as I said, it's an extension of shadow volumes, which handle that automatically. However, I haven't tested it yet, which is why I want to try a more complex scene first.. I have tested NVidia's soft shadow volume demo, but I can't remember how they did it. I'll look at it tomorrow..

ToolTech: I don't think I understand well.. it sounds like you accumulate the shadow volumes to determine the inner / outer borders of the shadow.. am I completely off track?

SirKnight: that sounds nice; if we all come up with a way to implement soft shadows, we might end up finding one usable in games, who knows? :)

Zeno: I'll be sure to describe it even if it doesn't work :)

Y.

ToolTech
09-04-2002, 11:54 AM
Ok. I will try to explain in detail.

1. Create a shadow volume out of the object. The volume is a true volume, not just a silhouette.

2. Set the vertex weights for the front cap to 0 and for the back cap to 1. This way I am able to control the slope for each side.

3. Create a secondary matrix that takes its local coordinate system with the z axis aligned from the light source to the center of the occluding object. Create a scale in the x,y plane with scale factors 1, 0.9, 0.8 etc., depending on how many iterations are used (I use about 5). If the light source has a big radius, then the gap between each scale factor is larger; if the light source is small, the gap is small, like 1, 0.99, 0.98, 0.97 etc.

4. Multiply the current transform with this scale matrix and set the secondary weight matrix to the result.

5. For each iteration, draw the shadow volume and increase the stencil values each time. For each iteration you should also update the second weight matrix with a decreasing scale factor.

6. The result is a combination of multiple shadow volumes that are aligned at the occluding object's boundary but warped to a smaller scale at the far side.

7. Apply a blend and a darkening for each stencil value, so that the result of all iterations becomes the fully shadowed color, while at the edges, where the stencil values are low, only a small darkening occurs.

Remember to turn off all specular but keep a dimmed diffuse component.

This way I can create umbra/penumbra for small sharp lamps or larger diffuse lamps. All shadows are self shadowing...

Now I got a report that the tech demo runs at 250 Hz on an Athlon 1800 + GeForce 4, so I am aiming at 60 Hz for good-looking soft shadows within a complex environment.
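The scale-factor and darkening bookkeeping in steps 3-7 can be sketched in a few lines. This is a hedged sketch only: the function names, the max_shrink cap, and the darkening curve are my assumptions, not ToolTech's exact values.

```python
def scale_factors(light_radius, iterations=5, max_shrink=0.5):
    """Scale factors for the far cap of the volume, one per pass.

    A big light radius means a wide gap between factors (e.g. 1.0,
    0.9, 0.8 ...); a small radius means a narrow gap (1.0, 0.99,
    0.98 ...). max_shrink caps the total shrink for the largest light
    (the cap value is an assumption).
    """
    gap = max_shrink * min(light_radius, 1.0) / max(iterations - 1, 1)
    return [1.0 - i * gap for i in range(iterations)]

def darkening(stencil_value, iterations=5, full_shadow=0.7):
    """Blend weight for a pixel covered by `stencil_value` volumes.

    Pixels covered by all passes reach the fully shadowed darkening;
    pixels near the edge (low stencil value) are only slightly darkened.
    """
    coverage = min(stencil_value, iterations) / iterations
    return full_shadow * coverage
```

Each full-screen quad pass would then darken the pixels whose stencil value matches, so edge pixels (touched by few of the skewed volumes) end up in a lighter penumbra.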

Ysaneya
09-04-2002, 12:18 PM
Hmmm, this isn't very different from the ATI demo. You iterate many times and displace the light source a little to get a penumbra effect. What confused me is that you said you used the same shadow volume, while you actually render it many times (5!). You also need to apply a full-screen quad for each iteration. I do not need any of this: I draw the shadow volume only once, and at the end of the process I apply only one full-screen quad. On the other hand, my method suffers from small artifacts (which I'll call shadow melting), but I'm trying to minimize them so that they're almost invisible.

Y.

Nakoruru
09-04-2002, 08:53 PM
I dunno if this has already been addressed, but to be perfectly pedantic... Shadow softness is not based on distance from the light, it is based on distance from the shadow caster.

To demonstrate, make a shadow on the wall with your hand, if you move it very close the shadow gets sharp, if you move it away it gets fuzzy. The distance from the light to the shadow never changes.

This may be what your method does, but you just misstated it.

Moving the light away from the shadow caster makes shadows sharper because the light source becomes more and more like a point light.

gaby
09-04-2002, 11:14 PM
This looks perfect: please, describe your method.

Sincerely yours,

Gaby

PS: it looks like a hoax.

rIO
09-05-2002, 01:33 AM
This looks damn good!
Can you do the test you were talking about and explain the technique in more depth? :)

The most interesting part is that the shadow doesn't seem to abruptly change direction on the front-facing plane of the green "thing" receiving it.

I used volume jittering and projected shadows, but in both cases the shadow borders exactly followed the receiving surface's inclination.

tarantula
09-05-2002, 05:18 AM
Do you draw into the color buffer (alpha) while setting the stencil buffer? If you do, your method might be similar to the one I made up..

Assign alpha values to the vertices of the shadow volume.. like 0 at the vertices closer to the light and 1 at the farther ones.

Draw into dest alpha while setting the stencil buffer. Now the alpha value at any shadowed point in the color buffer is almost directly proportional to the distance from the point to the corresponding point on the occluder.

If these (almost?) linear alpha values can be converted to some non-linear space, then we can have an alpha intensity that corresponds to the darkness of the shadow at the point.

Something like (0.0 ... 0.5 ....0.9..1.0) -> (0.0 ... 0.01...0.3..1.0)

If a transformation like that can be done using the pixel shader (I dunno much about shaders), then we should be able to get soft shadows by drawing a screen-aligned black quad with an appropriate blend mode (unless I goofed up somewhere).

[This message has been edited by tarantula (edited 09-05-2002).]
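The non-linear remap tarantula describes could be sketched like this. A hedged sketch: a simple power curve is one choice of curve that is shallow near 0 and steep near 1, and the exponent is my assumption, not something from the post.

```python
def remap_alpha(a, exponent=6.0):
    """Map a near-linear distance-coded alpha in [0,1] to a shadow
    darkness in [0,1], so only points well behind the occluder
    approach full darkness (exponent is an assumed tuning value)."""
    a = min(max(a, 0.0), 1.0)   # clamp to [0,1]
    return a ** exponent
```

In hardware this curve could live in a 1D lookup texture; a screen-aligned black quad blended against destination alpha would then apply the shadow in one pass.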

Nakoruru
09-05-2002, 06:40 AM
tarantula,

Your method does not make much sense to me. The alpha values in the shadow will come from the polygons that make up the sides of the shadow volume. This may bear no resemblance whatsoever to the distance from the surface to the occluder silhouette, unless you are using an orthogonal projection and the view is perpendicular to the direction of the light source.

EDIT: In addition, it is not just the distance from the occluder silhouette, it is the distance from the edge of the shadow. Any surface for which the light is completely occluded will be completely in shadow, no matter how far away from the occluder it is.

[This message has been edited by Nakoruru (edited 09-05-2002).]

AdrianD
09-05-2002, 06:42 AM
I think this method is similar to the "depth of field" demo from the NVidia SDK.
The shadow is drawn once using stencil volumes into a texture which uses automatic mipmap generation. Then two different LOD levels of this texture are blended together... according to the distance of the pixel to the light source (i.e. written into the alpha part of the framebuffer).

Is this correct?

If yes, how do you plan to avoid these "melting" problems? This was the reason I stopped my research on this method...

ToolTech
09-05-2002, 07:23 AM
Here is a demo of my technique...
www.tooltech-software.com (http://www.tooltech-software.com)

Ok. It's a bit slow, but some day I will get my algo running faster ;-)

/Anders

Ysaneya
09-05-2002, 08:33 AM
Adrian is the closest to the method so far :) Although I'm not using automatic mipmap generation.

To avoid the "melting" problems, my idea is to base the sharpness of the shadow on the distance to the occluder, then adjust it by a factor based on the distance to the viewer. This doesn't remove the melting problem, but it greatly helps to reduce it.

Still working on the demo with a complex scene. I hope to get it finished for next weekend.. be patient :)

PS: ToolTech, trying your demo now..

PPS: Ok, tried your demo. I don't think it works as expected on Radeons. It looks like a standard shadow volume (shadows still look sharp) which is just a bit darkened compared to lit areas. A color comparison gave me an average gray value of 176 for lit areas and 173 for darkened areas. In addition, overlapping shadows are darker than non-overlapping shadows (value of 170). I'll try it on a GF4 when I have some time. Performance was ok. How many tris are there in your scene?

Y.


[This message has been edited by Ysaneya (edited 09-05-2002).]

tarantula
09-05-2002, 08:44 AM
Nakoruru,
I goofed up :D
I was thinking that if I could get the alpha values in the shadow region then I could transform the alpha.. but now I guess it isn't easy to get that in the first place, and the min or max of the alpha would still have to be found.

ToolTech
09-05-2002, 08:50 AM
The HW must support the vertex weights extension. Otherwise it will fall back on a plain shadow volume implementation.

Nakoruru
09-05-2002, 08:15 PM
tarantula,

I figured that is what you were trying to do. Ysaneya, how do you calculate the distance between the shadowed surface and the occluder's silhouette?

One idea off the top of my head is to project a depth map (like in shadow mapping) into the scene and calculate the difference between the occluder depth (from the shadow map) and occludee depth from the light (perhaps from a 1d texture) and store it in the alpha.

It is the difference between the occluder and the occludee depth from the light, not the absolute distance from the light, that is needed to calculate how blurry a shadow should be.

The bigger the difference, the bigger the light appears relative to the occluder, the blurrier the shadow should be.
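This depth-difference idea can be written down as a similar-triangles estimate. To be clear, the formula below is my phrasing of the idea, not something Nakoruru states: an area light of a given size, an occluder at one depth, and a receiver farther behind it give a penumbra that widens with the depth gap.

```python
def penumbra_width(light_size, occluder_depth, receiver_depth):
    """Approximate penumbra width on the receiver.

    Depths are measured from the light: occluder_depth would come from
    the shadow map, receiver_depth from the surface being shaded. The
    bigger the gap, the bigger the light appears relative to the
    occluder, and the blurrier the shadow.
    """
    diff = max(receiver_depth - occluder_depth, 0.0)   # clamp: no gap, no blur
    return light_size * diff / occluder_depth
```

Stored per pixel (e.g. in destination alpha, as discussed above), this value could drive the blur radius of a later filtering pass.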

Ysaneya
09-05-2002, 11:02 PM
Actually... I don't. It was not a typo; I use the distance to the light for now, not to the occluder. But I'm working on it.. I'm sure there's an answer. I've also been playing with the idea of storing it in the alpha buffer, but haven't found a good solution.. yet. I'm not sure the depth map projection idea will work.. if you want point lights, that is.

Y.

davepermen
09-05-2002, 11:06 PM
Originally posted by Nakoruru:
[...]

Actually, you could do the whole shadowing that way.. as I know you like shadow maps, why don't you set this up? :D

tarantula
09-05-2002, 11:30 PM
Are you sure the difference between the occludee and occluder depths from the light determines the blurriness of the shadow for any occluder?
What if the occluder is concave towards the side of the occludee? I feel it's the silhouette that determines it.. so the depth of the silhouette edges should be interpolated, and the difference should be between this and the occludee depth.

[This message has been edited by tarantula (edited 09-06-2002).]

pocketmoon
09-06-2002, 12:34 AM
OK, so with shadowmaps you have a texture containing a depth map from the lights POV.

When rendering, you're comparing the depth (in light space?) of your pixel with the depth in your shadow map.

To get soft shadows you can sample your shadow map more than once and compare the number of shadowed to non-shadowed results, i.e. ALL shadowed = 100%, 3 of 4 shadowed = 75%, 50%, 25%, 0%.

Could we do the first sample at the pixel location and then use the actual difference between the two depths to determine how far away we jitter our other samples? Samples closer to the shadow depth should have their further samples jittered more.

Hmmm... horribly incorrect.
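The basic multi-sample test described above (before the jitter extension) is percentage-closer filtering, and can be sketched like this. The 4-tap offsets, bias value, and list-of-rows depth map are assumptions for illustration, not from the post.

```python
def pcf_shadow(shadow_map, x, y, pixel_depth, bias=0.01):
    """Fraction of shadow-map samples that see the light:
    1.0 = fully lit, 0.0 = fully shadowed.

    shadow_map is a 2D list of light-space depths; (x, y) is the
    texel the pixel projects to; pixel_depth is the pixel's own
    light-space depth.
    """
    offsets = [(0, 0), (1, 0), (0, 1), (1, 1)]   # assumed 4-tap kernel
    lit = 0
    for dx, dy in offsets:
        sx = min(max(x + dx, 0), len(shadow_map[0]) - 1)  # clamp to edges
        sy = min(max(y + dy, 0), len(shadow_map) - 1)
        if pixel_depth <= shadow_map[sy][sx] + bias:
            lit += 1                              # this sample sees the light
    return lit / len(offsets)
```

The fractional result is exactly the 100% / 75% / 50% / 25% / 0% lighting described above; the jitter idea would move the offsets instead of using a fixed grid.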

Oh well, here's some nice soft shadow papers:
http://www.mpi-sb.mpg.de/~brabec/doc/



[This message has been edited by pocketmoon (edited 09-06-2002).]

davepermen
09-06-2002, 12:46 AM
Originally posted by tarantula:
[...]

Shadows are not blurred only at the edges; they are blurred everywhere, even inside the shadow.. think about it: when you cut a mesh in the middle, the shadow in the middle gets blurred as well. And yes, it is a function of the distance to the light and the distance from the light to the nearest occluder shading that pixel that helps determine that blurriness factor.

I know of several inexact parts of this solution, for sure, but it is the way I will implement it in my raytracer, as I only need to trace one ray, and I get all those values anyway, so why not? :D Then I do an image-space blur depending on those two values per pixel, as well as the screen-space normal and the screen-space depths.. hope I'll get that working nicely.. :D

Nakoruru
09-06-2002, 04:29 AM
daveperman,

Yeah, I was thinking that my solution would fit right into a solution that already uses shadow maps. It just requires getting the difference and clamping it using a fragment program, then storing that difference in alpha while you store the shadow in the color.

Big problem is that it eliminates one of the major advantages of shadow maps; being able to do more than one light in one pass.

I was wondering about occluder distance versus silhouette distance. The reason occluder distance would work is that if you blur the middle of a shadow it will still likely be completely shadowed, because all the pixel samples are shadowed.

I do not think that using the occluder distance or the silhouette distance is absolutely correct, but occluder distance is a lot easier.

pocketmoon
09-06-2002, 04:42 AM
Originally posted by pocketmoon:
[...]



What about this:

Get your depth map from light POV, as usual.

Then generate another texture from the depth map using multi-sample edge detection (the depth texture is bound to multiple input textures and a filter kernel applied...). This gives you a fake penumbra map.

Now do the standard 'am I in shadow' SGI extension thingy, but also use a penumbra-map lookup to soften the shadow. Big edge values (large changes in depth) should equate to softer shadows.
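The penumbra-map generation step could look like the following CPU-side sketch. The 4-neighbour kernel and edge-clamped sampling are my assumptions; on hardware this would be the multi-texture filter pass described above.

```python
def penumbra_map(depth):
    """Per-texel edge magnitude of a 2D light-view depth map
    (list of rows). Large values mark silhouette discontinuities,
    where the shadow test should be softened."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = depth[y][x]
            edge = 0.0
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx = min(max(x + dx, 0), w - 1)   # clamp at borders
                ny = min(max(y + dy, 0), h - 1)
                edge = max(edge, abs(depth[ny][nx] - d))
            out[y][x] = edge
    return out
```

Flat regions (inside a shadow or fully lit) give zero edge values and stay sharp; only texels near a depth discontinuity get a softening weight.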

Who was recently talking about generating shadow volume geometry by reading back a depth map and doing edge detection on the CPU?

R.




[This message has been edited by pocketmoon (edited 09-06-2002).]

davepermen
09-06-2002, 04:44 AM
Originally posted by Nakoruru:
[...]

Why does it hurt being able to render more than one light in one pass? How does this work at all, btw? Dunno :D (at least, on a GF2MX I don't think it changes much :D)

It surely is a little more complex than doing hard shadows, but it adds much to the overall result, so why not? And I think it's faster than multisampling the shadow map as well..

Yeah, it's not 100% correct, but rather a statistical approach which only has to be one thing: fast and simple. I think it looks great for its simplicity.. and it will be perfect for raytracing. On shadow maps, doing multiple samples and multiple tests (which is not perfectly accurate either, but you will not see the difference 99.9999% of the time :D) is quite fast..

tarantula
09-06-2002, 06:12 AM
If you use blurring, then using the occluder distance should be fine. But for a different approach it might not work at all.

Nakoruru
09-06-2002, 07:20 AM
daveperman,

The way I was thinking about the problem required multiple passes, but now I see the better solution.

You could do multiple shadow map comparisons from different places on the shadow map. The total area of these samples would be determined by the distance from the light (in this case, distance from the light is correct).

You would either add these up or average them, depending on how you wanted the math to work.

This would be more efficient than rendering the shadow map multiple times from different locations, but of course, it is not as accurate.

If you think of rendering the stencil shadow or shadow map multiple times as blurring the light source, this method would be like blurring the shadow casters.

In other words, keeping the light in the same place while moving the world. The problem of course is that these should be completely equivalent, but because it does not actually render the world again, it loses any parallax effects. However, it should work well because the number of samples and the area of the samples is varied per pixel. I think it would look good unless the light is very big or very close to the shadow.

I probably need to break out paintbrush and draw this idea out, because I probably did a horrible job of explaining it.

Ysaneya
09-06-2002, 07:54 AM
Hi all,

I have uploaded a few new images of my current build. Please forgive my poor modelling skills :)

A stupid room that i have modelled (http://www.3ddev.9f.com/smoothShadow7.jpg)

A close-up view to one of the shadows (http://www.3ddev.9f.com/smoothShadow6.jpg)

The scene is a single whole model. Notice the self-shadowing on the barrels.

Still trying to figure out how to fix the visual artifacts. I also have a strange shadow color pop (sometimes it becomes less dark.. hmm). A busy weekend ahead :)

Y.


[This message has been edited by Ysaneya (edited 09-06-2002).]

davepermen
09-06-2002, 08:16 AM
Originally posted by Nakoruru:
I probably need to break out paintbrush and draw this idea out, because I probably did a horrible job of explaining it.

Uhm yeah.. I think that would help a lot :D You did a bad job this time :D Or is it simply the usual problem of not using the same vocabulary for the same things? :D Cheers..


To Ysaneya: those images look cool.

Nakoruru
09-06-2002, 08:17 AM
What hardware are you using?

I hope you will take this as constructive criticism, because these shadows look pretty good (keep in mind that some people on this board seemed to complain very hard about the shadows in Doom 3).

Some of the barrels are not casting shadows, and the shadow of the one on the left does not seem much sharper near the base than it does on the wall.

Looking at a trash can in my office which has very soft shadows due to big fluorescent lights... The base of the trash can's shadow is extremely sharp, but then gets fuzzy very fast.

If you zoom in, can you see the shadows get sharper towards the base of the barrels?

Everyone here, including me, seems anxious for you to get an example we can see for ourselves going this weekend. Good luck!

V-man
09-06-2002, 08:38 AM
Ysaneya,

Your FPS looks quite interesting. Did you fake it or what? :)

What is the scene complexity, # of passes?

great stuff!
V-man

Korval
09-06-2002, 09:48 AM
but it is the way i will implement it into my raytracer, as i only need to trace one ray, and i get those values all anyways, so what?

If you're going to bother with a ray tracer at all, you should do it right and stochastically trace the area of the light volume (shadows are soft because lights have area; a true point light wouldn't cause soft shadows).

Ysaneya
09-06-2002, 10:15 AM
I used an Athlon 1.4 with a Radeon 8500. The FPS could be improved a lot.. because, surprisingly, I'm doing everything in immediate mode with software vertex processing.

All barrels cast shadows, but given the position of the light and the barrels in that shot, the shadows are hidden behind the barrels. However, I do agree that the shadow is not that sharp near the base, and it's because (as I already stated) I use the distance to the light and not to the occluder. By tweaking some of the parameters you can get sharp shadows near the base, but I'm still trying to find a way to get the correct settings everywhere :(

However, even if I don't find a way to fix that, I still believe the shadows are pretty nice. The room looks like realtime high-res lightmaps when you move around.

The scene is not very complex (I guess around 2000 tris), and my technique uses very few passes. A grand total of 3 for now: one to fill the Z values, one to draw the shadow volume, then one to display the scene with diffuse textures + shadows.

Y.

Nakoruru
09-06-2002, 10:48 AM
Ysaneya,

Yes, I said they do look really good, I was just nit-picking. Studying, trying to figure out what is going on.

Daveperman,

It is a problem of figuring out exactly what words to use, and how to explain an idea that really needs figures to be described properly. I've come up with a much better description, one that may not even need pictures. I'll post it after I edit it some more.

You would not like it however because it will only work for shadow mapping ^_^

davepermen
09-06-2002, 11:13 AM
Originally posted by Korval:
If you're going to bother with a ray tracer at all, you should do it right and stochastically trace the area of the light volume (shadows are soft because lights have area; a true point light wouldn't cause soft shadows).

It's for realtime raytracing, and I prefer to do some nice pixel shading on the GPU afterwards instead of tracing more rays.. so it's practically free.. :D

Nakoruru
09-06-2002, 11:43 AM
This is going to be a long post. It's a more coherent (I hope) version of my idea for doing soft shadows on next-gen hardware. Of course people will come up with lots of different ways; this is just my idea. It's kinda abstract and general because I wrote it with the idea that I might want to post it with a demo someday.

The usual shadow mapping technique is to first render the scene from the point of view of the light source, then render the scene from the eye's point of view while projecting the image produced from the light's view onto the eye's view. By projection, I mean that the light-view image is mapped into the scene as if it had been projected from the light source. If you can match a pixel in the eye's view with a pixel from the light's view, then that means the light shines on it. That is because only pixels in the light's point of view actually receive light. If there is a mismatch, then they are in shadow.

This will produce sharp shadows because the light source is modeled as an infinitely small point. This means that any surface will always block all or none of the light. Now you see it, now you don't. Soft shadows are produced by real light sources because real light sources can be partially blocked. If, from a point on a particular surface, only half the light source is visible, then the point receives only half as much light.

One way to model this is to render clusters of point lights. These closely spaced lights will produce multiple shadows which blend together and give a soft effect. The problem is that it greatly increases the amount of work that has to be done proportional to how many lights are in the cluster. 12 lights in a cluster is 12 times the work.

A cheaper way to simulate this effect would be to take the -area- of the scene from the light's view which maps to the eye's view and have the shadow result be the percent of that area which is in shadow when compared to the pixel in the eye's point of view. Taking 12 samples of an area should be cheaper than re-rendering a scene 12 times.

The size of the area being sampled (and the number of samples needed to get a good result) depends on how big the light is from the point of view of the surface being rendered. From the surface being rendered, if you look towards the light, how much of it is being obscured is how much shadow you are in at that point. The closer to the light, the bigger the light will appear (and of course, the bigger the light, the bigger it will appear).

The problem with this approach compared to rendering the scene 12 times is that, when the scene is rendered 12 times, it is from 12 slightly different points of view. Shadows change shape depending on the angle they are cast from; a profile casts a very different shadow than one cast from the front. However, this should be acceptable as long as the light source is not too big or too close to the shadows, because the difference between object silhouettes is not that dramatic.

Also, other approaches which re-render the shadows multiple times do not actually consider multiple viewpoints, but just skew the shadows. Such skewing does not change the shape of the shadow, only where it is projected. Such methods should be equivalent to this method.

What shape is the sample kernel? Its size is determined by the distance from the light to the surface, but what is its shape? The simple answer is circular: this would be the projected shape of a spherical area light source. It is the easiest because it does not change depending on which vector the light is cast along. The sample points could simply be evenly distributed in a circle and still give good results. Other shapes would require that the 3D cluster of points defining the shape of the light source be projected onto the light-view image. Once those points were known, samples could be taken. Very unusually shaped lights could be modeled this way.
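The circular kernel described above can be sketched as follows. This is a hedged sketch: the inverse-distance radius law and the sample count are my assumptions about how "the closer to the light, the bigger the light will appear" might be turned into numbers.

```python
import math

def kernel_samples(light_radius, distance, n=12):
    """(dx, dy) shadow-map offsets for one shaded point: n samples
    evenly distributed on a circle whose radius is the apparent
    size of a spherical light at the given distance."""
    r = light_radius / max(distance, 1e-6)   # apparent size of the light
    return [(r * math.cos(2.0 * math.pi * i / n),
             r * math.sin(2.0 * math.pi * i / n))
            for i in range(n)]
```

Averaging the shadow-map comparison over these offsets gives the percentage-in-shadow result; a surface closer to the light gets a wider circle, hence a softer shadow, matching the kernel-sizing argument above.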

That was a pretty abstract explanation. One familiar with shadow mapping might wonder why I did not mention depth values. The fact is that shadow mapping does not rely on depth values, only on finding mismatches between the projected light view and the eye view. Depth values are just the most obvious thing to compare. Other implementations use a single color for each polygon (called index shadow mapping). Mismatches in color result in shadow.

When mapping a spherical area light source, one only needs the distance from the surface to the light to approximate the apparent size of the light well. This could be obtained by mapping a 1D texture so that it encodes how much the filter kernel needs to be scaled at each distance.
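A minimal sketch of that distance-to-scale mapping, assuming the apparent angular radius of a sphere is what gets baked into the 1D texture (the function name and the clamp are my assumptions, not from the post):

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// For a spherical light of radius lightRadius seen from distance dist,
// the apparent angular radius is asin(lightRadius / dist).  On 2002
// hardware this curve would be baked into a 1D texture indexed by
// distance rather than evaluated per pixel.
float kernelScale(float lightRadius, float dist)
{
    float s = std::min(lightRadius / dist, 1.0f); // clamp: inside the light
    return std::asin(s); // radians; larger when closer to the light
}
```

This matches the observation above: the closer the surface is to the light, the bigger the light appears and the wider the kernel becomes.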

If one wants to project a cluster of light sources then one needs to be able to determine the 3d coordinates of each light relative to the surface being rendered. One could then project those points onto the shadow map as sample points. It would probably be a fairly involved pixel shader.

I would love to turn all this theory into an implementation. Right now one could implement this on a Radeon 9700 or using nVIDIA's NV30 emulation. I thought about this theory months ago, but at the time I abandoned it as hopeless until more capable hardware came about.

davepermen
09-06-2002, 12:15 PM
i would be interested if you could get your shadows working in tenebrae, that would be very cool. nutty then could implement high dynamic color range, and then we beat doom3 http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Nakoruru
09-06-2002, 11:10 PM
Now is our chance to get ahead of Carmack. The Radeon 9700 is out, but he is too busy trying to ship Doom 3 to do any real research. Of course, he seems to have the ability to implement a whole graphics engine over a weekend ^_^

davepermen
09-07-2002, 04:23 AM
hehe, he does the softshadows over the weekend as well, so what? http://www.opengl.org/discussion_boards/ubb/biggrin.gif
can't wait to get my own r300 http://www.opengl.org/discussion_boards/ubb/biggrin.gif
lets beat carmack..hehe http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Lars
09-12-2002, 04:10 AM
Hi

the weekend is over for some days, but no replies...mmmh

I also tried to do some soft shadowing, but I'm still having some problems.
I am rendering the depth values of my objects and the volumes into a pbuffer and do my stenciling there. After that I apply this pbuffer as a texture over the original scene, and because the texture is smoothed (currently with mipmapping) I get a soft shadow appearance.
But I have problems getting the info into the alpha channel and blending the lighted scene with the destination alpha.
That should be solved next week (I have to play ultimate the whole weekend).

What I could see so far (the variant that renders a black rectangle where the shadow should be) wasn't that promising when going for realistic quality, because you don't get the effect of the shadows getting softer as the distance between caster and receiver gets larger.
But it looks much better than hard shadows, and the performance decrease is very low (since you can keep the pbuffer smaller than your rendering buffer, it is only about 10% to 20% slower). You do get artifacts when moving, though, from biasing the texture LOD.
I think this could be solved by smoothing with a texture shader or the CPU, which should give better smoothing and no need for texture LOD biasing.

So what about your ideas ?
Some info would be interesting even if it is not finished. What problems did you solve and which do you still have ?
Is Carmack beaten ?

Lars

edit: some spelling corrections

[This message has been edited by Lars (edited 09-12-2002).]

Ysaneya
09-12-2002, 09:06 AM
Well, no reply, because i haven't finished working on it yet. But you guessed it, i'm sure :)

So far, i'm still working on my stencil shadows implementation. I'm trying to fix the last bugs, and i implemented per-pixel lighting with bumpmapping. A friend of mine is modelling a 15k tri scene which will hopefully look better than most commercial games. He still has the textures to do, but most of the modelling is done, and let me tell you, it already looks awesome.

I'm gonna start working on the soft-shadows implementation again once the stencil shadows are perfect, as i don't want to deal with 2 different problems at the same time.

Now the week-end is over i might as well explain how i implemented my soft shadows. I'm basically rendering the stencil to a 3D texture, then i generate each level by hardware-accelerated blurring of the texture many times. Usually, 3 or 4 levels are sufficient. After that, i can specify a per-pixel or per-vertex sharpness coefficient in the [0-1] range, which will be the 3rd texture coordinate (r). I project the vertices to the screen to get (s, t). I can then modulate my scene by the 3D texture.

The real trick is to find a good way to specify the sharpness coefficient. So far, i've only had success with the distance from the vertex to the light, corrected by a vertex-to-viewer coefficient so that shadows in the distance don't blur too much. I'm still trying to find a better way to do it, but for that i'd need the per-vertex or per-pixel distance to the occluder, which i don't have. Any ideas are welcome, but i'll be sure to post the demo when it's done, whatever the results are.. just be patient :)
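A hedged sketch of what such a coefficient could look like on the CPU. The post only says the light-distance term is corrected by a viewer-distance term; the exact formula and the kLight/kViewer tuning constants below are my own guesses, not Ysaneya's code:

```cpp
#include <cassert>
#include <algorithm>

// Sharpness coordinate r in [0,1], used as the 3rd texture coordinate
// into the stack of blurred stencil levels: softness grows with the
// vertex-to-light distance, damped by the vertex-to-viewer distance so
// that far-away shadows don't over-blur.  kLight and kViewer are
// assumed tunables, not values from the thread.
float sharpness(float distToLight, float distToViewer,
                float kLight, float kViewer)
{
    float soft = kLight * distToLight / (1.0f + kViewer * distToViewer);
    return std::min(soft, 1.0f);
}
```

With r = 0 the hardware samples the sharpest level of the 3D texture and with r = 1 the blurriest, so a coefficient like this directly drives how soft the shadow looks at each vertex.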

Y.

Ysaneya
09-13-2002, 07:50 AM
I have maybe found a solution to the pixel-to-occluder problem.

I assume we are working with finite-radius point lights. We can define a "position" cube around the light center, the dimensions corresponding to the light radius. Given a vertex, it's easy to transform the world-space position to that light-space [0-1]^3 position.

Before drawing the shadow volumes, render the scene a first time to a texture. Specify the light-space position as the color, so that it's being interpolated over the screen. Now, for each texel of that texture, we've got its light-space position.

When drawing the shadow volume, we can do the same with the shadow quads, except that we specify the light-space position both at the front and at the end vertices.

This effectively means that it's possible, using a projection now, to get 2 values for a single pixel: the light-space position of that pixel, and the light-space position of the occluder. Both are in the [0-1] range, so we can imagine computing a form of distance calculation in a pixel shader, which would in turn give a good sharpness coefficient for the last pass..
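To make the two steps concrete, here is a CPU sketch of the light-space mapping and of the distance computed from the two stored positions. The struct and function names are mine; on 2002 hardware the subtraction and length would live in the pixel shader, reading the two values from the rendered textures:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// World space -> light space [0,1]^3: a cube centered on the light,
// with half-size equal to the light radius.
Vec3 toLightSpace(const Vec3& p, const Vec3& lightCenter, float radius)
{
    return { (p.x - lightCenter.x) / (2.0f * radius) + 0.5f,
             (p.y - lightCenter.y) / (2.0f * radius) + 0.5f,
             (p.z - lightCenter.z) / (2.0f * radius) + 0.5f };
}

// Given the light-space position of the shaded pixel and of its nearest
// occluder, the distance between them is the pixel-to-occluder distance,
// already normalized by the light radius -- a plausible sharpness
// coefficient for the last pass.
float occluderDistance(const Vec3& pixelLS, const Vec3& occluderLS)
{
    float dx = pixelLS.x - occluderLS.x;
    float dy = pixelLS.y - occluderLS.y;
    float dz = pixelLS.z - occluderLS.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```

Note the light center itself maps to (0.5, 0.5, 0.5), and a pixel touching its occluder gets distance 0, i.e. a perfectly sharp contact shadow.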

I haven't tested that yet, but it should work, shouldn't it ?

Y.


[This message has been edited by Ysaneya (edited 09-13-2002).]

davepermen
09-13-2002, 12:04 PM
if i got you right, it's actually what i proposed..

Ysaneya
09-14-2002, 03:04 AM
Don't know, where did you propose that ? I haven't found anything in that thread, except the idea about using a depth-map-like method. This is not what i'm proposing, because 1) i'm not sure it would work well with point lights and 2) you're still limited by the resolution and bit precision of the depth map.

What i'm proposing to do is to calculate, for each pixel of the screen, the position of that pixel in world space, and the position of the nearest occluder in world space too. Then, you can find the distance between these 2 points, which is the distance to the occluder.

Y.

davepermen
09-14-2002, 06:13 AM
i was proposing that you can use the nearest occluder between the point and the light, and the distance to the light, and use the relation between them to find a value to soften by..

i said that, with shadowmaps, thats much more straightforward to _implement_, but has nothing to do with the technique or proposal i did.

i as well talked about a way to _implement_ that in a raytracer, with the very same method. so two different implementations, same proposal, same idea. and your idea is the same..

blurfactor = f(point-light,nearest_hit(point-light));

btw, you can also store the surface normals and the depth in screenspace, to help blurring in different directions with different strengths, hehe http://www.opengl.org/discussion_boards/ubb/biggrin.gif elliptic blur..
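One concrete candidate for the f() in davepermen's blurfactor formula is the similar-triangles penumbra estimate; this particular choice is my reading of the relation he describes, not something either poster wrote out:

```cpp
#include <cassert>

// Penumbra width from the classic similar-triangles relation:
//   w = lightSize * (dReceiver - dOccluder) / dOccluder
// where dReceiver is the distance light -> shaded point and dOccluder
// is the distance light -> nearest occluder on that ray.  Contact
// shadows (dReceiver == dOccluder) stay perfectly sharp, and the blur
// grows as the receiver moves away from the occluder.
float blurFactor(float lightSize, float dReceiver, float dOccluder)
{
    return lightSize * (dReceiver - dOccluder) / dOccluder;
}
```

This captures exactly the relation discussed earlier in the thread: bigger lights and larger caster-to-receiver gaps give softer shadows, while shadows near their caster stay hard.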

Ysaneya
09-15-2002, 02:17 AM
I'm no light expert, but why would you need the distance to the light when you have the distance to the occluder ? My basic idea was to have a term based on the occluder distance alone, which leads to ugly artifacts in the distance (i.e. you're blurring 8x8 or 16x16 pixels at 1 km); so it should be corrected by a term dependent on the distance to the viewer..

For now i got the tolight/toviewer terms working, but it's per-vertex and it's not as good as per-pixel tooccluder/toviewer terms. However, the performance drop is not that bad.. i get a 20 to 30% slowdown in my scene so far. And i haven't implemented vertex shaders yet.

Y.