Me and my (occluder depth) shadows ...



pocketmoon
10-13-2002, 01:28 PM
I won't repeat myself too much here - anyone interested in the occluder distance attenuated shadows I've been playing with can see previous posts in the not-too-distant forum history ;)

First of all, what happens when you use a traditional single depth shadow map for occluder distance effects:
http://www.baddoggames.com/cg/badshadow.jpg

Notice how the torus shadow wipes out the shadows from the ball and bust closer to the viewer. That's because the only shadow depth stored is the one closest to the light, in this case the torus, so any occluder distance effect is determined by that single occluder's distance.


Now, by depth peeling into the shadow buffer we can capture multiple occluder depths. In this case four passes fit nicely in an RGBA16 floating point buffer, although we could pack 8 depths into RGBA32. (In both shots it's a 512x512 shadow buffer.)
http://www.baddoggames.com/cg/deepshadow.jpg

Notice how the arch shadow does not override the ball group shadows, which stay dark.
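
For illustration, the per-texel result of the peeling can be sketched on the CPU like this (made-up C, not the demo's actual GPU passes; in the real thing each layer is a render pass from the light that rejects fragments at or in front of the previous layer's depth):

#include <float.h>
#include <stdio.h>

#define LAYERS 4   /* one depth per RGBA16F channel */

/* collect the LAYERS nearest light-space depths hitting one shadow texel */
static void peel_depths(const float *frags, int n, float out[LAYERS])
{
    float prev = -FLT_MAX;
    for (int layer = 0; layer < LAYERS; ++layer) {
        float nearest = FLT_MAX;
        for (int i = 0; i < n; ++i)                /* nearest depth beyond the last peel */
            if (frags[i] > prev && frags[i] < nearest)
                nearest = frags[i];
        out[layer] = (nearest == FLT_MAX) ? 1.0f : nearest;  /* 1.0 = "no occluder" */
        prev = out[layer];
    }
}

int main(void)
{
    float frags[] = { 0.62f, 0.35f, 0.35f, 0.80f };  /* e.g. arch, ball, ball, floor */
    float layers[LAYERS];
    peel_depths(frags, 4, layers);
    for (int i = 0; i < LAYERS; ++i)
        printf("layer %d: %.2f\n", i, layers[i]);
    return 0;
}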

The effect is stylistic rather than photorealistic and tries to capture the effect of ambient light on shadows. The different shadow strengths on the arch struts look nice - the arch geometry closer to the ground casts stronger shadows.

If anyone is interested I'll post some movies, but they take all day to render due to the demo being OpenGL 1.4 + NV30 emulation!

R.

Nakoruru
10-13-2002, 08:22 PM
Those shadows look really nice. I am glad that you made it clear that you are not going for photorealism; it means I have no reason to argue with you, because it would all boil down to our opinions ;)

davepermen
10-13-2002, 11:02 PM
It does look nice and not photorealistic, so it's what you wanted.

Last time I thought you wanted photorealism, and for that it looked... nice :D but buggy.

But it's nice anyway :D

pocketmoon
10-14-2002, 12:29 AM
Nakoruru, davepermen

I thought adding the non-photorealistic disclaimer would keep you two quiet :)

The occluder distance effect is one that does happen in reality. Sure, it all depends on the light source's size, location (distance) etc., but given a scene where the light distance and size don't change, the important factor is occluder distance.

One thing I wanted to try to achieve is the really dark shadow you get very close to the shadowing object - that really helps visually pin an object down to the surface it's sitting on.
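
As a rough sketch of that fade (my reading of the effect, with made-up names and a made-up falloff constant, not the actual demo shader): the shadow strength drops with the receiver-to-occluder distance, so contact points stay dark and distant casters fade out.

#include <stdio.h>

static float clampf(float x, float lo, float hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* frag_depth, occluder_depth: light-space depths; falloff: tuning constant */
static float shadow_strength(float frag_depth, float occluder_depth, float falloff)
{
    float d = frag_depth - occluder_depth;          /* how far behind the occluder we are */
    return 1.0f - clampf(d / falloff, 0.0f, 1.0f);  /* 1 = fully dark, 0 = no shadow left */
}

int main(void)
{
    printf("%.2f\n", shadow_strength(0.80f, 0.78f, 0.30f)); /* near contact: ~0.93 */
    printf("%.2f\n", shadow_strength(0.80f, 0.35f, 0.30f)); /* distant occluder: 0.00 */
    return 0;
}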

Anyway, my depth peeling/deep shadow map technique works 8)

R.

gaby
10-14-2002, 05:23 AM
Hmm... your effect does not exist in reality!
What you are doing looks like the shadow attenuation that results from a global illumination model... but this exact shadow-distance attenuation effect does not look like anything real. In a global illumination model this effect comes from light reflecting off the surfaces around the lit parts of the scene: that is not what you compute.

Regards,

Gaby

davepermen
10-14-2002, 05:44 AM
Originally posted by gaby:
Hmm... your effect does not exist in reality!
What you are doing looks like the shadow attenuation that results from a global illumination model... but this exact shadow-distance attenuation effect does not look like anything real. In a global illumination model this effect comes from light reflecting off the surfaces around the lit parts of the scene: that is not what you compute.

Regards,

Gaby

Pssssssssssst.
He doesn't like to hear that his technique has nothing to do with reality, pssssssssssst. Just say you like it :D

Nakoruru
10-14-2002, 05:51 AM
Gaby,

Pocketmoon did make a disclaimer at the beginning that he was going for an appealing if not accurate shadow ^_^

I now think that his hard-coded occluder distance attenuation is actually a pretty good idea. It is a lot like the way people just throw in an 'ambient light' term. I doubt very many people actually think about how there is no such thing as 'ambient light'. What I mean is that 'ambient light' is more of an aesthetic term than a scientific one.

I think that occluder distance fade is a good hack which adds in the effects of global illumination, in the same way that ambient light is a good hack.

Specular lighting is another example of a lighting equation that is completely aesthetic, with little basis in reality. I would argue that Pocketmoon's occluder distance fade is a better model of reality than specular lighting (which makes everything in Doom 3 look like it's made of plastic).

Of course, that is just my opinion, and it is also apples and oranges, so I really don't want to argue about it ;)

pocketmoon
10-14-2002, 05:52 AM
Originally posted by gaby:
Hmm... your effect does not exist in reality!

Sorry, are you saying that shadow attenuation does or does not exist in reality? Or are you just saying that my shadows don't look realistic, in which case you, Davepermen and Nakoruru can form a club ;)

Of course it's not what I compute. Using Global Illumination I would expect the shadows to look near perfect!

Nakoruru - your last reply is spot on :)
On the scale of HACK to REALITY, the Ambient+Diffuse+Specular lighting model is towards the left and so am I.

Rob

P.S.

I'll have a bash at rendering the Cornell Box :) The image below, and others on the same page, show the close-occluder darkening at the base of the opaque spheres - the effect I manage to get.
http://graphics.stanford.edu/~henrik/images/cbox.jpg

[This message has been edited by pocketmoon (edited 10-14-2002).]

davepermen
10-14-2002, 06:14 AM
Originally posted by pocketmoon:
Sorry, are you saying that shadow attenuation does or does not exist in reality? Or are you just saying that my shadows don't look realistic, in which case you, Davepermen and Nakoruru can form a club ;)


Well, the shadow attenuation does not exist in reality the way you do it at all. But we discussed that already. And still, it looks neat. I don't like it anyway, as it really is not based on any approximation formula at all.

I prefer soft shadows over attenuated shadows. Better to attenuate your light and make soft shadows. Looks far nicer to me... and then the shadow fades out like yours as well, it just fades out by blurring rather than by staying sharp :D

I don't like it. It looks nice anyway.

ToolChest
10-14-2002, 06:49 AM
Hey, it may not be scientifically true, but those shadows look a lot better than the stencil shadows that seem to be the current fad... :)

Nakoruru
10-14-2002, 08:10 AM
Actually, whether these shadows are soft or attenuated is completely orthogonal, so it's not a matter of preference, but of whether you have one, both, or neither of the effects.

I think having just attenuation (like in his first screenshots) is not very realistic. It was so unrealistic that I wondered if he had any idea what he was doing ^_^. But if you look at Pocketmoon's new screens, you will see that they are both soft and attenuated, which is fairly compelling.

Compelling enough that I've changed my mind about what he is doing (a picture is worth a thousand arguments ^_^)

I guess the final words in any graphics debate should be "Show me the screenshots!"

The next generation of cards is going to open up whole new worlds of creativity. It is nice that we can argue about which way, out of many many ways, to do a particular effect, and base those arguments on the way it --looks--. Before, we only had one or two ways to achieve an effect, and those ways were handed down by the hardware gods. And if it did not work or look exactly the way we wanted, we had to live with it.

I think that for some it will be hard to adjust to there not being just one way to do something.

Nakoruru
10-14-2002, 08:16 AM
Pocketmoon,

Are you actually headed in the direction of using deep shadow maps, created using depth peeling, to implement shadows for transparent geometry?

You needed a deep shadow map to do proper distance attenuation, correct? If you already do deep shadow maps, then maybe it would not be a large leap to do deep shadow mapping (at least, shallow deep shadow mapping ^_^)

davepermen
10-14-2002, 08:43 AM
Originally posted by Nakoruru:
But if you look at Pocketmoon's new screens, you will see that they are both soft and attenuated, which is fairly compelling.

Where is the soft shadow? (And no, the blurry shadow I see on one image is not a soft shadow, it's just a blurred sharp shadow...)

Any soft shadows around?

Miguel_dup1
10-14-2002, 08:53 AM
Who needs shadows anyway? :)

We all still live in the dark ages; well, not quite there yet, but we will get there.

pocketmoon
10-14-2002, 09:54 AM
Originally posted by Nakoruru:
Are you actually headed in the direction of using deep shadow maps, created using depth peeling, to implement shadows for transparent geometry?

In fact, that's what I wanted to do when I started the Cg demo :) You can implement n levels of shadow for the n depths you store, so yes, I could have transparent objects each occluding, say, 25% of the light.


You needed a deep shadow map to do proper distance attenuation correct? If you already do deep shadow maps, then maybe it would not be a large leap to do deep shadow mapping (at least, shallow, deep shadow mapping ^_^)
I suppose so; the papers I've seen on 'real' deep shadow maps store functions which represent the changing light level between the start (near plane?) and end of the shadow map.

My next Cg entry uses the technique for something other than shadows ;)

R.

dorbie
10-14-2002, 06:25 PM
I think it's a compelling effect. For a physical analog I think it approximates a somewhat diffuse point source with a largish indirect contribution too. For example, the sun shining through clouds on a lightly overcast day. Or a light in a room with lots of indirect illumination from reflections.

[This message has been edited by dorbie (edited 10-14-2002).]

gaby
10-15-2002, 12:33 AM
I think this effect confuses the understanding of the scene, and that is the biggest problem: in a very wide view it looks cool, but if you are walking in the scene and relying on detailed shadows to understand volumes and geometry, the shadows are unrealistic. Imagine Doom III with these shadows: you would have a lot of shadow edges with many different shadow levels. That look is typically a multi-light shadow combination, not a single light's shadow with attenuation. Stencil buffer shadows look realistic because they behave exactly as if the light were infinitely small: this is why Doom III is based on this simple algorithm. Your 'deep' shadow effect was never used in CG rendering engines; real soft shadows were the next step after hard shadows, after which we jump to GI.

That's only my opinion !

Regards,

PS : excuse me for my bad english ! ;-)

pocketmoon
10-15-2002, 01:16 AM
Originally posted by dorbie:
<SNIP> Or a light in a room with lots of indirect illumination from reflections.

Dorbie understand! Dorbie my special friend :D

GeLeTo
10-15-2002, 04:10 AM
Originally posted by dorbie:
Or a light in a room with lots of indirect illumination from reflections.


But this is not entirely correct. While close to the object the shadow is dark because the object blocks much of the indirect lighting, this effect vanishes very quickly: even slightly away from the occluder, most of the indirect lighting is not blocked. The linear attenuation in the screenshot looks more like an attempt to fake the real blurring of soft shadows by attenuating the shadow instead.

[This message has been edited by GeLeTo (edited 10-15-2002).]

davepermen
10-15-2002, 04:33 AM
Thinking about some way to do soft shadows really easily, I thought that in light space they could be easier to blur (and then just mapped back as a texture... we all know that rendering a mesh black from the light's view, blurring it a little towards the head part (as that is normally up), and projecting it onto the ground looks cute).

My idea: we need to get the shadow in light space.

So we render the distance from the light in camera space, and then use this as a projected texture in light space to determine the shadow _there_. That way we directly get a shadow texture we can back-project (though we'll still need the depth test there again). But before this back-projection we blur the shadows, according to statistical estimates of where each edge shades how much. That is (methinks) very simple to set up in light space, and could give quite good results...

Anyone? :D

dorbie
10-15-2002, 04:47 AM
Dave, I think you need to rephrase. It sounds like you're talking about blurring a depth map, or having some kind of sliced-image approach. The original percentage closer filtering paper had lots of info on this (it is basically about how you blur a depth map based on those relationships). There are also other shadow texture (not shadow map) approaches which can convolve the image, but fundamentally they seem incompatible.

So... when you say blur the light-space map, you still need some test to determine where the map will get applied; you cannot simply project it, because there is no test. As such you probably need multiple convolved light-space maps (shadow *textures*). An approach like this has in fact been published.

dorbie
10-15-2002, 04:55 AM
P.S. I see you mention applying the depth test again after the filter. The devil's in the details here when you think about how this test will interact with the filter; then there's the prospect of overlapping filters at different depths, i.e. multiple samples. The fundamental problem seems to me to be that the test to apply the image after you've convolved is also the filter. This is a pretty fundamental issue which will involve full PCF or clipping the penumbra somewhere between deep shadow and full light.

Nakoruru
10-15-2002, 05:37 AM
Gaby, all soft shadows really are is point-light shadows done multiple times. All you need to do soft shadows, once you can do hard shadows, is more speed. I.e., if I have the hardware to do 100 lights casting shadows in a scene, my scene would probably look better with 10 lights casting shadows using 10 samples each.
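
To spell that out (a toy C sketch; hard_shadow stands in for whatever single-sample shadow test you already have, and is faked here just so the example runs):

#include <stdio.h>
#include <stdlib.h>

typedef struct { float x, y, z; } vec3;

/* placeholder: 1.0 = this jittered light position reaches the point */
static float hard_shadow(vec3 light_pos, vec3 p)
{
    (void)p;
    return (light_pos.x > 0.0f) ? 1.0f : 0.0f;   /* fake occluder covering half the light */
}

static float soft_shadow(vec3 light_center, float radius, vec3 p, int samples)
{
    float lit = 0.0f;
    for (int i = 0; i < samples; ++i) {
        vec3 s = light_center;                    /* jitter across the emitter's area */
        s.x += radius * (2.0f * rand() / RAND_MAX - 1.0f);
        s.y += radius * (2.0f * rand() / RAND_MAX - 1.0f);
        lit += hard_shadow(s, p);
    }
    return lit / samples;                         /* fraction of the light reaching p */
}

int main(void)
{
    vec3 light = { 0.0f, 10.0f, 0.0f }, ground = { 0.0f, 0.0f, 0.0f };
    printf("visibility: %.2f\n", soft_shadow(light, 1.0f, ground, 64)); /* ~0.5 */
    return 0;
}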

Global Illumination (GI) is not really a step or even a jump from soft shadows. It is a completely different beast, done in fundamentally different ways from the raster rendering that hardware is designed to handle.

The main reason for my pessimism is that GI's complexity grows faster than linearly with the number of triangles and lights in a scene. Faster than linear growth is a big enemy of being real-time.

The other reason is that hardware accelerators are fast because they work locally and they stream data. GI is fundamentally different because it needs to randomly access the scene as it distributes energy or traces light rays. The main CPU is then a better place for it.

For these reasons, I do not think GI will ever be a part of OpenGL because it requires a retained mode library, and needs to be computed by something fundamentally different than a graphics accelerator.

Of course, OpenGL is becoming more and more like a retained mode library, but it retains streams, not scene graphs.

(Please do not think that I am saying that you think GI will become a part of OpenGL. I was just stating my opinion.)

I am disputing that it is just a 'jump' from soft shadows. You could probably add less than 50 lines of code to Doom 3 and have soft shadows, and it uses only DX8-level tech. However, it would probably need next year's graphics cards to be fast enough ^_^.

The point is that hard shadows are just a special case of soft shadows. They are soft shadows with only 1 sample. There is no logical step (or jump) to GI from them.

davepermen
10-15-2002, 06:05 AM
Originally posted by dorbie:
P.S. I see you mention applying the depth test again after the filter. The devil's in the details here when you think about how this test will interract with the filter, then there's the prospect of overlapping filters at different depths, i.e. multiple samples. The fundamental problem seems to me to be that the test to apply the image after you've convolved is also the filter. This is a pretty fundamental issue which will involve full PCS or clipping the penumbra somewhere between deep shadow and full light.

Yeah, I know. But I think you got what I mean: doing the shadow test in light space, and rendering that result into the shadow map as well, is the idea. I know about the devil in the details, right where the blurring actually happens. That's why I think the occluder depth from our pocky could possibly be helpful there...

And I'm thinking of some other plans... what we could all render into the shadow map as additional info for determining distance to the edge, angle of the shadow ray around the edge, distance to the light, and the shadowing factor due to that... hehe :D

In the end, only multisampling is the real way to go, or at least it looks like that... but if a single sample could say more than just true/false, that would be helpful anyway...

pocketmoon
10-15-2002, 06:10 AM
I was pondering some sort of depth-map blurring, but with a second channel in the depth map which would hold a shadow 'alpha'.

First capture the depth map and set alpha to 1.0. Then dilate the depth map *somehow* so that depth values spread out (min depth spreading out over large depths...) and dilate the alpha map so that texels which have been 'dilated' have reduced alpha.

So a 1D depth map for a light source at +10 y and a cube at +5 y above a flat ground line would be:



Pass 1:
Depth 10 10 10 10 5 5 5 5 10 10 10 10
Alpha 1 1 1 1 1 1 1 1 1 1 1 1

Then after dilating:
Depth 10 10 5 5 5 5 5 5 5 5 10 10
Alpha 1 1 .2 .7 1 1 1 1 .7 .2 1 1


(multiple dilation passes required!)

I can see how to dilate the depth map:
d[x][y] = min( d[p][q] for p -1:1, q -1:1 )
But dilating the alpha at a[x][y] has to be based on whether d[x][y] was replaced. Fragment shader time :)
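
A possible shape for that dilation pass, written as plain C over a CPU-side map (a guess at the idea, not a tested method): each pass pulls a texel's depth to the 3x3 minimum, and any texel whose depth was pulled nearer gets its shadow alpha reduced, so the spread region reads as penumbra.

#include <string.h>

#define W 128
#define H 128

/* one dilation pass: pull each texel's depth to the 3x3 minimum, and knock
   the shadow alpha down wherever the depth was actually pulled nearer */
static void dilate_pass(float depth[H][W], float alpha[H][W], float alpha_step)
{
    static float src[H][W];
    memcpy(src, depth, sizeof src);
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            float m = src[y][x];
            for (int dy = -1; dy <= 1; ++dy)        /* 3x3 min filter */
                for (int dx = -1; dx <= 1; ++dx)
                    if (src[y + dy][x + dx] < m)
                        m = src[y + dy][x + dx];
            if (m < src[y][x]) {                    /* this texel was 'dilated' */
                depth[y][x] = m;
                alpha[y][x] -= alpha_step;          /* weaker shadow at the spread edge */
                if (alpha[y][x] < 0.0f)
                    alpha[y][x] = 0.0f;
            }
        }
}

int main(void)
{
    static float depth[H][W], alpha[H][W];
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            depth[y][x] = 10.0f;                    /* "floor" depth */
            alpha[y][x] = 1.0f;
        }
    for (int y = 60; y < 68; ++y)                   /* a small occluder at depth 5 */
        for (int x = 60; x < 68; ++x)
            depth[y][x] = 5.0f;
    dilate_pass(depth, alpha, 0.3f);                /* more passes -> wider, softer spread */
    return 0;
}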

I have no idea if this would work, but if it were feasible it would need a deep shadow map - if you had an object below the cube but within its penumbra AND below the table, it would appear as shadowed by the penumbra rather than by the table.

In my current Cg shader I just sample my deep shadow map to get 4 depths, d.xyzw, and compare against them, starting with the first occluder (nearest the light, stored in w), until I find one the current fragment depth is behind. This would have to change to sample the shadow alpha as well and continue checking (.z, .y and .x) while the alpha < 1.0, since the fragment may be inside one occluder's penumbra BUT then fully or partially occluded by another surface further from the light.
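
For what it's worth, one way to read that lookup as C (a sketch of the idea, not the actual Cg; this version keeps the occluder closest to the fragment, on the assumption that that layer should drive the contact darkening, and the real shader may well walk the channels differently):

#include <stdio.h>

/* returns receiver-to-occluder distance, or -1 if nothing occludes;
   d[0] = layer nearest the light (the .w channel), d[3] = furthest */
static float nearest_occluder_dist(const float d[4], float frag_depth, float bias)
{
    float best = -1.0f;
    for (int i = 0; i < 4; ++i)
        if (frag_depth > d[i] + bias && d[i] > best)
            best = d[i];                      /* deepest layer still in front of us */
    return (best < 0.0f) ? -1.0f : frag_depth - best;
}

int main(void)
{
    float layers[4] = { 0.30f, 0.55f, 0.78f, 1.00f }; /* arch, ball, ..., 1.0 = empty */
    /* ground fragment at 0.80: the ball (0.78), not the arch, drives the fade -> 0.02 */
    printf("%.2f\n", nearest_occluder_dist(layers, 0.80f, 0.01f));
    return 0;
}

The returned distance can then be fed into an attenuation curve like the one sketched earlier in the thread.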

:) This could be the caffeine talking.


[This message has been edited by pocketmoon (edited 10-15-2002).]


gaby
10-15-2002, 08:09 AM
Nakoruru,

Yes, I know all that: I have studied a lot of the algorithms that have been developed in computer graphics... When I said "jump to GI", I meant that if we want to go beyond raytraced soft shadows to enhance the realism of a picture's lighting, we must go the GI way. I know that it is a global approach, which is not, at this time, the approach of OpenGL or of raster-based graphics chips. That's not the case with ART hardware, for example, or massively parallel processing hardware, which are better suited to a global rendering approach and a general-purpose architecture. But at this time, that hardware is not used in real-time applications. I hope that in a few years graphics chips will become general-purpose oriented, like the 3Dlabs one... But solutions might come from a hybrid approach...

Regards,

Gaby

davepermen
10-15-2002, 09:01 AM
Gaby, I'm currently working on the shift from rasterizing to raytracing fully on hardware. R300 and NV30 will suck at it, but basically they should already be able to implement my interface (though a full software tracer could implement it far better, and possibly beat them in speed). The interface is rather simple, and the optimizations a GPU could get to make the whole thing blindingly fast (well... real-time :D) are already known. That the design fits onto today's hardware makes it nice to implement at the start on top of OpenGL (2.0), and then let hardware vendors support extensions directly for that rendering path, until we get some real hardware.

I haven't written much on paper, though... I really need coding holidays again (like a LAN party, but coding together :D)

Nakoruru
10-15-2002, 09:23 AM
Yes, a hybrid approach is what I imagine as well. Use some highly parallel "GI processor" to solve for lighting, then use the results with a rasterizing renderer. This is how people add raytracing and radiosity to RenderMan. RenderMan has a couple of points in the shading language where results from raytracers and radiosity engines can be retrieved.

Of course, for radiosity this could just be a texture map lookup, just like a lightmap in Quake. A raytracer would probably be a little more difficult.

[This message has been edited by Nakoruru (edited 10-15-2002).]

gaby
10-15-2002, 12:04 PM
davepermen,

Do you agree that an array of DSP-like processors, computing lots of ray/triangle intersections with barycentric and Phong normal interpolation per cycle, will be faster than a raster system like the R300 for raytracing?

I think yes, because the first one is made for raytracing and the second one is not. So I think that an array of ray/triangle intersection engines, combined with the current texture access mechanism and video memory structure, would be a good generic rendering solution, on which you could run Maya or RenderMan, or whatever rendering engine based on ray/triangle intersection... But it's clear that real-time is not for today! Now, my hope is that in a few years companies like NVIDIA will orient their development in this direction, which means a more programmable API with a retained mode gets built, as is slowly happening with high-level shading...

That's my hope,

Regards

Gaby

davepermen
10-15-2002, 12:27 PM
I'm stating that even a simple P4 can outperform the R300 with my design.
I just stated that my design fits on an R300, which is a good base for expanding future GPUs with extensions that optimize for that design. There are lots of optimizations possible (on the GPU only) that would boost the performance of my design by a factor of 10, 100, 1000, 10000000000000000000000000000000000000000 :D

No, I mean: getting the number of needed intersection tests, ray processes and pixel shades down to a minimum can be done quite easily in hardware, but not in software...

But the design fits... like OpenGL fits around rasterizing, no matter how it's implemented in hardware or software...

And it fits on today's GPU hardware. _That_ is important, as it may get GPU developers to add little GL extensions that each help a tiny bit to boost the whole thing... GPU developers _do_ like raytracing; both NVIDIA and ATI have stated they _want_ to do it, one day... they are just scared of the big step. I will make that step unneeded...

Sounds great, no? Well, I feel great at the moment, so I'm babbling a little. But I'm optimistic about my approach anyway :D

If I get support, raytracing done properly in hardware is possible in... 3 years.

Waiting to get a credit card to buy a VapoChill, waiting for a 3 GHz P4 to put in the VapoChill, and then waiting to get enough money together for an additional Radeon 9700. Then I can set up my tiny API design structure for a little demo... :D

davepermen
10-17-2002, 11:03 AM
pocketmoon

What you have per pixel for each shadow is the distance to the occluder, and you fade according to this. How about blurring according to it as well? This is essentially what we talked about in other threads, and we thought it would look good... Of course, you need the surface normal per pixel to blur in the right directions and dimensions, but I'm sure you could get that. It would look cool, I bet... and you could even fade them as well :D

pocketmoon
10-17-2002, 11:39 PM
Originally posted by davepermen:
pocketmoon

What you have per pixel for each shadow is the distance to the occluder, and you fade according to this. How about blurring according to it as well? This is essentially what we talked about in other threads, and we thought it would look good... Of course, you need the surface normal per pixel to blur in the right directions and dimensions, but I'm sure you could get that. It would look cool, I bet... and you could even fade them as well :D

Yes, I do expand my filter size based on the occluder depth, but not by much. Also, the screenshots are low-res, so it's hard to see the blurring. I think the key, as you say, is sampling in the correct 'dimension'. The first demo sampled in texture space; the current demo samples in screen space (ddx, ddy). Both give artifacts when the scale gets big enough to create a penumbra (rather than just softened edges). I should be sampling in tangent space, but I can see that your suggestion of using the surface normal should work :)
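
Roughly, that kind of lookup might look like this (a hypothetical C sketch with made-up helpers, not the demo's Cg): the kernel radius grows with the receiver-to-occluder distance, so the shadow gets blurrier, and could also fade, the further it falls from its caster.

#include <stdio.h>

#define SIZE 512

typedef struct { float depth[SIZE][SIZE]; } ShadowMap;

/* clamped nearest-texel fetch from a light-space depth map */
static float fetch(const ShadowMap *sm, float u, float v)
{
    int x = (int)(u * (SIZE - 1)), y = (int)(v * (SIZE - 1));
    if (x < 0) x = 0; if (x >= SIZE) x = SIZE - 1;
    if (y < 0) y = 0; if (y >= SIZE) y = SIZE - 1;
    return sm->depth[y][x];
}

static float soft_lookup(const ShadowMap *sm, float u, float v,
                         float frag_depth, float bias, float blur_scale)
{
    float dist = frag_depth - fetch(sm, u, v);     /* receiver-to-occluder distance */
    if (dist <= bias)
        return 1.0f;                               /* centre tap is lit */
    float radius = blur_scale * dist;              /* wider kernel for distant occluders */
    float lit = 0.0f;
    for (int j = -2; j <= 2; ++j)                  /* 5x5 kernel */
        for (int i = -2; i <= 2; ++i)
            lit += (frag_depth <= fetch(sm, u + i * radius, v + j * radius) + bias)
                       ? 1.0f : 0.0f;
    return lit / 25.0f;                            /* fraction of taps that are lit */
}

int main(void)
{
    static ShadowMap sm;                           /* static: 1 MB, keep it off the stack */
    for (int y = 0; y < SIZE; ++y)
        for (int x = 0; x < SIZE; ++x)
            sm.depth[y][x] = (x > 200 && x < 300) ? 0.4f : 1.0f;  /* a strip occluder */
    /* near the strip's edge: ~0.40, i.e. a penumbra value */
    printf("%.2f\n", soft_lookup(&sm, 0.394f, 0.50f, 0.9f, 0.01f, 0.02f));
    return 0;
}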

I'm working on a new Cg demo, using shadow volumes for something other than shadows ;) so I'll probably return to multi-sample shadow maps after that.

I'm at work now and my home broadband has been out due to the rain! It's back on today, so when I get home I'll post the latest shader code and perhaps Nakoruru can have a go at implementing surface-normal-space sampling :)

Nakoruru
10-18-2002, 05:51 AM
I might have a go at it if I knew what you meant ^_^

I am having trouble imagining what you mean by sampling in tangent space or using the surface normal. Which surface normal? The occluder or the surface being rendered?

It is interesting that you blur in screen space, but wouldn't you end up with weird artifacts, like when a character is in the water in Super Mario Sunshine and Star Fox Adventures... have you noticed how much screen-space distortion/blurring is used on the GameCube? It must be really easy to use the framebuffer as a texture on that system. Anyway, because of the way they do it, things in and out of the water seem to get smeared in a weird way. I would be afraid that blurring in screen space would smear colors where they do not belong.

I guess you mean perhaps to project the sample kernel onto the surface? I.e., if you drew the sample points onto the surface, they would look like they were lying flat on the surface instead of oriented like a billboard?