PDA

View Full Version : omni-directional lights and shadow maps



flamz
09-13-2003, 12:54 PM
Anyone know of a good technique (or paper/tutorial) for implementing shadow maps with omni-directional lights, i.e. point lights?

I can only find information on implementing them with spotlights.

tx!
Flamz

SirKnight
09-13-2003, 01:01 PM
You will either have to render your scene 6 times (using a cubemap) for shadow maps with an omni-directional light (each render would use a spotlight), or use dual-paraboloid shadow mapping.
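The six-render setup boils down to pointing a 90-degree "spotlight" camera down each cube-map face axis. A minimal sketch of that bookkeeping (plain Python math, not actual GL code; function names are mine, but the face order and orientations follow the usual +X, -X, +Y, -Y, +Z, -Z cube-map convention):

```python
# Standard cube-map face orientations: one (forward, up) pair per face,
# in +X, -X, +Y, -Y, +Z, -Z order. Each face is rendered with a
# 90-degree FOV and 1:1 aspect so the six frusta tile the full sphere.
CUBE_FACES = [
    (( 1, 0, 0), (0, -1,  0)),  # +X
    ((-1, 0, 0), (0, -1,  0)),  # -X
    (( 0, 1, 0), (0,  0,  1)),  # +Y
    (( 0,-1, 0), (0,  0, -1)),  # -Y
    (( 0, 0, 1), (0, -1,  0)),  # +Z
    (( 0, 0,-1), (0, -1,  0)),  # -Z
]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def view_basis(face):
    """Right-handed camera basis (side, up, -forward) for one cube face.

    In a real renderer you would build a look-at matrix from this basis
    plus the light position, set a 90-degree projection, and render the
    scene's depth into the corresponding cube-map face -- effectively
    six spotlight shadow passes per light.
    """
    forward, up = face
    side = cross(forward, up)
    true_up = cross(side, forward)
    return side, true_up, tuple(-c for c in forward)
```

The per-face loop is then just: for each face, bind that face as the render target, set the view matrix from `view_basis`, and draw the scene depth.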


-SirKnight

flamz
09-13-2003, 02:58 PM
tx.
I read up on dual-paraboloid environment mapping.

Any good shadow mapping demos out there showing the results of the dual-paraboloid technique?

Humus
09-13-2003, 03:23 PM
Personally I recommend the cubemap method, as it's very simple. I have several demos on my site that use it, for instance "Phong Illumination", "Shadows that rocks" and "Shadows that don't suck". http://esprit.campus.luth.se/~humus/

There's also an article on that site covering Phong illumination; it includes shadowing and uses the cubemap shadow map method.

SirKnight
09-13-2003, 04:24 PM
There is a demo of dual paraboloid shadow mapping on nehe's site under the downloads section. It was based on a demo from one of those Delphi OpenGL sites, I forget which one exactly. What's nice about this technique is that you only need to render your scene two times per light to compute the shadows, instead of 6 with the cubemaps. But I'm sure there are times when it would be better to use the cubemap way, though I'm not quite sure what those are yet. ;)


-SirKnight

bunny
09-13-2003, 04:34 PM
Originally posted by Humus:
Personally I recommend the cubemap method, as it's very simple. I have several demos on my site that use it, for instance "Phong Illumination", "Shadows that rocks" and "Shadows that don't suck". http://esprit.campus.luth.se/~humus/

There's also an article on that site covering Phong illumination; it includes shadowing and uses the cubemap shadow map method.

I'm not convinced that adding 6 rendering passes per light scales up particularly well in a real application.

Ozzy
09-13-2003, 10:02 PM
As Humus was saying, cubemaps are simpler to implement and the result is a bit more accurate than dual-paraboloid.
But you're right, that's 6 vs. 2 passes.
My conclusion: implement both, and leave the possibility to switch from one to the other depending on hardware & config. ;-)

Humus
09-14-2003, 06:49 AM
Originally posted by bunny:
I'm not convinced that adding 6 rendering passes per light scales up particularly well in a real application.

On the other hand, you need to use higher resolution textures with dual-paraboloid to match the quality of the cubemap method. Also, not all the space in the texture is actually used; some is wasted. The shaders are more complex with dual-paraboloid, you might need to tessellate your geometry to work around errors from the non-linear transformation, and the linear filtering in dual-paraboloid shadow maps generates much higher errors than it does in cubemaps.

The only reason to use dual-paraboloid over cubemaps, as I see it, would be if you're very geometry-limited and the additional passes would slow you down. If you're fillrate-limited, then definitely use cubemaps. They are faster to use in your lighting shaders, and the burden of rendering to them, fillrate-wise, is pretty much the same for an equal quality level.

jwatte
09-14-2003, 08:49 AM
> adding 6 passes per light

You only get 6 passes across the range of the light, which shouldn't be that far. You also don't need to do shadows for all lights, only for lights over some specific brightness within your scene.

Well, unless you have 30 point lights, all with radius 2000 meters. If you have that, you're screwed, no matter what :-)

Getting really good PVS and occluders into your scene graph really helps at this point, btw.

Tom Nuydens
09-15-2003, 03:15 AM
The main problem with DPSM is that you need to tessellate your scene very finely to avoid horrible errors in your depth maps. If your scene isn't well-tessellated already, doing this can make the two DPSM passes more expensive than the six cube map passes. In that case, you might want to do quite the opposite of what Humus said: only use DPSM if you're not geometry-bound.
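The tessellation requirement comes from the projection itself. Here's the core of the paraboloid mapping sketched in Python (names are mine; this is the standard dual-paraboloid math, not code from any of the demos mentioned):

```python
import math

def paraboloid_project(p, direction=1.0):
    """Map a light-space point onto one dual-paraboloid shadow map.

    `direction` is +1 for the front hemisphere's map, -1 for the back.
    The point is normalized to a direction, then mapped to 2D with a
    divide by (1 + z). That divide is the non-linear part: straight
    triangle edges become curves in the map, but the rasterizer still
    interpolates vertex attributes linearly across each triangle, so
    large triangles produce big depth errors -- hence the need for
    fine tessellation.
    """
    dist = math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)
    dx, dy, dz = (c / dist for c in p)
    z = direction * dz
    # z <= 0 means the point belongs to the other hemisphere's map.
    u = dx / (1.0 + z)
    v = dy / (1.0 + z)
    return (u, v), dist  # map coords in [-1, 1], plus the depth to store
```

A point straight ahead of the light lands at the map center, and a point at 90 degrees off-axis lands exactly on the unit-circle edge; everything in between is compressed non-linearly, which is what the per-vertex GPU transform cannot hide on coarse meshes.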

I would pretty much prefer cube maps in all situations, if it weren't for the fact that cube map depth textures are simply not allowed by the spec. This means you have to resort to some sort of cheat (e.g. encoding Z into RGB with a fragment program as Humus does) to make cube maps work. As a result, you can use DPSM on Radeon 8500/GeForce 3, but cube maps only work on Radeon 9500/GeForce FX.

-- Tom

PS: The DPSM demo SirKnight was referring to is mine, at http://www.delphi3d.net/download/dpsm.zip (GeForce-only, sorry).

Obli
09-15-2003, 11:20 AM
Originally posted by Tom Nuydens:
...I would pretty much prefer cube maps in all situations, if it weren't for the fact that cube map depth textures are simply not allowed by the spec...
Uh-oh. I'm pretty disappointed to hear that. I'll have to come up with something serious to work around this (taking a look at Humus' projects ;)).

Korval
09-15-2003, 12:06 PM
This means you have to resort to some sort of cheat (e.g. encoding Z into RGB with a fragment program as Humus does)

That is not a cheat; that's standard procedure (to me). Or, at the very least, it is a good alternative to ARB_shadow method of doing it.

rgpc
09-15-2003, 06:35 PM
Originally posted by Korval:
That is not a cheat; that's standard procedure (to me). Or, at the very least, it is a good alternative to ARB_shadow method of doing it.

Hmmm, I wonder if there's a way you could encode 3 sides of your cube map into one RGB texture (or 4 into RGBA), without the obligatory ReadPixels()...

Then you could use each channel as if it were a cube map side. It'd be more efficient than a cube map (memory-wise), but I think the steps you'd have to go through would probably be a bit costly (plus I'm not sure you could use the resulting texture this way).

Alternatively you could store four point lights in a single RGBA cube map...

(Just throwing this out there - I have no idea how it could actually be implemented.)

davepermen
09-15-2003, 10:12 PM
rgpc, you normally need higher precision depth maps for shadows than just 8 bits, so you use several of the components of the cubemap to store the depth values..

Theoretically, we could use one simple texture with GL_LUMINANCE in a 32-bit float format.. but no, the vendors messed that up :D

So we have to encode the float32 into RGBA and decode it back to float32. Happily, this is quick code: 1 instruction to decode, one to encode, if I remember right. The bad part is that it's lossy: it actually loses the exponent and only stores the mantissa.. hm.. it could be done differently.. hehe :D but that's more costly.
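To make the packing concrete, here is a sketch of the fixed-point flavour of this trick, done on the CPU in Python for clarity (function names are mine). In a fragment program the encode is a multiply plus a fract per channel and the decode is a single dot product with (1/256, 1/256², 1/256³, 1/256⁴), which roughly matches the instruction counts mentioned above:

```python
def pack_depth_rgba8(depth):
    """Pack a depth value in [0, 1) into four 8-bit channels.

    Each channel holds successively finer bits of the fraction, giving
    ~32 bits of mantissa with no exponent, i.e. uniform precision over
    [0, 1) -- which is fine for shadow depth, since it is already
    normalized to the light's range.
    """
    channels = []
    for _ in range(4):
        depth *= 256.0
        byte = min(int(depth), 255)  # floor(); fract() keeps the rest
        channels.append(byte)
        depth -= byte
    return tuple(channels)

def unpack_depth_rgba8(rgba):
    """Reassemble the depth: a dot product with per-channel scales."""
    return sum(c / 256.0 ** (i + 1) for i, c in enumerate(rgba))
```

The round trip is exact to within 256^-4 ≈ 2.3e-10, far below what an 8-bit-per-channel shadow comparison can resolve anyway.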

Tom Nuydens
09-15-2003, 11:22 PM
Originally posted by Korval:
That is not a cheat; that's standard procedure (to me). Or, at the very least, it is a good alternative to ARB_shadow method of doing it.

I'm not saying it's a bad thing to do... What was it Blinn once said? "A technique is a cheat that you use more than once"? Cheats are a Good Thing!

But they're still cheats. ;)

-- Tom

rgpc
09-16-2003, 02:32 AM
Thanks for the clarification, dave. Not having implemented shadow buffers, I'm still thinking in ubytes. ;)


Originally posted by SirKnight
There is a demo of dual paraboloid shadow mapping on nehe's site under the downloads section. It was based on a demo from one of those Delphi OpenGL sites, I forget which one exactly. What's nice about this technique is that you only need to render your scene two times per light to compute the shadows, instead of 6 with the cubemaps. But I'm sure there are times when it would be better to use the cubemap way, though I'm not quite sure what those are yet.


I've just been looking at the nVidia "Simple render to depth texture" example in their SDK. The precision issue is hidden from you in that implementation. But from playing around, one drawback I could see with the dual-paraboloid method is that the artifacts would probably be horrendous. The demos on NeHe look OK, but they use very small environments (and the first one has very jagged shadows - it also looks like the two paraboloids aren't lined up correctly).

Tom Nuydens
09-16-2003, 10:27 AM
rgpc, those demos on NeHe are based on my own demo, but browsing through the code it looks like they simply don't tessellate the scene at all. Good tessellation is absolutely crucial for DPSM to work well, so it's no wonder those demos look bad.

JustHanging posted some information about how he implemented DPSM in his engine, and how he tackled the tessellation problem, in this thread (http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/010233.html) .

Oh, and about cube depth maps. When I called the Z->RGB packing a cheat, I didn't want to imply that allowing depth format cube maps would just make the problem go away. It doesn't, because as the ARB_depth_texture spec states, you can't use the R texture coordinate to compare against in the case of a cube map. This means that even if you had cube depth maps, you still couldn't use them on GF3-class hardware, because you'd need a fragment program to get your eye space depth in there somewhere and compare it to the depth map sample.

-- Tom

rgpc
09-17-2003, 03:33 AM
Yes, I can see the similarity between the NeHe demos and yours (and I could see where they've just commented out your tessellation code).

One question I do have about shadow maps: can you render directly to a texture using just the ARB extensions? I know you can do it with an NV extension, but I was surprised that this didn't seem to be catered for in the ARB extension specifications.

The reason I find that a bit alarming is that I was messing around with my render-to-texture code on the weekend, and I found that if I rendered to a pbuffer and then did a CopyTexSubImage(), it was HEAPS slower than just rendering to the texture directly. I haven't found anything out of the ordinary that might cause this...

Not so much of a problem because I have NV hardware but what do you do for ATI? Is there an ATI equivalent or does the CopyTexSubImage work OK?

[EDIT]

To clarify what I mean by HEAPS I get...

980fps with using the window and CopyTexSub
680fps with render to texture
170fps with pbuffer and CopyTexSub

Having just played around with it, it might be that there's a mismatch between my pixel formats (from the pbuffer to the window/texture). What's the best way to confirm the pixel formats are the same? (I'm asking for the same depths etc., but I know I can't guarantee they actually are.)

[This message has been edited by rgpc (edited 09-17-2003).]

flamz
09-18-2003, 07:49 AM
OK, first, lots of thanks for all your suggestions. This has helped me a lot.

I've implemented it using the cube map approach; dual-paraboloid was not accurate enough for my application.

One more question: when I render the shadow map, I render the scene from the light's point of view in 6 different directions (as if I had 6 spotlights). What should the FOV angle be for these "virtual" spotlights? I was guessing 45.0 (90/2), but I get alignment errors with the top and bottom shadow maps... anyone know why??

JustHanging
09-18-2003, 08:26 AM
It should be 90 degrees.
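The reason is that each cube face spans 90 degrees of the sphere, so the *full* FOV (not the half-angle) must be 90. A one-line check (function name is mine):

```python
import math

def face_half_extent(fov_degrees):
    """Half-width of a cube face's view plane at unit distance."""
    return math.tan(math.radians(fov_degrees) / 2.0)

# With a 90-degree FOV (and 1:1 aspect), tan(45 deg) = 1: the frustum's
# side planes lie exactly at x = +/-z and y = +/-z, so each face covers
# exactly one sixth of the sphere and adjacent faces meet seamlessly at
# the cube edges. A 45-degree FOV only reaches tan(22.5 deg) ~= 0.414 of
# the way to each edge, leaving gaps between faces -- which shows up as
# the misalignment described above.
```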

Humus
09-18-2003, 12:05 PM
Originally posted by davepermen:
rgpc, you normally need higher precision depth maps for shadows than just 8 bits, so you use several of the components of the cubemap to store the depth values..

Theoretically, we could use one simple texture with GL_LUMINANCE in a 32-bit float format.. but no, the vendors messed that up :D

So we have to encode the float32 into RGBA and decode it back to float32. Happily, this is quick code: 1 instruction to decode, one to encode, if I remember right. The bad part is that it's lossy: it actually loses the exponent and only stores the mantissa.. hm.. it could be done differently.. hehe :D but that's more costly.

Actually, you can use an R32f texture, or even an R16f works fine. It's exposed in DX at least, and nothing in the interface prevents the same in OpenGL, though I'm not sure drivers support it yet.