Adaptive Shadow Maps

Has anyone seen an implementation of and/or a tutorial for Adaptive Shadow Maps? The paper explaining them is one of the poorest academic papers for explaining what they are actually doing, and that is saying a lot.

Why do I always have to read a graphics paper 3 or 4 times before I even begin to understand how I might implement the algorithm myself? I thought the point of such papers was to inform :slight_smile:

Give me a PowerPoint presentation from nVidia any day over this:
http://www.graphics.cornell.edu/~kb/publications/ASM.pdf

EDIT: In my search for a better explanation, I found a better algorithm. I was on a mental track that might have led me to this solution. Glad someone got there before me; there were a lot of details I was not prepared to work out.
http://www-sop.inria.fr/reves/publications/data/2002/SD02/PerspectiveShadowMaps.pdf

[This message has been edited by Nakoruru (edited 07-29-2002).]

I was going to suggest the same paper :slight_smile: It was published in siggraph 2002 (this year). Now all you have to do is figure out how to quickly generate the post projection perspective of the light map and hope it works.

Err… any discussion on this is welcome :-).

An interesting article, but not a word about the actual implementation. And if they worked on a dual 1 GHz computer, no wonder their CPU-intensive method ran considerably fast. I don't know, computing the convex hull and all that geometric stuff sounds slow.

I do not think the convex hull stuff is required. It's just their method of making sure that off-screen occluders get considered. In other words, it's just visibility determination. I bet a BSP tree could be used, and would be considerably faster, even if it's not as exact.

For the most part, the algorithm is completely hardware accelerated. And as for speed, they are graphics researchers, not game programmers (I think); what do you expect :slight_smile:

I bet this method runs at least as fast as stencil shadows. I would love to see a comparison with stencil shadows on that 23 million polygon scene. Although that is not completely fair since stencil shadows cannot handle point clouds.

This reaffirms my opinion that stencil shadows are Doomed.

Can anyone tell me how to store polygon IDs in the frame buffer? I read that the author used polygon IDs in the paper.
I tried to assign the IDs instead of the color values, but when I read them back from the buffer, the values had changed. Here is part of my code. Thanks a lot…

float idArray[40000];
int polygonID = 340; // example number

glDisable(GL_LIGHTING);
glBegin(GL_TRIANGLES);
glColor3f(polygonID*0.001, polygonID*0.001, polygonID*0.001);
glVertex3f( 0.0, 50.0, 0.0);
glVertex3f(-50.0, -50.0, 0.0);
glVertex3f( 50.0, -50.0, 0.0);
glEnd();

glReadBuffer(GL_BACK);
glReadPixels(0, 0, 200, 200, GL_RED, GL_FLOAT, idArray);

int resultID = (int)(idArray[20000] * 1000);

Originally posted by ejeng:
[b]I tried to assign the IDs instead of the color values, but when I read them back from the buffer, the values had changed. Here is part of my code. Thanks a lot…
[…]

glColor3f(polygonID*0.001,polygonID*0.001,polygonID*0.001);
[…]
[/b]

You cannot convert from ID to color just by multiplying by 0.001, you have to retrieve the colorbuffer bitdepth (glGet). Even if you do that, using the colorbuffer is a very problematic way of assigning IDs, as it can cause non-representability errors (your ID is not representable in the colorbuffer) and roundoff errors (if the geometry is clipped the color value may shift to a different ID).

In short, use the stencil buffer to assign IDs.
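As a sketch of the stencil route (a fragment, not a complete program; it assumes a context with an 8-bit stencil buffer, which limits IDs to 0..255, and the ID value 42 is just an example):

```c
/* Tag each triangle with an ID in the stencil buffer instead of the
 * colorbuffer, then read the IDs back as integers. No roundoff: stencil
 * values are never interpolated or blended. */
GLubyte ids[200 * 200];

glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); /* write the ref value where the test passes */

glStencilFunc(GL_ALWAYS, 42, 0xFF);        /* 42 = this triangle's ID */
/* ... draw the triangle ... */

glReadPixels(0, 0, 200, 200, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, ids);
```

You call glStencilFunc once per polygon with its ID as the reference value, so this costs one state change per polygon, but the readback gives you exact integer IDs.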

[This message has been edited by evanGLizr (edited 08-04-2002).]

> http://www-sop.inria.fr/reves/publications/data/2002/SD02/PerspectiveShadowMaps.pdf

That’s one of the most exciting papers from this year’s SigGraph, IMO. I read the pre-print (to get a jump on the people going to San Antonio :slight_smile: and it seems fairly robust.

Couple that with the trick of rendering the back-facing surface into the depth buffer for shadow mapping, and shadow maps do indeed look attractive.

The major draw-back of shadow maps is either the need for Very Large maps, or accepting that you’ll have to fade shadow maps out when you get too far from the camera.

evanGLizr:

Thank you for the reply.
After trying hard to make it work, I found that the stencil buffer is not working on my machine. Is there any other method I can use to assign triangle IDs to the frame buffer without hardware acceleration?
Thanks again…

evanGLizr:

Sorry! I found that my machine does support the stencil buffer.
I implemented triangle IDs and it works. Thanks…

Originally posted by ejeng:
[b]evanGLizr:

Thank you for the reply.
After trying hard to make it work, I found that the stencil buffer is not working on my machine. Is there any other method I can use to assign triangle IDs to the frame buffer without hardware acceleration?
Thanks again…[/b]

It’s strange that your OpenGL implementation doesn’t support stencil; even the Microsoft software implementation has 8 bits of stencil :?.
FWIW, in some NVIDIA cards you have to be using 32bpp colorbuffer to be able to get stencil.

Anyway, if you have to use the colorbuffer for polygon IDs, then you have to retrieve the bit depth with glGetIntegerv(GL_RED_BITS, &dwRedBits) and then set the color to ((float) ID / ((1 << dwRedBits) - 1)) using flat shading. The calculation assumes that your color buffer uses a fixed-point format and that ID cannot be bigger than (1 << dwRedBits) - 1.

Note that you should add a bias to that color in order to try to avoid roundoff errors in clipped geometry. Chances are that this is not needed because you use flat shading, but you never know if a graphics card implements flat shading as smooth shading with the same color for all the vertices and, hence, performs the color interpolation as normal, being prone to roundoff errors on clipped geometry.

You could also concatenate several color channels in order to get a larger polygon ID range, but note that the bit depth of each color channel may not be the same (if you are in a 16-bit 565 color mode, for example).

Just remembered: There’s a demo at NVIDIA http://developer.nvidia.com/view.asp?IO=Shadow_Mapping_ogl on indexed shadowmaps. Note that this demo stores the polygon ID in the alpha channel and uses smooth shading without adding the color bias per vertex, so it may go wrong depending on the graphics card (roundoff errors in clipped geometry from the light viewpoint).

[This message has been edited by evanGLizr (edited 08-05-2002).]

I don't think shadow volumes are doomed. Actually, if you think their problem is quality and not speed, then coupled with post-processing of the shadowed scene the shadows can be made soft, and without jittering the light. But they are indeed slower than shadow maps. But when you need to dynamically update the shadow maps of many lights, and maybe cube maps, you can raise the fill rate to be the bottleneck, and then you are better off with shadow volumes. I think shadow maps are great for an outdoor scene; look at the SpeedTree demo.

Originally posted by okapota:
[b]but when you need to dynamically update the shadow maps of many lights, and maybe cube maps, you can raise the fill rate to be the bottleneck, and then you are better off with shadow volumes. (…)
[/b]

The most limiting factor with stencil shadow volumes is fill rate (with today's hardware, even with a GF3/GF4). If the creation of your shadow map has fill-rate problems, then your shadow volumes will consume even more fill rate.
So how are they supposed to be faster than shadow maps?

IMHO, it's definitely faster to generate shadow maps than shadow volumes (SM: just render flat-shaded polys into the depth buffer, really easy; SV: silhouette detection, capping problems, rendering every volume twice), but the big problem with shadow maps was the shadow-map resolution problem (and of course depth-buffer aliasing problems when the near plane was too near).

This problem seems to be eliminated with this new method… and this could be the end of shadow volumes.

Are shadow maps also faster than shadow volumes when using point lights? For one point light you need to render the shadow map 6 times. I can’t imagine that this is faster than drawing the volumes but I haven’t tested it. Does anyone know more about the performance?

Because of that, a new method for doing shadow maps is introduced in this new SIGGRAPH paper.
That's what we are talking about, right?

This method sounds pretty CPU-intensive. And they didn't explain exactly how it works, or how to use this post-perspective space.

The technique sounds really nice but of course like in nearly every graphics paper they don’t have any implementation details.

Does anyone know how exactly the geometry and the light source are moved into post-perspective space? Does that mean that you have to manually multiply the modelview matrix with the projection matrix to bring your geometry into that space?

And when the shadow map is created, how is it projected onto the scene for a point light?

Thanks
LaBasX2

More information about the calculation of the post perspective matrix can be found here:
http://www-sop.inria.fr/reves/research/psm/

great. looks very good, i think i’ll try it out tonight.

In the paper they said that transforming to post perspective space is a linear transformation and can therefore be done in a 4x4 matrix, so it can be hardware accelerated.

Why have two people now said that this sounds CPU intensive? It seems completely hardware accelerated to me.

I see the convex hull aspect of this paper as just them being complete and scientific. They stated that such-and-such geometry, in a certain area, is all that is capable of casting a shadow. The takeaway is that any visibility determination method which includes all this geometry will give correct results; it does not have to be exact or slow.

The reason why shadow volumes are doomed is that shadow maps are very general. Shadow maps will shadow anything opaque that can be rendered to the framebuffer. You do not have to write special code to shadow point sprites or displacement maps or curves. You just draw your geometry and it will be shadowed.

The other big reason is that on next generation hardware you should be able to write pixel shaders that do multiple lights in one pass. How do you write a stencil shadow algorithm that fits that scheme? Any method I can imagine begins to look a LOT like shadow maps (creating textures from stencil buffers), but with a lot of added complexity.

Would I be correct in saying that this method only requires one shadow map be generated per light for all types of lights? Directional, Omni, and Spot would all just require one shadow map per frame right?

Okapota,

Just in case my point got lost in the length of my previous post. I think that perspective shadow maps see spot lights and omni lights as just point lights using exactly the same math and setup.

Spot light fall-off would have to be handled using a texture as a filter, while omni lights would have no filter. Since they did not mention having to use cube maps, and it's obvious they did not, I am assuming that this method will not require cube mapping.

This method also handles directional lights, which I have never seen done using shadow maps at all.