success with shadow maps?

Have people here reached satisfactory results with the shadow map technique?

What about using multiple light sources? And what about doing soft shadows with this method?

I still haven’t found a nice clean trouble free way to do shadows. It’s a real pain in the ass.

I’ve come close, but not there yet.

I want them primarily for self-shadowing. If I render front-facing triangles I get artefacts on the lit side; back-facing gives artefacts on the “dark” side of the mesh.

Stencil shadows are a PITA. NVIDIA’s demo has a bunch of artefacts too, but they are arguably less offensive as they simply pop rather than z-fight across an entire surface. Doom3 also had these problems, but it’s hard to judge that product based on some alpha screenshots.

My idea, which I haven’t had time to implement is to render to two depth buffers and then average the results. Put the front facing tris in the first buffer and back facing in the second. This should give you enough of a depth difference to remove those nasty self shadow artefacts.
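The two-buffer idea above can be sketched in plain C (the GPU version would do the same per texel). This is only an illustration of the averaging step, not an implementation of the render passes; the function name is mine.

```c
#include <stddef.h>

/* Sketch of the "average two depth buffers" idea: front faces are
 * rendered into one buffer, back faces into another, and the shadow
 * map takes the midpoint. The averaged depth lies inside the object,
 * so neither the lit nor the dark side of the mesh z-fights with it. */
void midpoint_shadow_map(const float *front, const float *back,
                         float *out, size_t texels)
{
    for (size_t i = 0; i < texels; ++i)
        out[i] = 0.5f * (front[i] + back[i]);
}
```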

Multiple lights is “easy”. Just render the scene with only light A lighting it, then render it again with only light B, with additive blending enabled.
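Per pixel, the additive multipass above amounts to summing each light’s contribution with saturation, like `glBlendFunc(GL_ONE, GL_ONE)` does in the framebuffer. A plain-C analogue of one channel (function name is illustrative):

```c
/* One blend step of additive multipass lighting: add the new light's
 * contribution to what's already in the framebuffer, clamped the way
 * a fixed-point framebuffer saturates. */
float add_light_pass(float framebuffer, float light_contribution)
{
    float sum = framebuffer + light_contribution;
    return sum > 1.0f ? 1.0f : sum;
}
```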

If you’re blending the front and back buffers as I suggested to hide the artefacts then you can get to soft shadows probably. Combine the two buffers again into a new texture, but subtract the back facing values from the front facing values.

Where there are no polys you have 0; where the two are close together, like at the edge of an object, you have a number close to zero. Near the centre of an object you have a large difference. You could use these values for how dark to make the shadow using a fragment program or equivalent. While this isn’t perfect, it would hold up in many cases.
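A small sketch of that thickness-to-darkness mapping, assuming the back-minus-front difference is scaled by a tuning constant (the `scale` parameter is my addition, not from the post):

```c
/* Shadow darkness from occluder "thickness": zero where there is no
 * geometry, small near silhouette edges, large in the middle of an
 * object. `scale` controls how quickly darkness saturates. */
float shadow_darkness(float front_depth, float back_depth, float scale)
{
    float d = (back_depth - front_depth) * scale;
    if (d < 0.0f) d = 0.0f;
    if (d > 1.0f) d = 1.0f;
    return d;
}
```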

We need hardware-accelerated ray tracing. To hell with all those tricks, nothing can beat mother nature’s ways.

BTW, in the Doom Legacy demo you can see some kind of soft shadows. Does J.C. also use shadow maps?

maybe it’s the compression, gotta get that Doom III demo…

Hi,

I’m currently writing a tiny engine to see how well shadow maps perform in a real game-like situation. They possess a lot of promise as well as lots of problems; my goal is to see if the problems can be overcome, and how the result compares to the stencil-based approach.

Two main motivators are soft shadows and better adaptivity. Unlike stencil shadows, shadow maps allow for good optimizations for static lights, so in a typical scene with not too many moving lights, they could easily beat stencil shadows in speed.

The most serious problems, the way I view it, seem to be the increased texture memory usage and the aliasing issues, which limit the light sources to a relatively small range.

As for soft shadows, I’m going to try the method I posted here some time ago. For some reason I can’t find the thread anymore. Multiple lights is just a matter of multipass. Anyway, I’ll let you know if I manage to get something together.

-Ilkka

BTW, in the Doom Legacy demo you can see some kind of soft shadows. Does J.C. also use shadow maps?

Well, at least I can remember one place with a shadow that is not stencil-based but faked with an animated texture, I think (maybe it’s a shadow map, but maybe it’s just a projective texture).

[This message has been edited by Zeross (edited 03-09-2003).]

id Software have claimed that all their lights are projected textures, and all their dynamic shadows are stencil (paraphrasing).

If you have a fan blade moving and get a shadow from that, it might be made with a projected animated texture – which would still fall under “projected textures” for a light, where the light looks like, well, the inverse of a fan blade moving :)

>>>My idea, which I haven’t had time to implement is to render to two depth buffers and then average the results. Put the front facing tris in the first buffer and back facing in the second. This should give you enough of a depth difference to remove those nasty self shadow artefacts.<<<

I’m not sure if this will get rid of the artifacts. It might actually cause parts that are supposed to be lit to be shadowed.

The problem with shadow maps is sampling the depth values. Theoretically, I think the larger the buffer, the better the results; and the higher the precision of the depth buffer, the better.

What really sucks is that depth values lose their precision when you put them in a texture. I guess float textures are supposed to rescue us from these problems, but they aren’t widespread yet.
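The precision loss is easy to demonstrate numerically: squeezing a depth value into an 8-bit texture channel leaves only 256 distinct levels, so two depths that a 24-bit buffer can tell apart collapse into the same bucket and the shadow comparison fails. A small sketch (function name is mine):

```c
/* Quantize a depth in [0,1] to an integer with the given bit count,
 * the way storing it in an N-bit texture channel would. */
unsigned quantize(float depth01, unsigned bits)
{
    unsigned levels = (1u << bits) - 1u;
    return (unsigned)(depth01 * (float)levels + 0.5f);
}
```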

There is supposed to be a software solution to the aliasing problem, but it’s very slow.

I don’t understand how you can get artifacts on the rear-facing triangles when you render rear-facing. Assuming you do regular lighting cosine term, so rear-facing triangles get no light, then any imprecision on the shadow buffer WRT the rear triangles doesn’t matter; the triangles are already shadowing themselves.

You do still get artifacts on very thin geometry (like, two-sided polygons turned into two single-sided).

Originally posted by jwatte:
I don’t understand how you can get artifacts on the rear-facing triangles when you render rear-facing. Assuming you do regular lighting cosine term, so rear-facing triangles get no light, then any imprecision on the shadow buffer WRT the rear triangles doesn’t matter; the triangles are already shadowing themselves.

I’m assuming you mean “rear” as in not facing the light.
Sure you get artifacts, and they can be visible, but only when the normals are not the surface normals.
(Also if the ambient term for that shadow thing is not the same as the material ambient, I guess.)

Originally posted by V-man:
What really sucks is that depth values lose their precision when you put them in a texture. I guess float textures are suppose to rescue us from these problems, but this is not wide spread yet.

So that’s what’s been kicking my ass lately. Any more details on this? I’ve been looking at my self-shadowed chars (GF4), and wondering why the depth-precision seems so crappy even though I’m using 24-bit Z for the depth map.

I’m assuming you mean “rear” as in not facing the light.
Sure you get artifacts, and they can be visible, but only when the normals are not the surface normals.

When are the normals not the surface normals? Bump mapping? They should be shadowed anyway, since they’re actually behind something. A bump-mapped surface facing away from the light should never be illuminated (except by global illumination like ambient or something like that).

So that’s what’s been kicking my ass lately. Any more details on this? I’ve been looking at my self-shadowed chars (GF4), and wondering why the depth-precision seems so crappy even though I’m using 24-bit Z for the depth map.

You may also want to avoid using the depth buffer itself, instead computing the depth yourself, writing that value into the texture, and doing the comparisons yourself in a fragment program/register combiner. The latter way, at the very least, allows you to control the precision. Fragment programs can give you a full 24- or 32-bit comparison.
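One common way to do that is to split the depth across the 8-bit channels of an ordinary RGB texture and reconstruct it before comparing. A hypothetical CPU-side illustration (a fragment program would do the same arithmetic per pixel; names and the struct are mine):

```c
/* Pack a 24-bit depth value into three 8-bit texture channels,
 * reconstruct it, and compare at full precision. */
typedef struct { unsigned char r, g, b; } Texel;

Texel pack_depth24(unsigned depth)          /* depth in [0, 2^24) */
{
    Texel t;
    t.r = (unsigned char)((depth >> 16) & 0xFF);
    t.g = (unsigned char)((depth >>  8) & 0xFF);
    t.b = (unsigned char)( depth        & 0xFF);
    return t;
}

unsigned unpack_depth24(Texel t)
{
    return ((unsigned)t.r << 16) | ((unsigned)t.g << 8) | (unsigned)t.b;
}

int in_shadow(Texel stored, unsigned fragment_depth)
{
    return fragment_depth > unpack_depth24(stored);  /* 1 = shadowed */
}
```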

Not only that, I rather prefer knowing in my program which lights affect the surface and which don’t. That way, I am not forced to outright cull the pixel; I can simply apply ambient light to that pixel. Also, it allows multiple lights to deal with each other correctly.

>>>When are the normals not the surface normals?<<<

When you tell them to
Badam bum bum!

>>>You may, also, want to avoid using the depth buffer itself, but instead computing the depth yourself and writing that value into the texture, and doing the comparisons yourself in a fragment program/register combiner. The latter way, at the very least, allows you to control the precision. Fragment programs can give you a full 24 or 32-bit comparison.<<<

Isn’t it possible to copy the depth buffer directly to a float texture?
Calculating it yourself will be slow. If it were possible to benefit from 64-bit float, then that may or may not justify it.

V-man, Korval, I really don’t see why your depth values would lose any precision at all by copying them into a texture. If you use GL_DEPTH_COMPONENT as the texture format, the texture will be 24 bpp just like your actual Z-buffer. You don’t need depth replace or fragment programs or whatever – just plain old GL_ARB_depth_texture and GL_ARB_shadow.

You should be able to get rid of most of the artefacts by

(a) making zNear as large and zFar as small as possible while rendering your shadow map,
(b) rendering backfaces only, and
(c) making sure that backfaces are not lit (except by ambient light).

You may still have problems in a few situations, e.g. for very thin objects or objects that aren’t closed, but nothing that can’t be solved by beating up your artists.
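Point (a) is worth quantifying: with the standard perspective mapping of eye-space distance to window-space depth, the tighter the [zNear, zFar] range, the more depth-buffer resolution each unit of distance gets. A small sketch of the mapping (my function, standard GL depth formula, output in [0,1]):

```c
/* Window-space depth for eye-space distance z, near plane n, far
 * plane f, under a perspective projection: d = f(z - n) / (z(f - n)).
 * d is 0 at the near plane and 1 at the far plane. */
float window_depth(float z, float n, float f)
{
    return (f * (z - n)) / (z * (f - n));
}
```

With a tight range, two objects a fixed distance apart map to depth values that are much further apart, which is exactly what the shadow comparison needs.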

– Tom

[This message has been edited by Tom Nuydens (edited 03-11-2003).]

>>>Assuming you do regular lighting cosine term, so rear-facing triangles get no light, then any imprecision on the shadow buffer WRT the rear triangles doesn’t matter; the triangles are already shadowing themselves.<<<

In theory, but with smooth-shaded polygon models you often have backfacing polygons that still have some of their vertex normals pointing towards the light. Those bastards cause trouble with any shadow algorithm, because even if the lighting equation is correctly applied, they still look like crap. I think this is the problem the guy is talking about.

One possible solution is to use the shadow map’s color channel to apply smoothstep to these edges. Unfortunately this causes the shadow edge to shift a little, so thin geometry gets overly dark.

-Ilkka

You could do checks on the poly’s true surface normal, not the averaged one. That would mean you’d need to store it with the vertex, or calculate it from the verts at draw time if you’re processing walls.

Originally posted by Tom Nuydens:
If you use GL_DEPTH_COMPONENT as the texture format, the texture will be 24 bpp just like your actual Z-buffer.

That sounds good. Looks like it’s possible to go up to 32 bpp.
Is there a demo somewhere using this?

I’m working on shadowing too, and am having some success in my application area. In order to soften the shadow edges, it is recommended that the shadow/depth texture be sampled bilinearly. I don’t know how to do that, however. The texture_filter4 extension provides a FILTER4_SGIS min and mag option. Is this the best way to do shadow texture sampling?

Originally posted by gstrickler:
Is this the best way to do shadow texture sampling?

Ah, my favourite subject.

I think the best thing you can do with current hardware is to imitate percentage-closer filtering by multitexturing or multipassing. The idea is to perform the shadow test for each fragment at several shadow map positions slightly offset from the actual sample position, and average the results. The hardware’s linear shadow filtering, which by itself looks ugly, can help a lot here to remove banding, so you don’t need to sample too many positions. The PCF will increase surface acne a little, so if you are using front-face depth, make sure you increase your shadow bias.
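A rough CPU version of that percentage-closer idea, assuming a 3x3 box of taps (tap pattern and names are my choice): run the binary shadow test at each offset and average, so shadow edges get fractional values instead of a hard step.

```c
/* Percentage-closer filtering over a 3x3 neighbourhood of a depth map
 * stored as a w*h float array. Returns the lit fraction in [0,1]:
 * 1 = fully lit, 0 = fully shadowed. `bias` fights surface acne. */
float pcf_3x3(const float *map, int w, int h,
              int x, int y, float fragment_depth, float bias)
{
    int lit = 0, taps = 0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = x + dx, sy = y + dy;
            if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;
            if (fragment_depth - bias <= map[sy * w + sx]) lit++;
            taps++;
        }
    }
    return taps ? (float)lit / (float)taps : 1.0f;
}
```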

The shadow edges can also be softened by adding a soft penumbra into the color channel of the shadow map. The advantage of this, in addition to having a penumbra, is that the shadow maps can be pre-filtered this way, so they don’t need as high a resolution as they would with PCF, unless you’re dealing with very thin geometry. Have I preached about this enough already? I don’t know if you can add a color channel to depth textures; you might have to use a separate texture for them.

Of course a lazy bastard would just use the default filtering and hide the jaggies by using noisy textures…

-Ilkka

Well, I never implemented shadow mapping, but looking at games like Splinter Cell, or the ogre demo, it looks like it can be a very good method. I think better than SV.
I know of two articles that might interest you:
The first is about deep shadow maps, a method from Pixar, which I guess cannot be implemented in real time for dynamic lights, but it is interesting.
The second is about using dual-paraboloid maps to make shadow mapping useful for point lights. I haven’t read this one.
I found them on Google.

Tom has a demo of dual-paraboloid shadow maps on his website (www.delphi3d.net). As far as I remember, it requires a heavy scene tessellation because the paraboloid projection is badly approximated by linear interpolation of tex coords. I wouldn’t consider it a viable method at the moment.
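For reference, the paraboloid parameterization maps a normalized direction d in the light’s space to 2D coordinates via p = d.xy / (1 + d.z) for the front hemisphere (the back map uses 1 - d.z). Because this mapping is non-linear, linearly interpolated texture coordinates drift away from it across large triangles, which is where the tessellation requirement comes from. A minimal sketch of the front-map half (function name is mine):

```c
/* Front-hemisphere dual-paraboloid mapping: project a normalized
 * direction (dx, dy, dz) with dz > -1 onto 2D coordinates (u, v)
 * via division by (1 + dz). */
void paraboloid_front(float dx, float dy, float dz, float *u, float *v)
{
    float denom = 1.0f + dz;
    *u = dx / denom;
    *v = dy / denom;
}
```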

Y.