Shadows + Transparency?

Hi all,

I remember seeing a demo, from Quake 3 I think, where there was a light source, a creature, and a stained-glass window between them. A shadow was cast from the edges of the window, and the colours of the window were projected onto the creature too.

I can’t find any info on the web about how to do this, though. If it was Quake 3 then they’d have been using shadow volumes (and I believe it was pixel-accurate). I understand that method for shadowing, but can’t see how to extend it to include coloured transparent geometry, which should cast a coloured shadow.

Maybe it was a hack, with the window acting like a second light source and projecting a pre-computed texture, extending the volume from the light source, but occluding the original light.

I’d love to know if there’s a way to do it for arbitrary transparent geometry though, without requiring an additional render pass for every transparent polygon.

Any thoughts? Does anyone know how that Quake 3 demo worked (if indeed it was Quake 3)?

Thanks,
Rob.

Are you thinking about this Unreal 3 engine demo?

http://www.unrealtechnology.com/screens/SoftShadows.jpg

If so, you just render the scene with whatever shadow algorithm you like and when you encounter a lit pixel you project the stained glass texture on it.

To recap, it’s just a projected texture combined with a point light.
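As a toy illustration of that recap: a lit point is transformed by the light’s view-projection matrix, remapped from clip space [-1, 1] into texture space [0, 1], and the result is used to look up the stained-glass texture. This is a minimal sketch of the math only (no engine code, all names made up):

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Bias matrix: maps clip-space [-1, 1] to texture-space [0, 1].
BIAS = [
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
]

def project_to_light_uv(light_vp, world_pos):
    """Return the (u, v) lookup into the light's stained-glass
    texture for a lit point, given the light's view-projection."""
    clip = mat_vec(light_vp, world_pos + [1.0])
    tex = mat_vec(BIAS, clip)
    return tex[0] / tex[3], tex[1] / tex[3]  # perspective divide
```

With an identity view-projection (a light looking straight down -z, orthographically), the world origin lands in the middle of the texture at (0.5, 0.5).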

/A.B.

Ah thanks, yes that’s it! I knew it was something 3 :slight_smile:

OK, so it’s kind of the simple version I described (maybe not very well).

Any thoughts on how to do this more generally? I have an arbitrary collection of geometry, each polygon of which may have coloured transparency, and should cast a coloured shadow on all other geometry. Geometry can move, light source can move, all in real time?

I don’t ask for much, I know :slight_smile:
Is it at all possible?

The only way I can think of is to make each transparent polygon cast a complete shadow with respect to the original light source, and to project a new light/texture out from that polygon, heading away from the original light source.

So each polygon casts a shadow from the original light, but also becomes a new light.

It already doesn’t sound very real-time, as each transparent polygon effectively becomes a new shadow-casting light.

It gets trickier than that, too. Consider two coloured transparent polygons A and B, and one light source. B may fall partly into shadow from A. So the new light emitted from B must be partly the colour of B and partly the colour of A+B (I know it’s not “+” mathematically, but you know what I mean). Some sort of hierarchical use of the stencil buffer might make this doable, but again: lots of passes, not very real-time.
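For what it’s worth, the “colour of A+B” part has a clean answer for idealized filters: light passing through several coloured transparent surfaces is just multiplied component-wise by each surface’s transmission colour, and that multiply is order-independent. A hypothetical sketch:

```python
def filter_light(light, *filters):
    """Component-wise multiply of a light colour by each transmission
    colour it passes through (idealized coloured filters)."""
    r, g, b = light
    for fr, fg, fb in filters:
        r, g, b = r * fr, g * fg, b * fb
    return (r, g, b)
```

So the region lit “through A then B” is just light * A * B, which is exactly what repeated masked multiplies into a framebuffer would accumulate.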

I should probably not try, right? :slight_smile:
Thanks,
Rob.

Here’s an algorithm outline:

  1. Clear the framebuffer with the light’s color
  2. For each transparent triangle create a stencil shadow mask
  3. Multiply the framebuffer with the triangle’s transparency color while masking with the shadow mask

If a list of triangles with the same transparency color doesn’t overlap when seen from the light’s perspective, you can render them together to speed things up.

Opaque triangles are handled by giving them black as the transparency color and rendering them in one separate call.

When you create your shadow masks only the opaque geometry should be in the depth buffer.

If you have many lights you will have to render to a texture and accumulate the result as you go.
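The three steps above can be simulated on the CPU with a tiny light-space “framebuffer”, with each triangle’s stencil shadow mask standing in as a boolean grid. Purely illustrative names and sizes:

```python
def shadow_accumulate(light_color, masks_and_colors, width, height):
    """Toy CPU simulation of the outline: clear to the light's colour,
    then for each triangle multiply only the pixels its mask covers
    by that triangle's transmission colour."""
    # Step 1: clear the framebuffer with the light's colour.
    fb = [[light_color for _ in range(width)] for _ in range(height)]
    # Steps 2-3: masked multiply per transparent triangle.
    for mask, (cr, cg, cb) in masks_and_colors:
        for y in range(height):
            for x in range(width):
                if mask[y][x]:
                    r, g, b = fb[y][x]
                    fb[y][x] = (r * cr, g * cg, b * cb)
    return fb
```

An opaque blocker is just a “triangle” whose colour is black, exactly as described: its masked pixels multiply down to (0, 0, 0).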

/A.B.

It is quite easy… put the shadow map in texture unit 0 and the light texture in unit 1 (a nice mosaic like in the U3 screenshot), load modelview * projection from the light source into the texture matrix, and render the shadow-affected faces. It looks like classic projective texture mapping.

yooyo

It just hit me that you can extend this solution from per-triangle transparency to per-pixel transparency. New algorithm outline:

  1. Clear the framebuffer with the light’s color
  2. For each transparent triangle create a stencil shadow mask
  3. For each pixel in the framebuffer, transform the pixel’s clip-space position into the projected texture space of the triangle, sample the triangle’s texture there, and multiply the framebuffer with that value (you will need a fragment shader to do this).
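A CPU mock-up of step 3, with the fragment-shader texture fetch replaced by a nearest-neighbour lookup into the triangle’s texture (names and the trivial pixel-to-texture mapping are illustrative only):

```python
def per_pixel_multiply(fb, mask, texture):
    """Fragment-shader-like pass: each masked pixel fetches the
    triangle's transmission colour from its projected texture and
    multiplies the framebuffer by it."""
    h, w = len(fb), len(fb[0])
    th, tw = len(texture), len(texture[0])
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                # Nearest-neighbour "projective" lookup: map the
                # pixel's position into the triangle's texture space.
                u = x * tw // w
                v = y * th // h
                cr, cg, cb = texture[v][u]
                r, g, b = fb[y][x]
                fb[y][x] = (r * cr, g * cg, b * cb)
    return fb
```

The only difference from the per-triangle version is where the colour comes from: a texture sample per pixel instead of one constant per triangle.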

/A.B.

Brinck is not totally right about the Unreal 3 method; it’s actually based on cube mapping. We discussed it here some time ago:
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=012341

And Humus made a demo using a quite simple solution based on shadow mapping (certainly what yooyo was referring to): http://www.humus.ca/index.php?page=3D&ID=39

SeskaPeel.

Brinck is not totally right about Unreal 3 method, it’s actually based on cube mapping.
Sure I am :wink: I actually saw this demo behind closed doors and talked with some of Epic’s guys at last year’s GDC.

In the first part of the demo a lantern was moved around inside a castle, and they showed off two effects:

  1. Distance-attenuated shadows. This was, as you said, simply a distance-based blend between two cubemaps, one blurred and one sharp. As far as I could tell, only the shadows cast from the mask on the lantern were blurred, and not the shadows cast from the other geometry.

  2. The parallax effect on the walls.
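If I understood the description of effect 1 correctly, it amounts to a distance-driven lerp between a sharp and a blurred shadow sample. A hypothetical sketch (the near/far clamp range is my own assumption):

```python
def blend_shadow(sharp, blurred, distance, near, far):
    """Lerp from the sharp to the blurred shadow sample as the
    receiver gets farther from the light, clamped to [near, far]."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    return sharp * (1.0 - t) + blurred * t
```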

In the demo with the mosaic lighting they didn’t use any distance blurring of the shadows; however, the shadow buffer was oversampled, so the shadows looked pretty smooth anyway.

/A.B.

Are you thinking about this Unreal 3 engine demo?

http://www.unrealtechnology.com/screens/SoftShadows.jpg

If so, you just render the scene with whatever shadow algorithm you like and when you encounter a lit pixel you project the stained glass texture on it.

To recap, it’s just a projected texture combined with a point light.

/A.B.
I was saying you were wrong about this statement. The stained-glass texture is a cube map, not a 2D one. Now you could say the cube map is “projected”, but any cube map is …

SeskaPeel.