3D shadow texture?

Idea Time. I don’t know if this is the right place for this, but let’s try.

I was thinking about 3D textures this morning and wondering if there were a way to use them for shadow-casting. This is not a fully formed idea, so just hear me out.

Picture a 3D figure, such as a chair, represented in the volume of a 3D texture. All the pixels are transparent, except for some black pixels in the middle of the volume, which make up the 3D shape of the figure, in an aliased, Lego-blocks-looking fashion.

Now, if this texture volume could be projected from any arbitrary vector onto a surface, it would be the correct description of a shadow cast by such a shape. Adding some blur to the volume’s data, plus linear interpolation on the output, would smooth out the jaggies of its aliased nature. Plus it would give the shadow a nice, soft edge.
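
For what it’s worth, the blur pass itself is cheap to sketch on the CPU. Here is a minimal 3×3×3 box filter over a binary occupancy volume, turning hard voxels into fractional coverage; the resolution `N` and the array layout are just assumptions for illustration:

```c
#define N 64   /* volume resolution per axis; an illustrative assumption */

/* in:  binary occupancy, 1 = opaque voxel, 0 = empty
 * out: fractional coverage in [0,1] after a 3x3x3 box filter;
 *      voxels outside the volume are treated as empty */
void blur_volume(const unsigned char in[N][N][N], float out[N][N][N])
{
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x) {
                int sum = 0;
                for (int dz = -1; dz <= 1; ++dz)
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx) {
                            int xi = x + dx, yi = y + dy, zi = z + dz;
                            if (xi < 0 || yi < 0 || zi < 0 ||
                                xi >= N || yi >= N || zi >= N)
                                continue;   /* outside counts as empty */
                            sum += in[zi][yi][xi];
                        }
                out[z][y][x] = sum / 27.0f;
            }
}
```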

Problem is, how could this projection be done? Rendering the 3D texture by specifying a slice of the volume perpendicular to the projection vector would just render the data that intersects that slice, not the whole volume as seen from that vector.

This could be accomplished by rendering multiple parallel slices, sampled at evenly distributed distances through the volume. However, that sounds like it would negate the speed bonus of using a texture for this purpose; you might as well have projected the 3D geometry itself, via classic shadow-casting techniques.
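
To make the cost concrete, here is a minimal CPU-side sketch of that “collapse”: marching evenly spaced samples along the light direction through a binary occupancy volume. Everything here (resolution, step count, function names) is an illustrative assumption, not a real implementation:

```c
#include <math.h>

#define N 64                          /* volume resolution per axis (assumed) */

static unsigned char volume[N][N][N]; /* 1 = opaque voxel, 0 = empty */

/* Nearest-neighbour sample; returns 0 outside the volume. */
static int sample(float x, float y, float z)
{
    int i = (int)floorf(x), j = (int)floorf(y), k = (int)floorf(z);
    if (i < 0 || j < 0 || k < 0 || i >= N || j >= N || k >= N)
        return 0;
    return volume[k][j][i];
}

/* March from a point along the (normalised) light direction;
 * any opaque hit means the point is shadowed by the volume. */
int shadowed(float px, float py, float pz,
             float lx, float ly, float lz)
{
    const int   steps = N;            /* roughly one sample per voxel */
    const float dt    = 1.0f;         /* step length in voxel units */
    for (int s = 1; s <= steps; ++s)
        if (sample(px + lx * dt * s,
                   py + ly * dt * s,
                   pz + lz * dt * s))
            return 1;
    return 0;
}
```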

Can anyone see what I am picturing?
A way to apply a 3D texture as a whole, rather than just a slice?
This may just be a dead end, but I thought I would run it up the flagpole and see who salutes.

Let’s review:

  1. it will give low-resolution shadows, as 3D textures take a lot of space
  2. it will be slow to generate the 3D shadow map
  3. it will be slow to do the shadow test

The only advantage is that point 3) is independent of the geometry complexity. It might be useful for very complex scenes, with area lights or many, many lights. But with the lack of resolution from the 3D shadow map, I am not sure it is worth it.

A better way would be to use an octree of voxels instead of a plain 3D texture, but then it is called a raytracer.
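
To see why, here is a toy sketch of what the lookup becomes with a sparse voxel octree: a per-query descent through the tree, i.e. the inner loop of a raytracer, instead of one texture fetch. The node layout is an assumption, not any particular engine’s:

```c
#include <stddef.h>

typedef struct Node {
    struct Node *child[8];   /* NULL = empty subtree, skippable for free */
    int solid;               /* at a leaf: 1 if the voxel is opaque */
} Node;

/* Query occupancy at integer coordinates inside a cube [0,size)^3;
 * size must be a power of two. */
int occupied(const Node *n, int x, int y, int z, int size)
{
    if (!n) return 0;                 /* empty space terminates early */
    if (size == 1) return n->solid;   /* reached a single voxel */
    int h  = size / 2;
    int ix = x >= h, iy = y >= h, iz = z >= h;
    return occupied(n->child[ix | (iy << 1) | (iz << 2)],
                    x - ix * h, y - iy * h, z - iz * h, h);
}
```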

True, it would be a low-resolution texture. It wouldn’t be useful for sharp, highly detailed shadows. But then, many people have been trying to get soft-edged shadows without having to perform blur passes anyway. That was something John Carmack was unsatisfied with in his 5th generation engine, used in Doom 3.

I don’t think it would be slow to do the shadow test, since there would be 0 setup time per light source.

Zero setup time, yes, but a lot of texture samples are needed to “collapse” the texture along the light direction.

It is probably worth coding a bit to see how it holds water :)

Yes. That is the part I would like to “cheat” at, if I could.

I wish there were a way to bias and scale the linear interpolation sample range. Something like that could allow a single slice to do the trick, instead of multiple slices.

This has been done, but better. Depth-map shadow textures store depth from the light and then project this onto the scene; they use the projection coordinate r and compare it with the depth-map value to see if a location is in shadow. So instead of projecting a volume onto the scene, they project the depth of the illuminated surface; whatever does not match this depth is in shadow.
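
In sketch form (CPU-side, with an assumed depth-map resolution and bias value), the per-point test is just one fetch and one comparison:

```c
#define DM 512                       /* depth-map resolution (assumed) */

static float depth_map[DM][DM];      /* nearest depth seen from the light */

/* (s, t) are the light-space texture coordinates of the scene point,
 * r is its projected depth; lit only if r is not farther than stored. */
int lit(float s, float t, float r)
{
    const float bias = 0.0015f;      /* avoids self-shadowing acne (assumed) */
    int u = (int)(s * DM), v = (int)(t * DM);
    if (u < 0 || v < 0 || u >= DM || v >= DM)
        return 1;                    /* outside the map: assume lit */
    return r <= depth_map[v][u] + bias;
}
```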

The seminal paper, by Pixar in 1987, improves on earlier z-based shadow approaches by introducing percentage-closer filtering. The basic idea is that the test between the projected r and the depth-map z is performed before averaging (and there’s some sample-pattern stuff in there too).

http://graphics.pixar.com/ShadowMaps/paper.pdf
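
A minimal sketch of the PCF idea, building on the `depth_map`/`lit` sketch above: run the r-vs-z comparison at each nearby texel first, then average the binary results, never the raw depths. The fixed 3×3 kernel here is an assumption; the paper uses fancier sample patterns:

```c
/* Returns the fraction of the kernel that passes the depth test,
 * i.e. a soft shadow value in [0,1] rather than a hard 0/1. */
float pcf_shadow(float s, float t, float r)
{
    const float texel = 1.0f / DM;
    float lit_sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            lit_sum += (float)lit(s + dx * texel, t + dy * texel, r);
    return lit_sum / 9.0f;
}
```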

NVIDIA has a modern whitepaper here:

http://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf

Although, from what I can tell, that one enhances the calculation with a light-size estimation for more accurate penumbras.
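
The penumbra estimate there boils down to a similar-triangles formula. A sketch, with `light_size` and the average blocker depth as assumed inputs (the actual paper finds the blocker average with a search pass over the depth map):

```c
/* Similar triangles: a wider light, or a blocker farther from the
 * receiver, gives a wider penumbra, so the PCF kernel is scaled by
 * the returned width. Assumes d_blocker_avg > 0. */
float penumbra_width(float d_receiver, float d_blocker_avg, float light_size)
{
    return (d_receiver - d_blocker_avg) * light_size / d_blocker_avg;
}
```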

Intrinsically, your volume idea has the potential to deliver better penumbras through complex geometry, but it would be incredibly expensive to render. Others have used layered shadow slices, which is similar to your suggestion but more efficient. Check out “Efficient image-based methods for rendering soft shadows”, R. Ramamoorthi et al., SIGGRAPH 2000.

Here it is:

http://www1.cs.columbia.edu/~ravir/papers/shadows/shadowslrs.pdf

If I remember correctly, Quake 3 uses an ultra-low-resolution 3D texture to set the light level of the non-lightmapped objects in the game.
It is of course all precalculated, but if you had enough texture memory and processing power it could work; it’s just that other methods would still have the upper hand, and light directionality is a problem with this method.

Yes, you remember correctly, but that method was quite ugly in Quake 3.
It was indeed not used for shadowing at all, only for some kind of diffuse lighting from an averaged direction.
Moreover, the vertical resolution was very low to save memory space. It was also used for dynamic lighting of the world, with rockets for example. You could see the linear interpolation from the very low-res 3D “texture”, and I found it less visually pleasing than the Quake 1 method (a more precise sphere of influence was used). That, and the fact that Quake 3 dynamic lights were multiplied with the precalculated lightmaps instead of added, made Q3 a regression in dynamic lighting compared to Q1.

Back on topic, this reminds me of a keyword used by Carmack for some early experiments: lumigraph. There are a number of papers on the subject.

Are you saying that each object would have an independent 3D texture representation? I ask because we are dealing with 3D textures, and you either have large resource requirements or low-resolution textures. It seems like once you have a “scene”, you are either going to need a HUGE texture or a bunch of smaller 3D textures. If the objects are static, you could potentially use 1-bit 3D textures, if the hardware and driver support them, or try using perfect spatial hashing to reduce the storage size. These could potentially speed up the process of stepping through the ray in the 3D texture to determine occlusion, by keeping more of the texture in cache.
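
A sketch of the 1-bit packing idea in software, assuming a simple linear layout: at one bit per voxel, a 128³ volume fits in 256 KB, so far more of it stays in cache during the ray stepping:

```c
#include <stdint.h>
#include <stddef.h>

#define NB 128                            /* voxels per axis (assumed) */

static uint32_t bits[NB * NB * NB / 32];  /* 1 bit per voxel */

static size_t voxel_index(int x, int y, int z)
{
    return ((size_t)z * NB + y) * NB + x; /* x varies fastest */
}

void set_voxel(int x, int y, int z)
{
    size_t i = voxel_index(x, y, z);
    bits[i >> 5] |= 1u << (i & 31);
}

int get_voxel(int x, int y, int z)
{
    size_t i = voxel_index(x, y, z);
    return (bits[i >> 5] >> (i & 31)) & 1;
}
```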

I am not certain that this would be too slow, because parallax occlusion mapping uses the same concept of tracing a ray through a texture. The two differences between these uses are the number of rays traced in a scene and the fact that you are stepping through a 3D texture; most GPU caches and memory layouts are not as optimized for 3D as they are for 2D. Stepping through a ray lying in a single slice of a 3D texture should be just as fast as stepping through a 2D texture. However, once the ray starts progressing to the next slice, you will probably start killing a lot of the benefit of having a texture cache.

Yeah, the 3D volume texture is just a non-directional ambient map unless you use it to modulate a specific light source, in which case it becomes a volumetric shadow texture.

Parallax occlusion mapping iteratively samples a 2D texture to produce an offset fetch. It does not trace through a volume, and it benefits from significant cache coherency.

The point of a volume map is that it would make a location’s shadow result a single read.

It is generating the volume in the first place that requires tracing through it with writes, and that is an entirely different proposition.

I have not seen a well-articulated benefit. Storing the volume surface (depth-map shadows) gets you most of what you need, and the finer points of penumbrae are a sampling challenge. Unless you have a static scene, volume shadows seem to be a step backwards, with only somewhat specialized applications.