Cube map texture coordinates

Hi, I'm learning to make depth-based shadow maps, and I have a question.

To support omnidirectional lights, the easiest way is to render the scene 6 times, creating a depth cube map.

With a directional light and a simple 2D depth map, you generate the texture coordinates using the "projector" model. This is easy. But when you use a cube map, how do I create the texture coordinates? Do I need only one texture coordinate vector, or do I need a set of 6 vectors? Can I do the depth test with only a single cube map texture access?
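For reference, the 2D case I mean looks roughly like this in a vertex shader (just a sketch; the matrix names are my own):

uniform mat4 u_lightViewProj;   // light view * projection (the "projector" matrix)
uniform mat4 u_model;           // object-to-world matrix
varying vec4 v_shadowCoord;

void main()
{
    // scale/offset from clip space [-1, 1] into texture space [0, 1]
    mat4 bias = mat4(0.5, 0.0, 0.0, 0.0,
                     0.0, 0.5, 0.0, 0.0,
                     0.0, 0.0, 0.5, 0.0,
                     0.5, 0.5, 0.5, 1.0);
    v_shadowCoord = bias * u_lightViewProj * u_model * gl_Vertex;
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}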

If someone knows the answer, or knows of a tutorial or white paper, I'll be very grateful.

API

Cube depth maps are in fact easier to do than standard projective (spot light) shadow maps. In the latter case, you have to transform the point that needs to be tested for shadow into light clip space to get its depth sample. A cube map, however, works in world space. So all you need is a world-space light-to-point vector (easily calculated in the vertex shader). You can then use this vector directly as the lookup for the shadow sample in the cube map.
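A minimal GLSL fragment shader sketch of that lookup (the names are made up, and it assumes the cube map stores the light-to-occluder distance, normalized by the light radius, in its alpha channel):

uniform samplerCube u_shadowCube;    // normalized light-to-occluder distance per texel
uniform vec3  u_lightPosWS;          // light position in world space
uniform float u_invLightRadius;      // 1.0 / light radius
varying vec3  v_posWS;               // world-space position from the vertex shader

void main()
{
    vec3  lightToFrag = v_posWS - u_lightPosWS;      // lookup vector, no normalization needed
    float storedDist  = textureCube(u_shadowCube, lightToFrag).a;
    float fragDist    = length(lightToFrag) * u_invLightRadius;
    float shadow      = (fragDist - 0.005 > storedDist) ? 0.0 : 1.0;   // small bias against acne
    gl_FragColor = vec4(vec3(shadow), 1.0);
}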

Thank you very much for the solution.

I supposed that I only needed one texture access, but I didn't figure the solution would be so easy (and obvious; next time I must think a little more).

API

Uhmm, as far as I know there is no hardware yet that supports depth cubemaps, or did I recently miss something? In fact, I hope I did, so I don't have to mess with the whole floating-point texture, packing/unpacking of radial distances, etc. I think the conventional wisdom has been to render the 6 views into one large 2D depth texture and then use a cubemap lookup for the offset into that big texture at rendering time, but depth cubemaps would be much more elegant :)

I guess they do not exist. As for your case, I believe you must render the scene 6 times.

Use a color cubemap with the depth encoded in RGBA, effectively giving you a "depth cubemap".
There are a few demos on the web using such a cubemap with encoded depth; ask Google and you might find them.
I believe (not sure) that Humus has a demo using this technique.

Yep, this is the only way to do shadow cubemapping until hardware vendors finally decide to implement real support for depth cubemaps. Rendering depth-only (z-buffered) polygons is twice as fast as rendering colorized ones, so that would definitely be an improvement over the standard depth-packing solution.

BTW, something related:
I really think we should have cubemaps with different face sizes. Imagine a light covering a big chunk of a level: getting good shadow mapping results would mean using cubemaps with insane sizes as well as insane filtering. OK, there are already 512 MB cards, but there really are better ways to waste memory and speed. Being able to dynamically change the size of a given cubemap face would elegantly solve the problem. For example, if the scene rendered into one face hasn't got such a large depth variance, decreasing the size of that face wouldn't lead to such obvious artifacts, and vice versa.
Sure, we'd have to cope with some tricky filtering issues, especially along the edges of faces with different sizes, but that's not such a high price IMO.

Perspective Shadow Maps (PSM) try to remedy some of these problems using fixed-size depth maps.

By the way, I blindly assumed that API would be using shaders, so I proposed a shader solution. There is no hardware shadow mapping support for point lights, so you have to use shaders.

As for packing depth into an RGB(A) texture, I used the world-space distance from the light to the vertex, scaled by the inverse world-space light radius (since objects outside that radius are not lit and are shaded dark anyway), so that it fits in the 0-1 range, and put it in the alpha channel of the RGBA cube map. The RGB channels are used to store the colors of translucent objects for "colored" shadows. I think Humus has a demo on his site that uses a similar method too. However, 8 bits cannot give sufficient accuracy for lights that cover a larger area, and you can get artifacts. A 32-bit FPU can remedy that to some extent, but for larger-area lights an FP16 render target will definitely be necessary. I remember I got pretty bad artifacts, even for small lights, on a 9700 Pro because of its 24-bit FPU. Does the R5xx have a 32-bit FPU? I certainly hope so!
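In shader terms, the write pass boils down to something like this (just a sketch; the uniform names are arbitrary):

uniform vec3  u_lightPosWS;       // world-space light position
uniform float u_invLightRadius;   // 1.0 / world-space light radius
uniform vec4  u_materialColor;    // color of the (possibly translucent) occluder
varying vec3  v_posWS;

void main()
{
    // distance from the light, scaled into 0..1 so it fits an 8-bit alpha channel
    float d = length(v_posWS - u_lightPosWS) * u_invLightRadius;
    // RGB carries the occluder color for "colored" shadows, alpha carries the depth
    gl_FragColor = vec4(u_materialColor.rgb, clamp(d, 0.0, 1.0));
}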

I don't think perspective shadow CUBEmaps would work out, though…

As for depth packing, I was writing the unscaled radial distance directly into the red channel of an RGB16 floating-point texture and was getting VERY BAD artifacts even when the light wasn't covering such a large distance. Maybe 24-bit FP will work better.

I wouldn't suggest writing the unscaled world-space radial distance. One thing that worked well, at a little extra memory cost, was writing the fractional part of the world-space distance in the BLUE channel and the scaled, rounded-down distance in the alpha channel. This gives you much better accuracy when unpacking the depth in the lighting shader.
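Roughly like this (a sketch; how you scale the integer part depends on your light radius):

uniform float u_lightRadius;   // world-space light radius, used to scale the integer part

// pack: fractional part of the distance in BLUE, scaled rounded-down part in ALPHA
vec2 packDistance(float worldDist)
{
    float intPart  = floor(worldDist);
    float fracPart = worldDist - intPart;
    return vec2(fracPart, intPart / u_lightRadius);
}

// unpack in the lighting shader
float unpackDistance(vec2 blueAlpha)
{
    return floor(blueAlpha.y * u_lightRadius + 0.5) + blueAlpha.x;
}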

For point lights, I use an unwrapped cube depth texture and a cube map as a lookup table. I initialize each cubemap face with the texture coordinates of the corresponding part of my unwrapped cube texture. When updating the shadow map, I render the scene 6 times with depth only, which is accelerated (2x on NVIDIA, I think) when using a real depth texture. Then, when rendering the scene from the camera's point of view, I use the light direction to sample the cubemap, and with the coordinates I get back, I look into my depth texture.
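In the lighting shader, the lookup is roughly this (a sketch with made-up names):

uniform samplerCube u_cubeLUT;     // per direction: 2D coordinates into the unwrapped cube texture
uniform sampler2D   u_depthAtlas;  // the six depth views packed into one 2D texture
uniform vec3        u_lightPosWS;  // world-space light position
varying vec3        v_posWS;

float shadowMapDepth(vec3 posWS)
{
    vec3 dir = posWS - u_lightPosWS;            // light-to-fragment direction
    vec2 uv  = textureCube(u_cubeLUT, dir).rg;  // indirection: where this direction lives in the atlas
    return texture2D(u_depthAtlas, uv).r;       // stored depth for that direction
}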

When considering soft shadows, I think this method is easier than directly using a cube map, because I don't need to work out how to get neighbouring texels with a cube map. With this method it is possible to solve the cubemap edge problem for percentage-closer filtering (or other soft-shadow multi-sampling algorithms): I just render each face of the unwrapped cube with a field of view greater than 90 degrees. And since I get neighbouring texels directly by applying offsets to my 2D texture and not to the cube map, I don't have to bother with cube map edges.

I think my explanations are not very clear; sorry, I'm not that fluent in English. For those interested, just take a look at the presentation on the Mad Mod Mike NVIDIA demo: http://download.nvidia.com/developer/presentations/2005/SIGGRAPH/Truth_About_NVIDIA_Demos.pdf
It starts on page 130.

Originally posted by HellKnight:
I think the conventional wisdom has been to render the 6 views into one large 2D depth texture and then use a cubemap lookup for the offset into that big texture at rendering time, but depth cubemaps would be much more elegant :)
Yeah, that's what I meant. Gotta try it out :)

First, thanks all for the tips.

Second, yes, I'm using shaders (GLSL) to implement these algorithms; most of them are easier to implement with shaders.

When I started to learn depth shadows, I began with the typical 2D depth map and a spot light.

I want to go step by step, learning. I know that cube maps don't support a depth format, and that I need to do a "transformation".

About the next step, to avoid the 6 renderings, I was thinking of something like paraboloid and dual-paraboloid mapping (I saw a good white paper: Stefan Brabec, Thomas Annen, and Hans-Peter Seidel, "Shadow Mapping for Hemispherical and Omnidirectional Light Sources"). But I see that this doesn't give good resolution (one of the problems of depth mapping). What do you think?
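As I understand the paper, the warp for one paraboloid comes down to something like this in the vertex shader (just my sketch; the names are mine):

uniform mat4  u_modelToLight;   // brings vertices into light space, light at the origin
uniform float u_lightRadius;    // far range used to normalize the stored distance
varying float v_depth;

void main()
{
    vec3  posLS = (u_modelToLight * gl_Vertex).xyz;
    float len   = length(posLS);
    vec3  dir   = posLS / len;                 // unit direction from the light
    vec2  uv    = dir.xy / (1.0 + dir.z);      // paraboloid projection of the front hemisphere
    v_depth     = len / u_lightRadius;         // radial distance, normalized to 0..1
    gl_Position = vec4(uv, v_depth * 2.0 - 1.0, 1.0);
}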

Thanks.

API

Dual-paraboloid mapping requires the geometry to be well tessellated and is therefore not particularly well suited to game engines, where there can be a variety of meshes, from extremely low-poly (like a flat textured brick wall) to medium-poly.

Yeah, and as far as I know you can't render the world directly into a dual-paraboloid shadow map (even with a fragment shader); you have to do some transformations on the CPU… So rendering a full 360-degree view of the world must be done anyway.

As for increasing the resolution of cube-mapped omnidirectional shadow maps, a friend of mine came up with a nifty little technique that he calls "skewbe mapping." Basically, you warp the projections of the cube map faces so that the parts you care about (what you're looking at, basically) get a higher texel-to-world-geometry ratio than the others. More information is here: http://www.gamedev.net/community/forums/mod/journal/journal.asp?jn=335412&cmonth=8&cyear=2005 (see the August 2nd post)

Hey, that looks interesting. I’ll check it out today.