Omni-directional shadow cube maps

Hello,

In the "6800 Leagues Under the Sea" shadows presentation they talk about omni-directional shadow ‘cube’ maps with a reference to Newhall/King. Does anyone know the corresponding paper, or how this technique works?

Thanks in advance!

I also saw that and was baffled after so much googling turned up nothing. Anyone?

I think it is supposed to be in ShaderX 3.

It was more a plug than a reference :slight_smile:

That slide was one from an internal presentation that I gave about deferred rendering (so it was supposed to be very tongue-in-cheek); I was a bit surprised to see it in the 6800 Leagues presentation.

But, as Parveen mentioned, the article will be in ShaderX3.

So gleaning from the presentation:

  • Omni-directional shadow ‘cube’ maps
    ~ Simulate cube map with 2D texture
    ~ Lookup with an auxiliary smaller cube map

I can’t quite understand how this technique is implemented. Is it a specialized technique to work well with deferred rendering?

What information is being stored in the 2D texture and what information is stored in the cube map?

[quote]Originally posted by Parveen Kaler:
So gleaning from the presentation:[/quote]

major axis
direction  target                          sc  tc  ma
---------- ------------------------------- --- --- ---
+rx        TEXTURE_CUBE_MAP_POSITIVE_X_ARB -rz -ry rx
-rx        TEXTURE_CUBE_MAP_NEGATIVE_X_ARB +rz -ry rx
+ry        TEXTURE_CUBE_MAP_POSITIVE_Y_ARB +rx +rz ry
-ry        TEXTURE_CUBE_MAP_NEGATIVE_Y_ARB +rx -rz ry
+rz        TEXTURE_CUBE_MAP_POSITIVE_Z_ARB +rx -ry rz
-rz        TEXTURE_CUBE_MAP_NEGATIVE_Z_ARB -rx -ry rz

s = ( sc/|ma| + 1 ) / 2
t = ( tc/|ma| + 1 ) / 2

Here (rx, ry, rz) are the coordinates supplied to sample the cube map, (s, t) are the coordinates computed to sample a single cube face, and sc, tc, ma are temporaries. The table above shows just one of many theoretically possible mappings from (rx, ry, rz) to (sc, tc, ma). When we place the cube faces side by side in the 2D texture, we can rotate and flip each face as we like (it only requires adjusting the view matrix while rendering to the texture). So let's design our own, simpler mapping, to get rid of abs(ma) and of the negated rx/ry/rz entries:

major axis
direction  target                          sc  tc  ma
---------- ------------------------------- --- --- ---
+rx        TEXTURE_CUBE_MAP_POSITIVE_X_ARB +rz +ry rx
-rx        TEXTURE_CUBE_MAP_NEGATIVE_X_ARB +rz +ry rx
+ry        TEXTURE_CUBE_MAP_POSITIVE_Y_ARB +rx +rz ry
-ry        TEXTURE_CUBE_MAP_NEGATIVE_Y_ARB +rx +rz ry
+rz        TEXTURE_CUBE_MAP_POSITIVE_Z_ARB +rx +ry rz
-rz        TEXTURE_CUBE_MAP_NEGATIVE_Z_ARB +rx +ry rz

s = (sc/ma) * 0.5 + 0.5
t = (tc/ma) * 0.5 + 0.5
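
For comparison, here is what that face selection looks like written out with explicit branches. This is just an illustrative GLSL sketch (CubeCoord is the (rx, ry, rz) lookup direction, as in the snippet further down); the point of the LUT below is to avoid exactly these branches:

// Branch-based version of the simplified mapping, for illustration only.
vec3  r = CubeCoord;                      // (rx, ry, rz)
vec3  a = abs(r);
float sc, tc, ma;
if (a.x >= a.y && a.x >= a.z) {           // +rx or -rx is the major axis
    sc = r.z;  tc = r.y;  ma = r.x;
} else if (a.y >= a.z) {                  // +ry or -ry is the major axis
    sc = r.x;  tc = r.z;  ma = r.y;
} else {                                  // +rz or -rz is the major axis
    sc = r.x;  tc = r.y;  ma = r.z;
}
float s = (sc / ma) * 0.5 + 0.5;
float t = (tc / ma) * 0.5 + 0.5;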

Now sc is either +rx or +rz, and tc is either +ry or +rz. To do this selection in the shader we will use a LUT: create a cube map with a single pixel per face (size 1 x 1), GL_NEAREST filtering, RGBA format, and the contents below:

major axis                                  texel value:
direction  target                           r   g   b   a
---------- ------------------------------- --- --- --- ---
+rx        TEXTURE_CUBE_MAP_POSITIVE_X_ARB  1   0   0  0
-rx        TEXTURE_CUBE_MAP_NEGATIVE_X_ARB  1   0   0  1/8
+ry        TEXTURE_CUBE_MAP_POSITIVE_Y_ARB  0   1   0  2/8
-ry        TEXTURE_CUBE_MAP_NEGATIVE_Y_ARB  0   1   0  3/8
+rz        TEXTURE_CUBE_MAP_POSITIVE_Z_ARB  0   0   1  4/8
-rz        TEXTURE_CUBE_MAP_NEGATIVE_Z_ARB  0   0   1  5/8

The ‘a’ component contains the horizontal offset of the face within the 2D texture. Shader code:

vec3 CubeCoord;                                              // == (rx, ry, rz), the cube map lookup direction
vec4 Swizzler = textureCube(OurTinyCubeMap, CubeCoord);
vec2 sc_tc = mix(CubeCoord.xy, CubeCoord.zz, Swizzler.xy);   // select sc & tc
float ma = dot(CubeCoord, Swizzler.xyz);                     // select ma
sc_tc /= ma;                                                 // projection
sc_tc = sc_tc * 0.5 + 0.5;                                   // map [-1,1] to [0,1]
sc_tc.x = sc_tc.x / 8.0 + Swizzler.w;                        // select face (the strip is 8 slots wide, 6 of them used)
vec4 result = texture2D(CubeFacesSideBySide, sc_tc);

This could be tweaked to use a 1:4 aspect ratio, allowing a higher maximum face resolution.
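
For completeness, here is the same snippet wrapped into a minimal, self-contained fragment shader. It is only a sketch of how the pieces fit together; the uniform names match the snippet above, while the LightVec varying (the vector from the light to the fragment) is a hypothetical name of mine, not from the article:

uniform samplerCube OurTinyCubeMap;        // the 1x1-per-face swizzle/offset LUT described above
uniform sampler2D   CubeFacesSideBySide;   // the six faces packed side by side in one 2D texture
varying vec3        LightVec;              // hypothetical: vector from the light to the fragment

void main()
{
    vec3  CubeCoord = LightVec;                                     // == (rx, ry, rz)
    vec4  Swizzler  = textureCube(OurTinyCubeMap, CubeCoord);
    vec2  sc_tc     = mix(CubeCoord.xy, CubeCoord.zz, Swizzler.xy); // select sc & tc
    float ma        = dot(CubeCoord, Swizzler.xyz);                 // select ma
    sc_tc   = (sc_tc / ma) * 0.5 + 0.5;                             // project and map to [0,1]
    sc_tc.x = sc_tc.x / 8.0 + Swizzler.w;                           // move into this face's slot
    gl_FragColor = texture2D(CubeFacesSideBySide, sc_tc);
}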

There are a few irritating details regarding texture clamping and virtual cube seams, but MZ has the right idea (my implementation has less per-pixel work to do the mapping).

The reason this is faster than using floating-point textures is that free bilinear PCF on depth textures more than offsets the extra per-pixel work.

The coolest trick this allows (one I only half-implemented in my demo for ShaderX3) is that the faces can be dynamically resized as desired; there is no restriction that each cube face have an equal edge length (or that it exist at all).
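
As a purely hypothetical sketch of how the lookup side could support faces of different sizes (not necessarily how the ShaderX3 article does it): replace the fixed "/ 8 + offset" step with a per-face rectangle stored in a second tiny cube map, e.g.:

// Hypothetical: FaceRectLUT is a second 1x1-per-face cube map storing each
// face's rectangle in the atlas as (scaleS, scaleT, offsetS, offsetT).
uniform samplerCube FaceRectLUT;

vec4 rect   = textureCube(FaceRectLUT, CubeCoord);
vec2 st     = sc_tc * rect.xy + rect.zw;    // sc_tc is the per-face [0,1] coordinate, before the face-offset step
vec4 result = texture2D(CubeFacesSideBySide, st);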

Thanks for your answers!

Sorry if this question is strange, but what is the advantage of this technique compared to ordinary cube maps?

You can use depth textures (and hardware shadow comparison/filtering). This is only relevant on NVIDIA products (I don’t know of any other IHV that supports these), but it’s a pretty nice performance win.
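
To make that concrete, here is a hedged sketch of how the final lookup changes when the packed faces are stored in a depth texture with the compare mode enabled, so the hardware does the shadow test (with free bilinear PCF where supported). LightNear/LightFar are hypothetical names for the planes of the 90-degree projections used when rendering the faces; sc_tc and ma are computed exactly as in MZ's snippet:

uniform sampler2DShadow CubeFacesDepth;   // the packed faces as a depth texture, compare mode enabled
uniform float LightNear;                  // hypothetical: near plane of the light's face cameras
uniform float LightFar;                   // hypothetical: far plane of the light's face cameras

// Reconstruct the window-space depth the light's face camera stored for this
// fragment, from its distance abs(ma) along the major axis:
float depthRef = 0.5 + 0.5 * ((LightFar + LightNear) / (LightFar - LightNear)
               - (2.0 * LightFar * LightNear) / ((LightFar - LightNear) * abs(ma)));

// Hardware depth comparison; with bilinear PCF this is already a filtered
// 0..1 shadow factor.
float shadow = shadow2D(CubeFacesDepth, vec3(sc_tc, depthRef)).r;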