I’m implementing omnidirectional shadow mapping using cubemaps. Since I can’t create a depth cubemap on my hardware (MacBook Pro, ATI X1600), I’m packing fragment depth into the cubemap’s RGBA color channels. This seems to be a pretty common workaround, and searching for how to do this packing turned up a lot of discussion of the best approach. I’m using the approach detailed in this thread, because it makes sense (no magic!):
http://www.gamedev.net/community/forums/topic.asp?topic_id=486847
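For context, the depth-writing pass in my shadow shader looks roughly like this. It’s a sketch rather than my exact code (the lightPosition and lightRadius uniforms, and normalizing by lightRadius, are illustrative), but it shows where the packing function below gets called:

// Shadow-pass fragment shader (sketch; names are illustrative).
uniform vec3 lightPosition;  // light position in world space
uniform float lightRadius;   // far distance used to normalize depth to [0,1]

varying vec3 worldPosition;  // interpolated from the vertex shader

vec4 FloatToFixed( in float depth ); // packing function, defined below

void main()
{
    // Normalized [0,1] distance from the light to this fragment.
    float depth = length( worldPosition - lightPosition ) / lightRadius;
    gl_FragColor = FloatToFixed( depth );
}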
Here’s my code for packing depth to color and back. (Note: my depths are normalized to [0,1].)
#define DEBUG_PACKING 0

vec4 FloatToFixed( in float depth )
{
#if DEBUG_PACKING
    // Debug path: store the raw depth in the color channels (8-bit precision).
    return vec4( depth, depth, depth, 1.0 );
#else
    // Pack a normalized [0,1] depth across the four 8-bit channels.
    const float toFixed = 255.0/256.0;
    return vec4(
        fract(depth*toFixed*1.0),
        fract(depth*toFixed*255.0),
        fract(depth*toFixed*255.0*255.0),
        fract(depth*toFixed*255.0*255.0*255.0)
    );
#endif
}

float FixedToFloat( in vec4 shadowSample )
{
#if DEBUG_PACKING
    // Debug path: depth was stored directly in the red channel.
    return shadowSample.r;
#else
    // Unpack by weighting each channel back down.
    const float fromFixed = 256.0/255.0;
    return shadowSample.r*fromFixed/(1.0) +
           shadowSample.g*fromFixed/(255.0) +
           shadowSample.b*fromFixed/(255.0*255.0) +
           shadowSample.a*fromFixed/(255.0*255.0*255.0);
#endif
}
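On the lookup side, the comparison goes along these lines (again a sketch; the uniform names and the bias constant are illustrative):

// Lighting-pass shadow lookup (sketch; names are illustrative).
uniform samplerCube shadowMap;
uniform vec3 lightPosition;
uniform float lightRadius;

varying vec3 worldPosition;

float FixedToFloat( in vec4 shadowSample ); // unpacking function, defined above

float shadowTerm()
{
    vec3 fromLight = worldPosition - lightPosition;
    float fragmentDepth = length( fromLight ) / lightRadius;

    // Depth the shadow pass stored for this direction, unpacked from RGBA.
    float storedDepth = FixedToFloat( textureCube( shadowMap, fromLight ) );

    // Small illustrative bias against acne; 1.0 = lit, 0.0 = shadowed.
    return ( fragmentDepth - 0.005 <= storedDepth ) ? 1.0 : 0.0;
}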
When DEBUG_PACKING is 0 (the full packed path, in principle 32 bits of depth precision), I get the following junk:
However, when DEBUG_PACKING is 1 and depth is stored using only 8 bits (the red channel), I get good output:
Since the 8-bit, low-precision version renders correctly, I can infer that the rest of my omni shadow pipeline is correct (or at least, not too badly broken).
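To make that isolation concrete, here’s the kind of round-trip test I have in mind (a sketch: render a horizontal depth ramp, pack and immediately unpack it in the same shader with no cubemap in between, and visualize the error; the 1024.0 viewport width and the error scale are illustrative):

// Round-trip test (sketch). Assumes FloatToFixed / FixedToFloat from above
// are defined in this shader. If this renders black, the pack/unpack math
// is self-consistent at float precision and the junk must come from the
// 8-bit storage (or filtering) of the cubemap; if not, the math itself
// is suspect.
void main()
{
    float depth = gl_FragCoord.x / 1024.0;  // illustrative [0,1] ramp
    float error = abs( FixedToFloat( FloatToFixed( depth ) ) - depth );
    gl_FragColor = vec4( vec3( error * 100.0 ), 1.0 ); // amplified for visibility
}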
So, can anybody tell me what’s wrong with the 32-bit precision version? I’m a little baffled…