Guys,
Trying to pack a float into a GL_RGB texture (I'm doing this because the iPhone 3GS does not support rendering to a depth texture).
I'm aiming for 24 bits of precision, creating the texture as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 320, 480, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
Packing with the following fragment shader:
precision highp float;

varying vec4 position;

void main(void)
{
    float normalizedDistance = position.z / position.w;
    normalizedDistance = (normalizedDistance + 1.0) / 2.0;

    const vec3 multCoef = vec3(1.0, 255.0, 255.0 * 255.0);
    vec3 packedFloat = multCoef * normalizedDistance;
    gl_FragColor = vec4(fract(packedFloat), 1.0);
}
Where position is the clip-space position of the vertex from the light's POV.
Unpacking as follows:
float unpack(vec3 packedZValue)
{
    const vec3 multCoef = vec3(1.0, 1.0/255.0, 1.0/(255.0 * 255.0));
    return dot(packedZValue, multCoef);
}

float getShadowFactor(vec3 lightZ)
{
    vec3 packedZValue = texture2D(s_shadowpMap, lightZ.st).rgb;
    float unpackedZValue = unpack(packedZValue);
    return float(unpackedZValue > lightZ.z);
}
void main(void)
{
    [...]
    vec3 lightZ = lightPOVPosition.xyz / lightPOVPosition.w;
    lightZ = (lightZ + 1.0) / 2.0;
    float shadowFactor = getShadowFactor(lightZ);
    [...]
}
Where lightPOVPosition is the clip-space position of the vertex from the light's POV.
Unpacking is giving me garbage. It works OK if I skip the pack/unpack and store the depth in a single channel of the texture (apart from the inaccuracy).
I wonder if GL_RGB is actually 24 bits on the iPhone (can it be different if I set the screen rendering surface to kEAGLColorFormatRGB565, even though I'm rendering to an FBO?).
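One way to find out what the driver actually allocated is to query the channel depths while the FBO is bound; glGetIntegerv with GL_RED_BITS / GL_GREEN_BITS / GL_BLUE_BITS is part of ES 2.0 and reflects the currently bound framebuffer (the fragment below is a sketch, with `shadowFBO` standing in for the FBO handle):

```c
GLint r, g, b;
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);   /* shadowFBO: your FBO handle */
glGetIntegerv(GL_RED_BITS,   &r);
glGetIntegerv(GL_GREEN_BITS, &g);
glGetIntegerv(GL_BLUE_BITS,  &b);
printf("shadow FBO bits: R%d G%d B%d\n", r, g, b);  /* 5/6/5 instead of 8/8/8 would matter here */
```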
Is my packing/unpacking algorithm flawed?