Packing floats to various bit depth targets

I’m trying to store my normals and depth in one RGBA16F render target. I want 32 bits of precision for my depth and will use the rest for my normals.

The X and Y normal values will be stored in the red channel using the following code, and I’ll reconstruct Z in the shader using the cross product.


//Thanks Pavel Tumik! @paveltumik for the original code in comments

//pack: f1=(f1+1)*0.5; f2=(f2+1)*0.5; res=floor(f1*100)+f2*0.8;
inline float PackFloat16bit2(float2 src)
{
    // x is remapped from [-1,1] to [0,1] and quantized into the integer
    // part (0..100); y is remapped from [-1,1] to [0,0.8] so the fractional
    // part can never carry over into the integer part
    return floor((src.x+1)*0.5f * 100.0f)+((src.y+1)*0.4f);
}

//unpack: f2=frac(res); f1=(res-f2)/100; f1=(f1-0.5)*2; f2=(f2-0.4)*2.5;
inline float2 UnPackFloat16bit2(float src)
{
    float2 o;
    float fFrac = frac(src);
    o.y = (fFrac-0.4f)*2.5f;            // undo the [0,0.8] remap
    o.x = ((src-fFrac)/100.0f-0.5f)*2;  // undo the quantized [0,1] remap
    return o;
}
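In case it’s useful, here’s the same pair translated to GLSL (my own untested translation of the code above; the function names are just mine):


// untested GLSL translation of the HLSL pack/unpack above
float packFloat16bit2(vec2 src){
    // x -> integer part (0..100), y -> fractional part (0..0.8)
    return floor((src.x + 1.0) * 0.5 * 100.0) + (src.y + 1.0) * 0.4;
}

vec2 unpackFloat16bit2(float src){
    float f = fract(src);
    return vec2(((src - f) / 100.0 - 0.5) * 2.0,  // x
                (f - 0.4) * 2.5);                 // y
}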

That leaves me with the depth information, which I want to store in the green and blue channels. I haven’t been able to find any info on packing a 32-bit float into two 16F channels in a shader. Any help would be awesome :slight_smile:

First, I see no reason for you to use an RGBA16F texture; plain RGBA16 would be more convenient and more widely supported.

Next, depth is a float in the range [0,1]. Basically, you need to store the first 16 bits in one sub-channel and the second 16 bits in the other:


vec2 encode_depth(float depth){
    const float c = float(1 << 16);    // 2^16: one sub-channel's worth of steps
    float a = trunc(depth*c)*(1.0/c);  // high 16 bits, quantized back into [0,1]
    float b = fract(depth*c);          // low 16 bits as a [0,1) remainder
    return vec2(a, b);
}
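And the matching decode, reassembling the two halves (a sketch following the same convention; I haven’t tested this exact snippet):


// sketch: inverse of encode_depth above
float decode_depth(vec2 enc){
    const float c = float(1 << 16);
    // the high part is already in [0,1]; the low part adds back
    // the remainder at 1/2^16 of its scale
    return enc.x + enc.y*(1.0/c);
}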

I’m trying to store my normals and depth in one RGBA16F render target. I want 32 bits of precision for my depth and will use the rest for my normals.

… why? Wouldn’t it make more sense to write your depth to a depth buffer and then render the normals to an RG16F? Or RG16. Or RG8, for that matter; normals don’t exactly need high precision.

First, I see no reason for you to use an RGBA16F texture; plain RGBA16 would be more convenient and more widely supported.

I tried changing my FBOs to RGBA16 and my frame rate plummets to one frame every 15 seconds when drawing a simple cube.

… why? Wouldn’t it make more sense to write your depth to a depth buffer and then render the normals to an RG16F? Or RG16. Or RG8, for that matter; normals don’t exactly need high precision.

I want to use linear depth for various effects, and the greater precision would help; I tried using the depth buffer and linearizing it and didn’t get the best results. 8 bits for the normals would be fine, but I need HDR on the rest of my attachments. I tried packing my X and Y normals into one 16F channel, but I couldn’t do it without losing precision; I could only get 7 bits of precision for either the X or Y normal, which is definitely not acceptable. So I ended up placing the X normal in the red channel, Y in blue, and depth in green and alpha. I later reconstruct the Z normal using a spheremap transform. If anybody has another recommendation, I’m all ears.
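For anyone who lands here later, the spheremap transform I mean is the standard one (e.g. method #4 in Aras Pranckevičius’s “Compact Normal Storage for small G-Buffers” survey). This is a sketch of that general technique, not necessarily my exact code:


// sketch of the standard spheremap transform, not necessarily the exact code above
// encode a unit, view-space normal into two [0,1] values
vec2 encodeNormalSpheremap(vec3 n){
    float p = sqrt(n.z * 8.0 + 8.0);
    return n.xy / p + 0.5;
}

// decode back to a unit normal, reconstructing z
vec3 decodeNormalSpheremap(vec2 enc){
    vec2 fenc = enc * 4.0 - 2.0;
    float f = dot(fenc, fenc);
    float g = sqrt(1.0 - f / 4.0);
    return vec3(fenc * g, 1.0 - f / 2.0);
}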
