I am playing around with deferred rendering/shading (whatever).
Currently I am using three RGBA16F render targets for my G-buffer. It works, but it is not really fast (GF6600GT). As an optimization I thought about using standard RGBA8 textures instead. I would like to store two 16-bit floats for the normals (x and y, and calculate z in the shader). Unfortunately I don't know how to achieve this (the packing, not the calculation of z). Any help is highly welcome.
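One common way to do that packing (not from this thread, just a sketch): treat each normal component as 16-bit fixed point and split it across two 8-bit channels. Here is a CPU-side Python emulation of the arithmetic to check the round trip; in a shader you would do the same with floor()/fract() and recombine the two channels on read-back.

```python
def pack16(v):
    """Encode v in [-1, 1] as two 8-bit channel values (hi, lo):
    16-bit fixed point split across two RGBA8 channels."""
    u = round((v * 0.5 + 0.5) * 65535.0)   # [-1, 1] -> integer in [0, 65535]
    u = min(max(u, 0), 65535)
    return u // 256, u % 256               # high byte, low byte

def unpack16(hi, lo):
    """Rebuild the [-1, 1] value from the two channel bytes."""
    return ((hi * 256 + lo) / 65535.0) * 2.0 - 1.0

# Round trip: the error is at most half a 16-bit step (~1.5e-5),
# far better than the ~2e-3 step of a single 8-bit channel.
hi, lo = pack16(0.3141)
value = unpack16(hi, lo)
```

As for the z you reconstruct in the shader: for view-space normals, which mostly face the camera, z = sqrt(1 - x*x - y*y) with a non-negative sign is the usual (approximate) choice.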
And while I am at it: any hints on how to reconstruct position from just the depth and the fragment position would be nice, too.
I think the two-component fp formats are a nice compromise, but RGBA8 reportedly suffers from a lack of quality.
Getting a world position from raster/window coordinates is just a matter of transforming by the inverted MVPX, X being the viewport and z-range transform tacked on at the end.
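A CPU-side sketch of that reconstruction in Python (no GL, just the matrix math). The projection parameters, viewport size, and test point below are made-up values for the demonstration; the modelview is assumed to be identity (so inverse MVP is just the inverse projection) and the depth range is the default glDepthRange(0, 1).

```python
import math

def invert4(M):
    """Invert a 4x4 matrix (row-major lists) by Gauss-Jordan elimination."""
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
         for i, row in enumerate(M)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [x / d for x in A[col]]
        for r in range(4):
            if r != col and A[r][col] != 0.0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[4:] for row in A]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def perspective(fovy_deg, aspect, zn, zf):
    """OpenGL-style perspective projection (like gluPerspective)."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (zf + zn) / (zn - zf), 2.0 * zf * zn / (zn - zf)],
            [0.0, 0.0, -1.0, 0.0]]

def unproject(win_x, win_y, depth, inv_mvp, width, height):
    """Window coords + depth buffer value -> world position:
    undo viewport/depth-range to get NDC, then multiply by the
    inverse MVP and divide by w."""
    ndc = [win_x / width * 2.0 - 1.0,
           win_y / height * 2.0 - 1.0,
           depth * 2.0 - 1.0,            # default glDepthRange(0, 1)
           1.0]
    h = mat_vec(inv_mvp, ndc)
    return [h[0] / h[3], h[1] / h[3], h[2] / h[3]]

# Forward: project a known world point, then reconstruct it.
mvp = perspective(90.0, 800.0 / 600.0, 0.1, 100.0)  # identity modelview
world = [1.0, 2.0, -5.0, 1.0]
clip = mat_vec(mvp, world)
ndc = [c / clip[3] for c in clip[:3]]
win = [(ndc[0] * 0.5 + 0.5) * 800.0,
       (ndc[1] * 0.5 + 0.5) * 600.0,
       ndc[2] * 0.5 + 0.5]                          # 800x600 viewport

recovered = unproject(win[0], win[1], win[2], invert4(mvp), 800.0, 600.0)
```

In a deferred shader you would build ndc from gl_FragCoord and the sampled depth instead of the forward projection above.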
At the moment I am using the following code to pack two shadow buffers into one texture, using red/green for one buffer and blue/alpha for the second. I don't think it's mathematically sound, but it does seem to provide more precision than 8 bits alone, at least.
— in vertex shader, varying vec2 depth —
depth.x = gl_Position.z;
depth.y = 256.0 / depthRange.z; //depthRange.z is the 'range of the shadow buffer'
— in fragment shader, varying vec2 depth —
float fDepth = depth.x * depth.y; // in range 0.0 -> 256.0
gl_FragColor = vec4(floor(fDepth) / 256.0, fract(fDepth), 1.0, 1.0);
and then you can extract the depth again into a 0.0 -> 1.0 value using
float unpack(vec2 value) {
    // unpacks a packed shadow map value into a 0.0 -> 1.0 depth
    return ((value.x * 256.0) + value.y) / 256.0;
}
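For what it's worth, that pair does round-trip exactly at full precision. Here is a CPU-side Python emulation of the same arithmetic (no GL, made-up z and range values):

```python
import math

def pack(z, shadow_range):
    """Emulates the shader: fDepth = gl_Position.z * (256.0 / range),
    then split into (floor(fDepth) / 256.0, fract(fDepth))."""
    f_depth = z * (256.0 / shadow_range)
    return math.floor(f_depth) / 256.0, f_depth - math.floor(f_depth)

def unpack(x, y):
    """Same as the GLSL unpack(): rebuilds z / range in [0, 1]."""
    return (x * 256.0 + y) / 256.0

z, shadow_range = 12.34, 50.0
x, y = pack(z, shadow_range)
# unpack(x, y) equals z / shadow_range up to floating-point noise
```

The catch is the actual 8-bit quantization: a channel stores 255 discrete steps, so writing floor(fDepth) / 256.0 into it and multiplying by 256.0 on read no longer lines up exactly, which is probably the "not mathematically sound" part. Using 255-based constants throughout would avoid that mismatch.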
There is also some code for packing one floating-point value into four 8-bit channels in GPU Gems (NVIDIA gives that away for free these days) somewhere. It didn't look quite right to me, though.
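That snippet is presumably a variant of the base-256 trick: store successive base-256 "digits" of the value in the four channels. A plain Python version of the idea (not the book's code):

```python
import math

def pack_rgba(v):
    """Split v in [0, 1) into four base-256 digits (one per 8-bit channel)."""
    digits = []
    for _ in range(4):
        v *= 256.0
        d = math.floor(v)
        digits.append(int(d))
        v -= d
    return digits          # four integers in 0..255

def unpack_rgba(digits):
    """Rebuild the value: sum of digit_i * 256^-(i+1)."""
    return sum(d / 256.0 ** (i + 1) for i, d in enumerate(digits))

# Truncation error of the round trip is below 256^-4 (~2.3e-10).
```

Shader versions of this typically use fract() of scaled copies and a dot product with vec4(1, 1/256, 1/65536, ...) on unpack, and often 255-based constants to match how the channels are actually quantized, which may be where the GPU Gems listing looked off.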