Precision loss when storing linearized depth value in 32 bit color texture?

Hello,

I have a shader which computes linearized depth values from the actual depth buffer contents
and writes them to a standard 32-bit RGBA color texture (so 8 bits per channel).

float ZDepthBuffer = texture(DepthBufferTexture, TextureCoord.st).x;
float ZLinear = GetZLinearized(ZDepthBuffer);
FinalColor = vec4(ZLinear, ZLinear, ZLinear, 1.0);
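For context, GetZLinearized presumably inverts the standard perspective depth mapping. Here is a minimal CPU-side sketch in Python of what that inversion looks like, assuming the usual OpenGL convention (window-space depth in [0, 1]) and made-up near/far planes:

```python
# Round-trip check for depth linearization with a standard perspective
# projection. NEAR/FAR are assumed values, not from the original post.
NEAR, FAR = 0.1, 100.0

def depth_to_window(z_eye):
    """Forward mapping: positive eye-space distance -> window-space depth."""
    z_ndc = (FAR + NEAR) / (FAR - NEAR) - (2.0 * FAR * NEAR) / ((FAR - NEAR) * z_eye)
    return 0.5 * z_ndc + 0.5

def get_z_linearized(z_window):
    """Inverse mapping, i.e. what a GetZLinearized-style function computes."""
    z_ndc = 2.0 * z_window - 1.0
    return (2.0 * FAR * NEAR) / (FAR + NEAR - z_ndc * (FAR - NEAR))

# A point 10 units in front of the camera should survive the round trip.
print(round(get_z_linearized(depth_to_window(10.0)), 6))
```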

It was first meant just for visualization purposes. But later, the generated linear data
in this 32-bit RGBA texture was looked up in other shaders to do some reconstruction
(e.g. reconstructing view-space depth for the fragments).

I have the feeling this is not a good idea, because when ZLinear is stored in one
channel of the RGBA texture, a lot of float precision is lost, no?
There are only 8 bits per channel, after all.

So presumably some kind of quantization happens, and the reconstructed results
are probably very imprecise. What do you think?
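The quantization can be simulated on the CPU. A small sketch (my own illustration, not from the original shader) of what writing to and reading back an 8-bit UNORM channel does to a value:

```python
# Simulate storing a [0, 1] value in one 8-bit UNORM channel: it snaps to
# one of 256 levels, so reading it back can be off by up to half a step.
def quantize8(x):
    """Write x to an 8-bit channel and read it back."""
    return round(x * 255.0) / 255.0

z_linear = 0.123456            # some normalized linear depth
z_read = quantize8(z_linear)
print(abs(z_read - z_linear))  # error up to 1/510, roughly 0.002
```

Scaled up to a depth range of, say, 100 units, a worst-case error of ~0.002 of the range is on the order of 0.2 units, which is usually far too coarse for position reconstruction.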

If my assumptions are correct, what would be the best texture format to store
these linearized depth values without losing precision?

Thanks!

8 bits = 256 distinct levels. That is not as few as it may sound, but it is not much for depth.

Try GL_DEPTH_COMPONENT (or, almost certainly better, GL_DEPTH_COMPONENT24, since it will match your depth buffer exactly). You’ll then get a single-component texture, so all the bits can be used to store the depth.

If you do it through FBOs, here is a tutorial.
You can also have a look at this one.

Hi RealtimeSlave. What you’re saying makes sense. The first question that comes to mind is: What kind of Z range and precision do you need to support? That is, are the results you get from reconstructing 8-bit depth “good enough” (hard to believe with 256 steps)? If so, I guess there’s no reason to change things.

If not though, you might consider storing reversed window-space depth or plain eye-space depth in a float texture (R32F, DEPTH_COMPONENT32F, or DEPTH32F_STENCIL8). Same space/bandwidth as you’re using now, with much better accuracy. Lots of articles out there on storing reversed depth and its benefits (e.g. link, link, link, link). Log depth buffers also get some mention.
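The reason reversed depth pairs so well with float storage can be shown numerically. A sketch (my own illustration) that measures the spacing of 32-bit floats by bumping the bit pattern:

```python
# Spacing (ULP) of a 32-bit float at a given value, computed by
# incrementing the raw bit pattern by one.
import struct

def ulp32(x):
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - x

# Conventional depth crowds distant geometry near 1.0, where float steps
# are coarsest; reversed depth maps it near 0.0, where steps are far finer.
print(ulp32(0.999))   # step size near the far plane (conventional)
print(ulp32(0.001))   # step size near the far plane (reversed)
```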

As a lower-precision fallback, you could just capture the window-space depth in a standard 24-bit integer format (DEPTH_COMPONENT24 / DEPTH24_STENCIL8) or a 32-bit integer format (DEPTH_COMPONENT32). That would use the same bandwidth as what you have now and still produce a much more accurate depth reconstruction.
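To put the 24-bit fallback in perspective, a quick comparison (my own arithmetic) of UNORM step sizes at different bit depths:

```python
# Smallest representable step of UNORM quantization at a given bit depth.
# 24-bit integer depth is about 65,000x finer than a single 8-bit channel.
def unorm_step(bits):
    return 1.0 / (2 ** bits - 1)

print(unorm_step(8))                    # one 8-bit step
print(unorm_step(24))                   # one 24-bit step
print(unorm_step(8) / unorm_step(24))   # ratio between the two
```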

Thanks a lot Silence and Photon! Your answers confirm what I expected. I will rework some shaders to get precise reconstruction.

You can store the depth value using all 32 bits by packing it across the four 8-bit channels. You can refer to this article on packing and unpacking depth data: http://www.codeproject.com/Articles/822380/Shadow-Mapping-with-Android-OpenGL-ES?msg=5036073
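The packing trick mentioned above splits the value into base-256 digits, one per channel (in a shader this is typically done with fract() and a dot product). A simplified CPU-side sketch, assuming a value in [0, 1) and ignoring the 255-vs-256 normalization details of real UNORM channels:

```python
# Pack a [0, 1) float into four 8-bit channels and recombine it, to show
# the precision recovered compared to a single 8-bit channel.
def pack_to_rgba8(depth):
    """Split depth into four base-256 digits (one per RGBA channel)."""
    channels = []
    for _ in range(4):
        depth *= 256.0
        digit = int(depth)
        channels.append(digit)
        depth -= digit
    return channels

def unpack_from_rgba8(channels):
    """Recombine the four channel digits into a single float."""
    value, scale = 0.0, 1.0 / 256.0
    for c in channels:
        value += c * scale
        scale /= 256.0
    return value

d = 0.123456789
err = abs(unpack_from_rgba8(pack_to_rgba8(d)) - d)
print(err < 1e-9)  # far below the ~0.002 error of one 8-bit channel
```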