24-bit floating-point depth buffer

I would like to see support for a 24-bit floating-point depth buffer. It would be especially useful when combined with an inverted depth buffer using normalized device coordinate z values from 0 to 1, as set by glClipControl(…, GL_ZERO_TO_ONE). The depth values can be obtained from a 32-bit IEEE float by extracting the lower seven exponent bits and the upper seventeen mantissa bits; for values in [0, 1] the sign bit and the top exponent bit are always zero, so only low-order mantissa precision is lost. See "How to setup projectionMatrix/worldMatrix for use with inverted float depth buffer?" in OpenGL: Advanced Coding on the Khronos Forums.
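A minimal sketch of that extraction in C, assuming a finite depth value already in [0, 1]; the function name depth24f_encode is my own invention, not part of any real API:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: pack a 32-bit IEEE float depth in [0, 1]
   into the proposed 24-bit layout (7 exponent bits | 17 mantissa
   bits). The sign bit and the top exponent bit are 0 over this
   range, so dropping them loses nothing; only the low 6 mantissa
   bits are discarded. */
static uint32_t depth24f_encode(float d)
{
    uint32_t bits;
    memcpy(&bits, &d, sizeof bits);           /* s|eeeeeeee|23 x m     */
    uint32_t exp7  = (bits >> 23) & 0x7Fu;    /* low 7 exponent bits   */
    uint32_t man17 = (bits >> 6)  & 0x1FFFFu; /* high 17 mantissa bits */
    return (exp7 << 17) | man17;              /* 24-bit result         */
}
```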

Other than saving 8 bits per pixel (and the accompanying bandwidth), I see nothing to gain by this. Maybe it makes a packed float-depth/stencil format more economical, since 24-bit float depth plus 8-bit stencil would fit in 32 bits, where GL_DEPTH32F_STENCIL8 typically occupies 64.

Also, I rather doubt that hardware can pull off this odd form of floating-point math. Certainly, the Mantle guide suggests that GCN-based hardware can't do it, since it only offers 16-bit normalized or 32-bit float depth formats.

I don’t see this happening.

As Alfonse suggests, the hardware would have to support it; otherwise the driver would punt you back to a software-emulated mode, and because it's the depth buffer, that would mean software emulation of the per-fragment part of the pipeline. In other words, 1 frame per second.

There’s not much utility in that.

Secondly, this is the kind of thing that OpenGL explicitly does not specify anyway. The format and bit-depth of the framebuffer have always been left to the windowing system, so it's highly unlikely that any hypothetical future OpenGL specification would have anything to say about it.

The hardware may need modification to implement my suggestion, but it may be as simple as a microcode change. There are no odd floating-point math operations involved. Depth comparisons using the resulting 24-bit values would give identical results whether the values were generated by the current GL pipeline or by my suggested method, since non-negative IEEE floats order the same way as their bit patterns and truncation preserves that ordering. Reconstructing a 32-bit floating-point value is as trivial as creating the 24-bit value.
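For illustration, here is the matching reconstruction under the same assumptions (again, depth24f_decode is a hypothetical name): expanding back to 32 bits is just zero-filling the discarded bits, and plain integer comparison of the 24-bit values matches float comparison up to ties in the truncated bits.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical inverse of the encode step: zero-fill the dropped
   sign bit, top exponent bit, and low 6 mantissa bits. */
static float depth24f_decode(uint32_t d24)
{
    uint32_t exp7  = (d24 >> 17) & 0x7Fu;    /* 7 exponent bits  */
    uint32_t man17 =  d24        & 0x1FFFFu; /* 17 mantissa bits */
    uint32_t bits  = (exp7 << 23) | (man17 << 6);
    float d;
    memcpy(&d, &bits, sizeof d);
    return d;
}
```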

The advantage is an optimal distribution of depth values: it could support a near plane at one millimeter and a far plane at the scale of the observable universe, with the same rendering performance as a 24-bit fixed-point depth buffer. Depth accuracy should be equal to or better than one part in 2^17, since seventeen mantissa bits are retained.
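A back-of-envelope check of those numbers (the figures are my own assumptions: a 1 mm near plane and an observable-universe-scale far plane of roughly 8.8e26 m):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double near_m = 1e-3;    /* 1 mm near plane                      */
    double far_m  = 8.8e26;  /* ~diameter of the observable universe */

    /* Octaves of depth range needed, vs. the ~126 octaves a 7-bit
       exponent covers for floats in (0, 1].                         */
    printf("octaves needed: %.1f of ~126 available\n",
           log2(far_m / near_m));

    /* 17 retained mantissa bits bound the relative quantization
       error at one part in 2^17.                                    */
    printf("relative precision: 1 part in %.0f\n", exp2(17.0));
    return 0;
}
```

This prints roughly 99.5 octaves needed, comfortably inside the range the 7-bit exponent can represent.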

I agree, it must be supported in the hardware to be useful. The windowing system may restrict the bit-depth of an on-screen framebuffer, but I see no evidence that the depth format is limited in any way.