depth buffer precision

(Sorry, I know it’s a bit OT)

I have to deal with the problem that none of the ATI/NVIDIA cards I know of supports a 32-bit depth buffer (excluding the stencil buffer).

Are there any tricks to get higher precision at long view ranges with the available 24-bit depth buffer, other than adjusting the clipping planes? (Some quick numbers on that below.)
There is no general W-buffering implementation AFAIK, right?
And sorting geometry is out of the question too.
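
To put numbers on why the clipping planes dominate: the depth buffer stores a value proportional to 1/z, so the smallest resolvable eye-space step grows with z². A minimal sketch of the arithmetic (the helper name is just for illustration, not any API):

```c
#include <stdio.h>
#include <math.h>

/* Approximate smallest eye-space depth difference a b-bit depth buffer
   can resolve at eye depth z, given near/far planes n and f. Derived
   from the standard perspective mapping d(z) = f/(f-n) * (1 - n/z),
   which gives dz ~= z^2 * (f - n) / (f * n * 2^b). */
static double depth_resolution(double z, double n, double f, int bits)
{
    return z * z * (f - n) / (f * n * ldexp(1.0, bits));
}

int main(void)
{
    /* Example: near = 1, far = 10000, 24-bit buffer. Resolution
       degrades with the square of the distance from the eye. */
    printf("step at z = 100:  %g units\n",
           depth_resolution(100.0, 1.0, 10000.0, 24));
    printf("step at z = 5000: %g units\n",
           depth_resolution(5000.0, 1.0, 10000.0, 24));
    return 0;
}
```

Pulling the near plane from 0.1 out to 1.0 buys roughly a factor of ten everywhere, which is why it's always the first advice.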

Is it still true that you can run into depth buffer problems if you calculate your own projection matrix instead of using glFrustum?
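
For what it's worth, the matrix glFrustum loads is documented, so you can reproduce it exactly; trouble usually comes from a hand-built matrix whose third row differs slightly from GL's, since that row is what produces the depth values. A sketch of the equivalent matrix, column-major for glLoadMatrixf:

```c
/* Fill m with the same matrix glFrustum(l, r, b, t, n, f) would load.
   OpenGL matrices are column-major, so m[10] and m[14] are the two
   entries of the third row that determine depth precision. */
void frustum_matrix(float m[16],
                    float l, float r, float b, float t,
                    float n, float f)
{
    for (int i = 0; i < 16; ++i)
        m[i] = 0.0f;
    m[0]  =  2.0f * n / (r - l);
    m[5]  =  2.0f * n / (t - b);
    m[8]  =  (r + l) / (r - l);
    m[9]  =  (t + b) / (t - b);
    m[10] = -(f + n) / (f - n);
    m[11] = -1.0f;
    m[14] = -2.0f * f * n / (f - n);
}
```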

Anyway, I really don’t understand why we get things like full-precision fragment programs but are still stuck with the same 24-bit maximum depth buffer precision we had ten years ago!

If adjusting the near/far planes isn’t enough, you can always render two (or more) frustums per frame, the farther frustums first (see the sketch below). Note that this does not necessarily require sorting geometry. Hopefully you won’t have a lot of overdraw as long as you’re doing frustum culling. You might have to watch out for artifacts along the frustum boundaries, mostly due to lower precision near each partition’s far plane.
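
A minimal sketch of the idea with fixed-function GL, assuming a hypothetical draw_scene() that culls against the given eye-space range and draws whatever falls in it:

```c
#include <GL/glu.h>

/* Hypothetical helper: culls against the given eye-space depth range
   and draws everything inside it. */
void draw_scene(double near_z, double far_z);

/* Two depth partitions per frame, the farther one first. The depth
   buffer is cleared between partitions, so each partition gets the
   full 24 bits for its own near/far range. */
void render_partitioned(double aspect,
                        double near_z, double split_z, double far_z)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);

    /* Far partition. */
    glLoadIdentity();
    gluPerspective(60.0, aspect, split_z, far_z);
    draw_scene(split_z, far_z);

    /* Near partition, drawn over the far one with a fresh depth buffer. */
    glClear(GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluPerspective(60.0, aspect, near_z, split_z);
    draw_scene(near_z, split_z);
}
```

Overlapping the two ranges slightly around split_z can help hide the boundary artifacts mentioned above.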

I’d also be interested in a solution other than multiple frustums, if anyone knows of one.

Yes, this is something I considered too.
But it would cost too much fillrate, because it hinders z-occlusion culling: nearby occluders aren’t in the depth buffer yet when the far partition is drawn, so far geometry that should be hidden still gets shaded.