Using existing values in depth buffer for fog calculations

Hi

I recently noticed that the shadows I’m drawing (zfail stencil, via a transparent quad drawn over the whole viewport) are not being affected by fog calculations. No big surprise, since the quad is drawn in front of the scene.
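For reference, the darkening pass I’m doing looks roughly like this (a sketch of my setup; the 0.4 alpha is arbitrary, and the modelview/projection matrices are assumed to be identity so the quad fills the viewport):

    /* After the shadow volumes have been rendered into the stencil
       buffer with zfail, blend a translucent black quad over every
       pixel whose stencil value is non-zero. */
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_NOTEQUAL, 0, ~0u);   /* only where a volume covers the pixel */
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDisable(GL_DEPTH_TEST);             /* the quad sits in front of the scene */

    glColor4f(0.0f, 0.0f, 0.0f, 0.4f);    /* 0.4 = shadow darkness, arbitrary */
    glBegin(GL_QUADS);                    /* fullscreen quad in clip space */
    glVertex2f(-1.0f, -1.0f);
    glVertex2f( 1.0f, -1.0f);
    glVertex2f( 1.0f,  1.0f);
    glVertex2f(-1.0f,  1.0f);
    glEnd();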

However, this looks funny when shadow-casting objects are partially in fog and their shadows come out just as dark as the shadows of unfogged objects in the foreground.

So what I was thinking is that there might be a way to have fog applied to each fragment of the shadow quad not via the fragment’s own depth, but via the value already in the depth buffer. E.g., shadowed pixels up close wouldn’t be fogged, but shadowed pixels far away would be.

In other words, what I’m looking for is a glEnable call or some other mechanism by which the depth component of each subsequent fragment is set to whatever value is already in the depth buffer. Sort of like how glPolygonOffset nudges the fragment’s depth value, except just taking whatever’s already there instead.

Is this possible? Can it be done in the normal pipeline, or do I have to write a fragment program? I’m hoping for the former, since I’m on OS X and fragment programming is a little backwards there.

Thanks in advance,

You would have to copy the depth buffer to a texture, and then read out of that texture during fragment processing to get the scene depth.
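Something along these lines, assuming ARB_depth_texture support (a sketch; width and height stand for your viewport size, and older cards may also require power-of-two texture dimensions):

    /* One-time setup: a depth texture matching the viewport. */
    GLuint depthTex;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height,
                 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

    /* Per frame, after the opaque scene is drawn and before the
       shadow pass: snapshot the depth buffer into the texture. */
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);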

However, the “transparent darkening quad” method produces very inaccurate shadows, because it doesn’t prevent geometry from being vertex-lit by the light first; a surface facing away from the light is already dark, and if it also falls inside the shadow volume the quad darkens it a second time, when it should only be darkened once.

The right way to do it is to render the light contribution of your major light using stencil reject. Yes, this means you need to multi-pass your geometry if you have more than one light, although all lights that don’t need to cast shadows can be merged into a single pass.
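In outline, the per-frame structure looks something like this (a sketch; setAmbientLightingOnly, setMajorLightOnly, renderScene, and renderShadowVolumes are hypothetical stand-ins for your own code):

    /* Pass 1: ambient contribution only; this also fills the depth buffer. */
    glDepthFunc(GL_LESS);
    setAmbientLightingOnly();    /* hypothetical helper */
    renderScene();               /* hypothetical helper */

    /* Pass 2: shadow volumes into the stencil buffer (zfail),
       with color and depth writes disabled. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glEnable(GL_STENCIL_TEST);
    /* ... zfail increment/decrement setup ... */
    renderShadowVolumes();       /* hypothetical helper */

    /* Pass 3: re-render the geometry with the major light enabled,
       added in only where the stencil says "not in shadow". */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDepthFunc(GL_EQUAL);       /* reuse the depth laid down in pass 1 */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE); /* add the light's contribution */
    setMajorLightOnly();         /* hypothetical helper */
    renderScene();
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);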

Yeah, I’m aware of how inaccurate the darkening approach to shadowing is compared to rendering the scene twice, once with ambient light only and a second time with full lighting enabled.

Unfortunately, the geometry throughput of such an approach seems prohibitive to me – though I think it’s worth giving it a shot, anyway.

What I’m wondering now is whether fragment programming might help. I gather, from looking at demo code (bump mapping with extrusion), that fragment programs have direct access to the depth buffer. So, what if I inserted a program (forgive me, I’ve never actually written a fragment program before…) that runs on each fragment that passes the stencil test? It would multiply that fragment’s alpha component by one minus the depth buffer value at that fragment’s position, and then the fragment would be blended as usual.

While this wouldn’t be formally correct, it seems to me it would cause the shadows to fade out as they approach the far plane (which is synced to the fog’s max distance), and as such would at least prevent the problem from being visible.
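Roughly what I have in mind, written as an ARB_fragment_program string in C (an untested sketch: I’m assuming texture unit 0 holds a copy of the depth buffer as suggested above, and that program.env[0] is set to {1/width, 1/height, 0, 0} with glProgramEnvParameter4fARB so window positions map to texture coordinates):

    static const char *fadeShadowFP =
        "!!ARBfp1.0\n"
        "TEMP uv, depth, col;\n"
        "MUL uv, fragment.position, program.env[0];\n" /* window pos -> texcoord */
        "TEX depth, uv, texture[0], 2D;\n"             /* scene depth at this pixel */
        "SUB depth.x, 1.0, depth.x;\n"                 /* 1 - depth */
        "MOV col, fragment.color;\n"
        "MUL col.w, col.w, depth.x;\n"                 /* fade the quad's alpha */
        "MOV result.color, col;\n"
        "END\n";

The quad’s fragments would then blend exactly as before, just with less alpha wherever the scene depth at that pixel is large.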

Does this sound feasible? I formally acknowledge ignorance of fragment programming.

What I’m wondering now is if fragment programming might help.
It’ll only help on cards that actually support fragment programs (the last two generations or so: GeForce FX and up on the NVIDIA side, and I’m not sure which ATI cards - Radeon 8500 and up, perhaps?).

The technique you are currently using also can’t handle multiple lights correctly IIRC.

Is there any other method to read the depth value in a fragment shader than from a previously created depth texture? gl_FBDepth does not work on my NVIDIA FX.