View Full Version : HELP! Depth buffer setup for multiple passes

05-01-2001, 06:30 AM
In my application I need to do multiple passes. What is the correct setup for the depth-buffer to accomplish this, with correct hidden-surface elimination?
I'm using GL_LEQUAL in the first pass and GL_EQUAL in the following passes, enabling depth writes only on the first pass.
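For reference, that pass setup would look something like this (a minimal sketch; the draw_* helpers and the additive blend mode are placeholders for whatever your passes actually do):

```c
/* Pass 1: lay down depth with GL_LEQUAL, depth writes enabled. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);
draw_base_pass();            /* hypothetical: draws the base geometry */

/* Subsequent passes: require an exact depth match, writes masked out. */
glDepthFunc(GL_EQUAL);
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); /* e.g. additive blending for a light pass */
draw_detail_pass();          /* hypothetical: redraws the same geometry */
```

The GL_EQUAL trick only works if every pass rasterizes the triangles to bit-identical depth values, which is exactly the invariance question being asked here.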

It works well on a Radeon and on a TNT2, but it doesn't look right on the Microsoft generic GDI driver, and worse still on the Matrox driver for a G200 card.

In the Microsoft case, I found that using a dummy transparent texture in the first pass, with the same min/mag filters,
solved the problem: it seems this forces the driver to use the same interpolator, so the depth values come out identical.
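One way to build such a dummy texture (a sketch; the 1x1 white texel is an assumption -- with GL_MODULATE it leaves the fragment color unchanged, and the filters should match whatever the later passes use):

```c
/* Hypothetical 1x1 dummy texture bound during pass 1, so the driver
   takes the same (textured) rasterization path in every pass. */
static const GLubyte white[4] = { 255, 255, 255, 255 };
GLuint dummy_tex;
glGenTextures(1, &dummy_tex);
glBindTexture(GL_TEXTURE_2D, dummy_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, white);
```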

Is there a better approach than trusting the triangle interpolator to be well behaved?

Thanks in advance.

05-01-2001, 06:43 AM
Most hardware should give you z invariance in almost every configuration -- unless you are doing something strange with T&L in a subset of the passes.

An example would be mixing fixed-function T&L with vertex programs or matrix palette blending.

With software fallback, you never really know what you're going to get.

Hope this helps...

05-01-2001, 09:56 AM
Thanx Cass,

But I'm not doing anything weird between passes. I'm not even using hardware T&L or vertex programs.
I've fixed the Radeon problem by increasing the depth buffer from 16 to 32 bits. The Microsoft software implementation is still giving me garbage, but with the workaround I mentioned above it does fine, with some performance loss, of course.

*BUT*, is this the best way to do multiple passes?
I mean, GL_LEQUAL for the first pass and GL_EQUAL for all the other passes, with depth writes masked out?

I tried PolygonOffset before, but its parameters *ARE* implementation specific...
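For completeness, the polygon-offset route looks like this (a sketch; the -1.0f factor/units values are a common starting point, not portable constants, which is exactly the complaint above). Note that with an offset you keep GL_LEQUAL rather than switching to GL_EQUAL:

```c
/* Later passes: nudge depth values toward the viewer instead of
   relying on an exact GL_EQUAL match. How much offset one "unit"
   applies is implementation specific, per the spec. */
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_FALSE);
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);
draw_detail_pass();          /* hypothetical */
glDisable(GL_POLYGON_OFFSET_FILL);
```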