View Full Version : glPolygonOffset() vs glDepthRange()?



ugluk
01-08-2011, 08:32 PM
I (fortunately) find myself in a situation where I can use either of these to prevent z-fighting. For glPolygonOffset() I'd have to do:

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(...);
glDisable(GL_POLYGON_OFFSET_FILL);

but for glDepthRange() I need to do:

glDepthRange(x, 1); // x > 0
glDepthRange(0, 1);

That's 2 calls vs. 3 calls so I went with glDepthRange(), but maybe my thinking is wrong? Which is usually better to use? (I know the bench has final say.)

mhagain
01-09-2011, 04:42 AM
It shouldn't matter, but I'd advise against polygon offset: its behaviour is allowed by the spec and documentation to be implementation-dependent, so a set of values that works on one 3D card won't necessarily work on another.

In both cases you should also be aware that the depth buffer is non-linear so that adjustments to polygon depth at the near end of it will be different to adjustments at the far end. You can implement a linear depth buffer using a vertex shader but it kinda sucks really bad as you lose an awful lot of depth precision closer to near, which causes very visible rendering artefacts.

Another approach is to adjust your projection matrix a little.

ugluk
01-09-2011, 05:15 AM
Even with the nonlinear 1/z depth buffer, the depths I'm offsetting have already been mapped to the [0, 1] interval, so anything I add or subtract — as long as it's not too small and not right at the near or far plane depth — ought to have an effect, and that's all I want for now.

I don't see how a linear depth buffer could be implemented in a vertex shader though. I've seen a solution that renders depth into a texture and uses that depth texture in the fragment shader. Maybe my memory of this is wrong.

Anyway, thanks a lot!

mhagain
01-09-2011, 06:10 AM
There's HLSL code for it here (http://www.mvps.org/directx/articles/linear_z/linearz.htm) which should easily translate to GLSL.

ugluk
01-09-2011, 06:21 AM
Maybe I'm not informed correctly, but doesn't GL rely on the transformed z being a reciprocal, so that it can interpolate vertex attributes correctly? Will GL still sample my textures correctly (there might be other attributes too) if I linearize z in my vertex shader? I thought perspective-correct attribute interpolation was the whole point of the inverse z. The books say GL interpolates the inverse z, then inverts the interpolated value and multiplies it by the interpolated attribute-divided-by-z. So if I have an interpolated attribute c'/z':

with an interpolated z' (instead of 1/z'), which gets inverted before the multiplication, I might end up with c'/(z')^2 instead of c'; normally the inverted interpolated 1/z' would cancel the z' in the denominator.

ugluk
01-09-2011, 07:12 AM
To answer myself: apparently it will. GL doesn't use z for interpolation at all, only w; z is used only for depth. Thanks again.