GLSL equivalent of glPolygonOffset

I’ve seen many variants of GLSL shader samples that simulate fixed-function-pipeline rendering, but none that cover polygon offset.

According to the spec/manual, polygon offset is a fragment operation, not something that involves vertex displacement. But many vendor implementations seemed to do some vertex magic (leading to inconsistencies between vendors), I guess because it was computationally cheaper or the hardware didn’t support it otherwise. NV drivers, for example, seem to apply the offset to vertices, as certain slope settings can cause gaps to appear between triangles:


(poly offset used here for shadow bias)

So, as I understand it, there is no good way to reproduce glPolygonOffset as outlined in the spec. Emulating it by modifying the fragment depth (which is what the glPolygonOffset spec/manual describes) is said to disable early-Z optimizations when done in the fragment shader, though vendors might apply some under-the-hood trickery to keep early-Z functional for glPolygonOffset itself.
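The literal emulation of the spec’s formula (o = m × factor + r × units) would be a fragment shader along these lines. This is just an untested sketch: u_factor and u_units are hypothetical uniforms mirroring the glPolygonOffset arguments, and u_r would hold the implementation’s minimum resolvable depth difference. Writing gl_FragDepth here is exactly what kills early-Z:

uniform float u_factor; // glPolygonOffset "factor" (hypothetical uniform)
uniform float u_units;  // glPolygonOffset "units" (hypothetical uniform)
uniform float u_r;      // minimum resolvable depth difference of this implementation

void main()
{
    // m approximates the maximum window-space depth slope of the polygon,
    // as in the spec's definition of the offset
    float m = max(abs(dFdx(gl_FragCoord.z)), abs(dFdy(gl_FragCoord.z)));
    gl_FragDepth = clamp(gl_FragCoord.z + u_factor * m + u_units * u_r, 0.0, 1.0);
    // ... normal shading and color output ...
}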

What’s the best way to achieve something similar to glPolygonOffset with GLSL?

Why don’t you want to use glPolygonOffset?

All Nvidia hardware performs polygon offset at the fragment level during rasterization. It does not apply the offset to the vertices because that would result in inconsistent offsets after projection, and that’s not how polygon offset is supposed to work. Both polygon offset AND modifying the depth in GLSL will disable hierarchical Z-culling unless you’re able to use the ARB_conservative_depth extension. The only alternative you have is to modify the per-vertex depths using something like a projection matrix hack and accept the variable offset at different depths. For an example, see [b]Game Programming Gems 1[/b], Section 4.1 or [b]Mathematics for 3D Game Programming and Computer Graphics, 3rd ed.[/b], Section 9.1.
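If you can require the extension, the conservative-depth route looks roughly like this (a sketch; u_offset is a hypothetical uniform holding a positive window-space bias). Redeclaring gl_FragDepth with the depth_greater layout promises the driver that you only ever push fragments further away, which is what lets it keep hierarchical Z-culling enabled:

#extension GL_ARB_conservative_depth : enable

// Promise: gl_FragDepth will only ever be >= the fixed-function depth,
// so the driver can keep hierarchical/early Z-culling on.
layout (depth_greater) out float gl_FragDepth;

uniform float u_offset; // positive window-space depth bias (hypothetical)

void main()
{
    gl_FragDepth = gl_FragCoord.z + u_offset;
    // ... color output as usual ...
}

And the projection matrix hack from those references boils down to scaling one term of the projection matrix in the vertex shader. Again a sketch, not the books’ exact derivation: u_epsilon is a hypothetical small tuning value whose sign controls the offset direction, and the books show how to pick it for a desired eye-space displacement:

uniform float u_epsilon; // small tuning value (hypothetical)

void main()
{
    mat4 proj = gl_ProjectionMatrix;
    proj[2][2] *= 1.0 + u_epsilon; // scale the term that maps eye-space z to clip z
    gl_Position = proj * gl_ModelViewMatrix * gl_Vertex;
}

Because the depth mapping is nonlinear, a constant tweak here corresponds to different eye-space offsets at different depths, which is the trade-off mentioned above.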

Aah, thanks Eric, I now get it, and thanks for the links.

Because I was under the erroneous impression that ftransform() was required for polygon offset to work. I must have had a bug in my software that led me to believe this; I worked around it like this:

// My old workaround (vert is the incoming vertex attribute): nudging the
// eye-space w before projecting amounts to a constant clip-space depth bias
// with a standard perspective matrix
vec4 v = gl_ModelViewMatrix * vert;
v.w -= 0.01;
gl_Position = gl_ProjectionMatrix * v;

But yes, now that I tried without that and with regular poly offset… it just works; it never was a problem! :slight_smile:

Getting back to my screenshot: now I’m guessing that, due to the effect of polygon offset (particularly the slope parameter), it is digging a trench into the z-buffer (because the polygons are almost completely perpendicular to the light direction), rather than causing a gap between triangles? Starting to make sense now.

That also reminds me: why did glPolygonOffset sometimes behave differently on different hardware? I distinctly remember this being a widespread issue in the old days (for example, flickering bullet-hole decals in Half-Life). I’m guessing it had to do with hardcoded glPolygonOffset parameters that worked for, say, 24-bit z-buffers but not 16-bit ones? The spec/manual clearly implies the implementation should take care of that. Maybe immature drivers, or maybe it’s because nowadays z-buffers are at least 24-bit, so z-fighting in general occurs less?

According to http://www.opengl.org/registry/doc/glspec42.core.20110808.pdf, p. 187:

The minimum resolvable difference r is an implementation-dependent parameter that depends on the depth buffer representation. It is the smallest difference in window coordinate z values that is guaranteed to remain distinct throughout polygon rasterization and in the depth buffer.

so I guess it is because the value of r is implementation-dependent and you have no control over it.
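To put rough numbers on that, assuming a fixed-point depth buffer where r is at least one depth-buffer step, i.e. 1/(2^n − 1):

16-bit: r ≈ 1 / 65535 ≈ 1.5e-5
24-bit: r ≈ 1 / 16777215 ≈ 6.0e-8

So the same “units” argument to glPolygonOffset would move the depth roughly 256 times further on a 16-bit buffer than on a 24-bit one, which would explain hardcoded parameters behaving so differently across hardware.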