James W. Walker

07-18-2006, 05:01 PM

The man page for glPolygonOffset says "The value of the offset is factor*DZ + r*units, where DZ is a measurement of the change in depth relative to the screen area of the polygon, and r is the smallest value that is guaranteed to produce a resolvable offset for a given implementation." If the depth buffer has B bits, and the values go from 0 to 2^B - 1, is there any reason why r should be anything other than 1?

What I want to do is offset polygons by a specific distance in world coordinates. If B is the bit depth of the depth buffer, n is the distance from the camera to the near plane, f is the distance from the camera to the far plane, and z is the distance from the camera to the polygons, then the depth value should be

D(z) = (2^B - 1) * f * (z - n) / (z * (f - n)).

Unless I've made a mistake, it follows that the difference in depth buffer values when the world distance varies from z to z + delta should be

D(z+delta) - D(z) = (2^B - 1) * (f/(f-n)) * (n * delta) / (z * (z + delta)).

So, I thought that's what I should pass for the second parameter of glPolygonOffset. I get reasonable-looking results when I use that formula with my GeForce card with a 24-bit depth buffer. But when I use the Apple software renderer, which reports a 32-bit depth buffer, I have to set B = 15, not B = 32, to get visually the same results. Why?