glPolygonOffset units



James W. Walker
07-18-2006, 04:01 PM
The man page for glPolygonOffset says "The value of the offset is factor*DZ + r*units, where DZ is a measurement of the change in depth relative to the screen area of the polygon, and r is the smallest value that is guaranteed to produce a resolvable offset for a given implementation." If the depth buffer has B bits, and the values go from 0 to 2^B - 1, is there any reason why r should be anything other than 1?

What I want to do is offset polygons by a specific distance in world coordinates. If B is the bit depth of the depth buffer, n is the distance from the camera to the near plane, f is the distance from the camera to the far plane, and z is the distance from the camera to the polygons, then the depth value should be

D(z) = (2^B - 1) * f * (z - n) / (z * (f - n)).

Unless I've made a mistake, it follows that the difference in depth buffer values when the world distance varies from z to z + delta should be

D(z+delta) - D(z) = (2^B - 1) * (f/(f-n)) * (n * delta) / (z * (z + delta)).

So, I thought that's what I should pass for the second parameter of glPolygonOffset. I get reasonable-looking results when I use that formula with my GeForce card with a 24-bit depth buffer. But when I use the Apple software renderer, which reports a 32-bit depth buffer, I have to set B = 15, not B = 32, to get visually the same results. Why?
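In case it helps to see it concretely, here is roughly what I am doing in code. This is only a sketch: the function and variable names are mine, I query GL_DEPTH_BITS for B (which, as the software-renderer result suggests, may not be what the offset machinery actually uses), and I pass a negative value to pull geometry toward the camera.

/* Sketch: convert a world-space offset delta, at eye distance z, into a
 * glPolygonOffset "units" value, assuming units are steps of a B-bit
 * fixed-point depth buffer (B taken from GL_DEPTH_BITS). */
#include <math.h>
#include <GL/gl.h>   /* <OpenGL/gl.h> on the Mac */

static GLfloat WorldOffsetToUnits( GLfloat n, GLfloat f,
                                   GLfloat z, GLfloat delta )
{
    GLint bits = 0;
    glGetIntegerv( GL_DEPTH_BITS, &bits );                 /* B */
    GLfloat steps = (GLfloat) (pow( 2.0, bits ) - 1.0);    /* 2^B - 1 */

    /* D(z+delta) - D(z) = (2^B - 1) * (f/(f-n)) * n*delta / (z*(z+delta)) */
    return steps * (f / (f - n)) * (n * delta) / (z * (z + delta));
}

/* usage: pull decal geometry toward the camera by 0.01 world units
 * at eye distance eyeDist:
 *   glEnable( GL_POLYGON_OFFSET_FILL );
 *   glPolygonOffset( 0.0f, -WorldOffsetToUnits( nearD, farD, eyeDist, 0.01f ) );
 */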

Madoc
07-19-2006, 02:23 AM
I've read somewhere that the behaviour of polygon offset is implementation dependent (not just the depth buffer precision), and you can't expect it to behave consistently.

I can't remember now exactly what it was, but polygon offset was causing us serious grief of another kind on some cards, so I ended up using fragment programs or (where those weren't available) modified projection matrices instead. You can't account for slope by modifying the projection matrix, but we didn't want that anyway, and it sounds like you don't either.
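In case it's useful, the depth-only variant we ended up with boils down to roughly the following. This is a sketch from memory, fixed-function style; epsilon is an NDC-space constant you have to tune, and as said above it gives a constant offset only, nothing slope-dependent. The idea is that nudging the projection-matrix element that generates clip-space z shifts every fragment's NDC depth by a constant, which is in the same spirit as glPolygonOffset with factor = 0.

/* Sketch: constant depth offset via a tweaked projection matrix. */
#include <GL/gl.h>

static void DrawWithDepthOffset( void (*drawGeometry)(void), GLfloat epsilon )
{
    GLfloat p[16];
    glGetFloatv( GL_PROJECTION_MATRIX, p );   /* column-major */

    /* ndc.z = -p[10] - p[14]/z_eye, so increasing p[10] by epsilon
     * decreases ndc.z by epsilon, i.e. pulls the geometry toward the eye. */
    p[10] += epsilon;

    glMatrixMode( GL_PROJECTION );
    glPushMatrix();
    glLoadMatrixf( p );
    glMatrixMode( GL_MODELVIEW );

    drawGeometry();     /* draw the offset geometry with the usual modelview */

    glMatrixMode( GL_PROJECTION );
    glPopMatrix();
    glMatrixMode( GL_MODELVIEW );
}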

James W. Walker
07-19-2006, 11:51 AM
Modifying the projection matrix sounds like a good idea. I'll try it. Thanks!

tamlin
07-20-2006, 02:32 PM
Madoc: It can't be implementation dependent (for a correct implementation), as I see it. The spec states in no uncertain terms how it is to be done.

That said, I have myself experienced issues that make me think ATI have (at least in some drivers) simply swapped the meaning of the arguments, and in effect rendered the whole defined meaning useless. I admit it could be something I'm doing wrong, but after long, hard sessions of interpreting the standard and looking at what ATI produced, I'm still almost 100% sure they are at least wrong - the only question remaining to me is "by how much".
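For what it's worth, the sanity check I based that on is roughly the following (a sketch; it assumes an ordinary perspective view looking down -Z with the quads inside the frustum, GL_LESS depth testing, and a cleared depth buffer). For a quad parallel to the near plane the depth slope DZ is essentially zero, so only the units argument should be able to separate coplanar quads; if the factor-only case moves a quad instead, the arguments are being applied the wrong way around.

/* Sketch: two coplanar, screen-facing quads on top of a reference quad;
 * only the "units" term should be able to separate them, since DZ ~ 0. */
#include <GL/gl.h>

static void DrawScreenFacingQuad( GLfloat z )
{
    glBegin( GL_QUADS );
    glVertex3f( -1.0f, -1.0f, z );
    glVertex3f(  1.0f, -1.0f, z );
    glVertex3f(  1.0f,  1.0f, z );
    glVertex3f( -1.0f,  1.0f, z );
    glEnd();
}

static void PolygonOffsetSanityCheck( void )
{
    glColor3f( 1.0f, 0.0f, 0.0f );      /* red: reference quad, no offset */
    DrawScreenFacingQuad( -5.0f );

    glEnable( GL_POLYGON_OFFSET_FILL );

    /* units only: should pull the green quad in front of the red one */
    glPolygonOffset( 0.0f, -4.0f );
    glColor3f( 0.0f, 1.0f, 0.0f );
    DrawScreenFacingQuad( -5.0f );

    /* factor only: should have no visible effect on a screen-facing quad,
     * so the blue quad should stay hidden behind the green one */
    glPolygonOffset( -4.0f, 0.0f );
    glColor3f( 0.0f, 0.0f, 1.0f );
    DrawScreenFacingQuad( -5.0f );

    glDisable( GL_POLYGON_OFFSET_FILL );
}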

Madoc
07-21-2006, 03:37 AM
Oh, I'm not saying the spec allows it; it's just that apparently, in practice, it is. I was more or less restating what I'd read, but I guess a better way to put it is that there are a lot of persistently broken drivers. I think the main complaint was about the units not being consistent across implementations, though I've seen other behaviours that can't fit the spec either. Whatever caused us to scrap it was completely different, though - I think enabling it broke something else.

Don't Disturb
07-21-2006, 08:59 AM
From the version 2.0 spec:
The minimum resolvable difference r is an implementation constant.

Madoc
07-22-2006, 02:57 AM
Ah, that explains it then. I've never read that version of the spec; I'd read some extension spec where there's no mention of implementation-dependent values.

Cheers