Texture Tearing ... Two planes

Just wondering if there’s an ‘easy’ solution to this one … I’ve got two parallel planes, and when one is placed in front of the other I get ‘texture tearing’ due to the close proximity of the two planes and OpenGL’s depth resolution limitation. Apart from the obvious … like moving the planes apart … is there another approach I could take to ensure that the upper plane is rendered correctly?

Thanks

Andrew

glPolygonOffset

What are you setting your near and far planes as?

Near and far planes are 5 and 1000 … kind of stuck with those … I’m trying to produce a book with turning pages, and as the page comes into contact (or close contact) with the one beneath it I obviously get the problem … mind you, after a good night’s sleep I solved it … turning depth testing off for the page as it approaches contact stops the problem!
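
Roughly, the workaround looks like this (a sketch only, assuming a current GL context; drawTurningPage() and pageNearlyFlat() are made-up names for ‘draw the page’ and ‘the page is close to the one beneath it’):

if (pageNearlyFlat()) {
    /* draw the page last with depth testing off so it always lands on top */
    glDisable(GL_DEPTH_TEST);
    drawTurningPage();
    glEnable(GL_DEPTH_TEST);
} else {
    drawTurningPage();    /* normal depth-tested path */
}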

Thanks

Andrew

Obviously you’ve found a solution but I am just curious as to what size of depth buffer you have requested/retrieved.

Dorbie … Thanks, not used glPolygonOffset before … it works well too … having a slight problem understanding the precise meaning of its parameters but there’s lots of stuff on Google.

I assume that I do need to preserve a slight difference between the two planes? … which I don’t if I disable depth testing … which is better? …

Andrew

If you use offset you don’t need a slight difference, the surfaces can be coplanar to the limit of rounding.

The meaning changes between the EXT call and the ARB/core (non-EXT) call, so be careful what you read; it is slightly confusing, and the conformance test is lax enough that implementors even confuse the meaning.

One parameter is an absolute offset in z, while the other is an offset multiplier for the derivative of z between pixels. The idea being that 1 should guarantee an offset that avoids interference (with the ARB core spec version, non-EXT).

An offset of (1.0, 1.0) should absolutely guarantee that you don’t get z fighting, while avoiding excessive ‘punchthrough’. ‘Punchthrough’ is caused by excessive offset, where the coplanar surface being offset penetrates a nearer object when it shouldn’t. In practice I’d set this to something slightly higher. Unfortunately implementation quality varies, and the conformance test does something like an offset of 4 with no punchthrough test and no slope on the polygon, which is orthogonally aligned to the viewer. It’s the saddest excuse for a test ever written, and alas at least one vendor wants to keep it that way because they pass it easily.
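
In code the usual setup looks something like this (a sketch, assuming a current GL context; drawLowerPlane() and drawUpperPlane() are hypothetical helpers for the two coplanar quads):

glEnable(GL_DEPTH_TEST);

/* push the lower plane away by (1.0, 1.0) so the upper one wins the
   depth test even though the geometry is coplanar */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
drawLowerPlane();
glDisable(GL_POLYGON_OFFSET_FILL);

drawUpperPlane();    /* drawn at its true depth, now reliably in front */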

P.S.
Offset is/was one of the most confused OpenGL calls across implementations because of the change in the parameter meanings between EXT and core and the conformance test.

I dunno what the quality is like on ‘decent’ modern graphics cards. What should be a sweet little function has been confused a bit, but it may have improved by now, heck it may even work on your card.

From the manual:

glPolygonOffset(GLfloat factor, GLfloat units)

<snip>

The value of the offset is factor * DZ + r * units, where DZ is a measurement of the change in depth relative to the screen area of the polygon, and r is the smallest value which is guaranteed to produce a resolvable offset for a given implementation.

From the spec:
http://www.opengl.org/developers/documen…000000000000000

factor scales the maximum depth slope of the polygon, and units scales an implementation dependent constant that relates to the usable resolution of the depth buffer. The resulting values are summed to produce the polygon offset value. Both factor and units may be either positive or negative.

Equations follow for the math to compute DZ for any point on a polygon. Based on those, I think DZ should be the absolute value of the derivative of z between pixels at each point on the polygon.
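
To make the formula concrete, here’s a throwaway C sketch of offset = factor * DZ + r * units; the DZ and r values are invented for illustration (r would really be an implementation-dependent constant, something like one depth-buffer step):

#include <stdio.h>

/* offset = factor * DZ + r * units, per the manual text quoted above */
static float polygon_offset(float factor, float units, float dz, float r)
{
    return factor * dz + r * units;
}

int main(void)
{
    float r  = 1.0f / 65535.0f;   /* pretend 16-bit depth buffer step */
    float dz = 0.0001f;           /* pretend per-pixel depth slope */

    printf("offset(1.0, 1.0) = %g\n", polygon_offset(1.0f, 1.0f, dz, r));
    printf("offset(0.5, 1.0) = %g\n", polygon_offset(0.5f, 1.0f, dz, r));
    return 0;
}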

It could be argued that something like (0.5, 1.0) should work for offset; it really depends on subpixel precision and rounding. Maybe you need something like (1.0, 1.0) or even more for factor, but of course, with DZ being what it is, increasing it is risky w.r.t. punchthrough on polygons that slope w.r.t. the viewer.

I’m not so sure about that last part when I think about the non-linear nature of z; it shouldn’t be that risky (for an implementor), and getting the constant right (as per the spec) is tricky.


dorbie, your posts were correct and good 'n stuff, but I think you got one thing wrong

Originally posted by dorbie:
the non linear nature of z

Window space z is perfectly linear (as in ‘linear math’).

Perspective divide doesn’t change that. Non-linearishness applies only to interpolated attributes (colors, texcoords, fog coord).

You might argue that typical z buffer contents show areas of higher and lower ‘depth densities’, but that alone doesn’t make it non-linear over faces.

Otherwise very well explained indeed.

I honestly meant the non-linear distribution of z as distance from the eye. It’s this that makes it linear in screen space. The derivative of z multiplied by “factor” is therefore constant across the polygon for all pixels, and is therefore ‘easy’ for an implementor with a classic z buffer. This is what I was trying to say; I could be wrong, but I’m pretty sure this is how it pans out.

I find this counter intuitive because I would have expected the ‘factor’ portion to cause punchthrough and interpolation problems (hence I called it risky), but when you consider z interpolation it probably doesn’t (hence I appended the non-linear z comment). You’re right in what you say and I should have been clearer.

Considering ‘units’ again (the constant offset) I don’t think it should present great difficulty either. The spec implies (IMHO) differences at front and back, but clearly there shouldn’t be in z.


Not certain about this, but I’ve heard that SGI machines have non-linear z and most commodity graphics cards have linear z. Someone please correct me if this is wrong. It’s good to know this sort of thing when you’re looking at z-fighting problems…

You’re probably thinking of w buffer support.

SGI actually exposed the classic non-linear z (linear in x,y screen space), but the internal representation on some systems is ‘compressed’, where compressed means a cast to some funky pseudo-floating-point representation prior to the depth test. Basically a redistribution of precision prior to the compare & store. You might think of this as a LUT from higher precision interpolators to lower precision storage. There was more than one scheme for this on different platforms.

I don’t think OpenGL specifies that you must implement a z buffer, just a ‘depth’ buffer, although you have to conform to a specific math representation when you read it back or write it directly with a conventional visual (basically a classic Z AFAIK). For example, I know of one platform where the projection matrix is used on depth read & write operations to remain ‘compliant’; of course this isn’t really compliant at all, but nobody would notice unless they mess with the projection between draw and read, which should be pretty rare.

However, by definition z means one thing, and without extensions etc. AFAIK it should look like a z buffer; what the hardware actually does internally may be inscrutable, and they’ll never tell it to you straight (various I.P. concerns). So ultimately it may be pointless to figure out what the hardware actually does or claims to do for the purposes of offset precision, especially in future :-/.

dorbie,
I see it was just a misunderstanding. Thanks for clearing that up.

Originally posted by mogumbo:
Not certain about this, but I’ve heard that SGI machines have non-linear z and most commodity graphics cards have linear z. Someone please correct me if this is wrong. It’s good to know this sort of thing when you’re looking at z-fighting problems…
This is also true, on the implementation side of things.
I.e., though your z values interpolate in a linear fashion, they may be stored to a non-linear buffer. Calculation and storage are two separate steps, and it’s easy to confuse them.
This essentially boils down to fixed point math vs floating point math. Fixed point storage of a given precision will guarantee that you’re at most off by an absolute delta. Floating point guarantees that you’re at most off by a relative percentage.

Eg, in fixed point 995 may become 990 or 1000 and at the same time 5 may become zero or 10 (max errors of +/- 5).
In floating point 995 may become 990 or 1000 as well, but then you should expect that 5 will end up anywhere between 4.95 and 5.05. (fuzzy “1% off” math applied; feel free to slap me on the details).
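
For what it’s worth, here’s a little C toy that shows the two error behaviours (both ‘formats’ are invented for illustration, a 16-bit fixed-point grid versus a float with an 8-bit mantissa, not anything a real card necessarily uses):

#include <math.h>
#include <stdio.h>

/* absolute error: the quantization step is the same everywhere */
static double to_fixed16(double z)
{
    return floor(z * 65535.0 + 0.5) / 65535.0;
}

/* relative error: keep 8 mantissa bits, so the step grows with magnitude */
static double to_smallfloat(double z)
{
    int e;
    double m;
    if (z <= 0.0) return 0.0;
    m = frexp(z, &e);                      /* z = m * 2^e, 0.5 <= m < 1 */
    m = floor(m * 256.0 + 0.5) / 256.0;
    return ldexp(m, e);
}

int main(void)
{
    double zs[3] = { 0.0001, 0.5, 0.9999 };
    int i;
    for (i = 0; i < 3; ++i)
        printf("z=%.6f  fixed err=%.2e  float err=%.2e\n",
               zs[i],
               fabs(zs[i] - to_fixed16(zs[i])),
               fabs(zs[i] - to_smallfloat(zs[i])));
    return 0;
}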

If you calculate in fixed point and store in floats … you probably get the worst of both worlds

In all seriousness, I don’t think that the justification for FP depth buffers as presented in that one specific extension spec (I remember a discussion about it just a few weeks back) is realistic. Depending on circumstance, one or the other solution is better; in fact I’m pretty sure I could whip up a little test that would show float depth buffers to be inferior (keyword: intersecting triangles).

If we look at the z values coming down the pipe in a perspective view, we’ll see that usually the front of the frustum has better effective depth resolution (because w is smaller there, hence z/w will yield greater ranges of values that are easier to distinguish). If your objects are evenly distributed in the non-perspective corrected frustum, you’ll end up with a denser z population towards the back of the frustum.

So one could indeed say that z acts as if it were of non-linear resolution w.r.t. scene distribution.
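
As a concrete illustration, here’s a tiny C program that maps eye-space distance to window-space depth for a standard perspective projection, using the near/far values from earlier in the thread (5 and 1000) and the default glDepthRange of [0, 1]:

#include <stdio.h>

/* window-space depth for an eye-space distance d, with near n and far f */
static double window_z(double d, double n, double f)
{
    double ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d);
    return 0.5 * ndc + 0.5;
}

int main(void)
{
    double dists[5] = { 5.0, 10.0, 50.0, 500.0, 1000.0 };
    int i;
    for (i = 0; i < 5; ++i)
        printf("eye distance %7.1f -> depth %.6f\n",
               dists[i], window_z(dists[i], 5.0, 1000.0));
    return 0;
}

With those planes an object only 10 units from the eye already lands at a depth of about 0.5, so half of the depth range is spent on the first few units past the near plane.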

These things all work together while partly contradicting each other. The bird’s-eye view is probably easier to understand than the inner workings, as is the general recommendation to “Push your near plane out as far as possible”. In these cases I often feel tempted to announce “It’s magic!”

Z is non-linear in depth by definition IMHO; sorry if I didn’t make that clear. Z as discussed in “z buffer” means something specific, and it means something non-linear (linear in screen x,y).

Float classic z would be silly IMHO unless it was stored as 1-z and cast after the subtraction. Either that, or unless it stored z at full precision (at least at the far end of the scale; this has entirely different but appropriate motivation). The extension discussed earlier in that other thread misunderstood or misstated this, I think (hope). The idea of such an fp implementation (1-z) would be to counteract the non-linear precision with the float representation running the other way. Not that I’m advocating it, I’m just trying to understand what that other spec was trying to propose :-). This is all a sidebar of course and only loosely related to the post.
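
Just to show what I mean by the float representation ‘running the other way’, here’s a small sketch comparing the spacing of representable floats at z versus at 1-z (this is only about where floats are dense, not a real depth-buffer format):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float zs[3] = { 0.5f, 0.99f, 0.9999f };
    int i;
    for (i = 0; i < 3; ++i) {
        float z = zs[i];
        float step_at_z         = nextafterf(z, 2.0f) - z;
        float step_at_one_minus = nextafterf(1.0f - z, 2.0f) - (1.0f - z);
        printf("z=%.4f  float step at z: %g   at 1-z: %g\n",
               z, step_at_z, step_at_one_minus);
    }
    return 0;
}

A conventional z buffer crams most of its values up near 1.0, which is exactly where the float format is coarsest; storing 1-z puts those values down near 0.0 where the float spacing is finest, which is the counteracting effect I was getting at.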

There’s no excuse for concocting a depth buffering scheme that gives you significantly more precision farther from the viewer than nearer. Someone trying to do this is an idiot and shouldn’t be writing depth buffering extension specs, sorry. I just hope the preamble in that spec just read badly and the intent was as I have stated, or that there’s something else that’s not made clear (I doubt it with that little boat in the lake preamble).

FWIW I agree you can easily end up with the worst of both worlds, it all sounds good until you try to wrap your head around what actually happens to the bits.
