polygon rasterization and depth test

According to the specification, the depth of a fragment is the weighted average (by barycentric coordinates) of the depths of the three vertices that define the triangle.
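As a minimal sketch of that rule (the function name and parameters are my own, not from the spec): given barycentric weights that sum to one, the fragment depth is just the weighted sum of the vertex depths.

```cpp
#include <cassert>
#include <cmath>

// Barycentric depth interpolation: the fragment depth is the weighted
// average of the three vertex depths. Assumes l0 + l1 + l2 == 1.
float interpolateDepth(float l0, float l1, float l2,
                       float z0, float z1, float z2) {
    return l0 * z0 + l1 * z1 + l2 * z2;
}
```

For example, a point halfway along the edge between a vertex at depth 0 and one at depth 1 gets depth 0.5.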

The problem I see with this is the case where you are rendering two triangles that share an edge, and both lie in the same half-plane of that edge (with no back-face culling). Any fragments that lie exactly on the edge (this is definitely possible according to the spec) will then be generated by both triangles, and occasionally, when the triangles are rendered in a certain order, some pixels from the triangle farther away may appear on the edge of the triangle closer to you.

My question is, how would I take care of this problem in my implementation of OpenGL? When I try to recreate this case on hardware, it looks correct and you never get stray pixels on the edge. As far as I understand the spec, when two triangles share an edge and lie in separate half-planes of that edge, no fragment gets produced more than once, but I can't figure out what to do in the case I described.

If you think I’m not being clear, please tell me, I’ll try to elaborate, thanks.

If you are using OpenGL to render triangles there will not be any overlapping pixels on shared edges, as there is a rule which makes the rasterization algorithm fill each pixel only once.

If you are writing your own software renderer you would have to obey a “top-left fill convention” so that no pixel is drawn twice. This actually has nothing to do with depth at all.
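To make the convention concrete, here is a minimal sketch of edge-function coverage with a top-left rule. All names (`edgeFn`, `isTopLeft`, `covered`) are illustrative, and it assumes screen space with y pointing down and vertices given in clockwise order as seen on screen, so the edge function is positive for interior points.

```cpp
// Hypothetical sketch of the top-left fill convention for a software
// rasterizer. Convention assumed: y grows downward, triangles wound
// clockwise on screen, edge function positive inside.
struct Vec2 { float x, y; };

// Signed area of the parallelogram spanned by (b - a) and (p - a).
float edgeFn(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Under this convention, a "top" edge is exactly horizontal with the
// interior below it (it runs in +x), and a "left" edge runs upward on
// screen (dy < 0).
bool isTopLeft(Vec2 a, Vec2 b) {
    float dx = b.x - a.x, dy = b.y - a.y;
    return (dy == 0.0f && dx > 0.0f) || dy < 0.0f;
}

// A sample point is covered if it is strictly inside every edge, or lies
// exactly on a top or left edge. Points on other edges are excluded, so
// two triangles sharing an edge (with opposite-facing half-planes) never
// both generate the same fragment.
bool covered(Vec2 v0, Vec2 v1, Vec2 v2, Vec2 p) {
    Vec2 tri[3] = { v0, v1, v2 };
    for (int i = 0; i < 3; ++i) {
        Vec2 a = tri[i], b = tri[(i + 1) % 3];
        float w = edgeFn(a, b, p);
        if (w < 0.0f) return false;
        if (w == 0.0f && !isTopLeft(a, b)) return false;
    }
    return true;
}
```

For a quad split along the diagonal (0,0)–(4,4), the diagonal is a left edge of one triangle and a non-top-left edge of the other, so a sample point lying exactly on it is covered by exactly one of the two.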

[ www.trenki.net | vector_math (3d math library) | software renderer ]

The thing is, though, the top-left convention alone doesn’t take care of the case when your edge is on the silhouette of a mesh. The edge would be the top-left edge in either both triangles (the front-facing triangle and the back-facing one) or in neither.

So if your rasterizer only produces fragments that fall on an edge when it is a top-left edge, then when you render a mesh, some fragments that fall on the top left of the silhouette will be produced twice, and depending on your depth test and rendering order, some fragments from the back-facing triangle will appear on the silhouette.

Somehow my video card drivers take care of this so those artifacts don’t appear (even without back-face culling), and my question is: how would I do the same if I were implementing a rasterizer?

I think I understand now what you mean and I believe you don’t have to specifically handle this case at all. The depth test will take care of this. Using GL_LESS for the depth test would make only the fragments of the triangle you draw first visible. Using GL_LEQUAL could theoretically produce artifacts, but the gfx card uses enough bits of subpixel precision to discriminate the fragments.
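The difference between the two modes can be sketched in a couple of lines; this is a hypothetical helper for a software renderer (not an OpenGL API call), mirroring what GL_LESS and GL_LEQUAL do when a fragment arrives at exactly the depth already stored in the buffer.

```cpp
// Sketch of the per-fragment depth comparison in a software renderer.
// With Less (GL_LESS), a fragment at exactly the stored depth is
// rejected, so the first-drawn triangle wins on a shared edge. With
// LEqual (GL_LEQUAL), it passes, so the last-drawn triangle wins.
enum class DepthFunc { Less, LEqual };

bool depthPass(DepthFunc f, float incoming, float stored) {
    return (f == DepthFunc::Less) ? incoming < stored
                                  : incoming <= stored;
}
```

So at equal depths the outcome depends purely on draw order, which is why neither mode by itself resolves the double-coverage case.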


That’s simple: not at all. It’s a type of Z fighting, and it’s the application’s responsibility to fix it.

Using GL_LEQUAL you’ll get the fragments of the triangle drawn last. Either mode could be right or wrong, depending on the triangle order. Increased subpixel precision and depth precision help minimize the problem (so does multisampling). But if the shared edge lies exactly on the sample points, no amount of subpixel precision will fix it.