Clipping diffs on ATI & NVIDIA

My original problem is that I want to draw a 2D section of a 3D object (e.g. drawing a 2D section of a sphere would result in a circle).

The implementation that we currently have uses two coplanar clipping planes, separated by a very small distance. Since most polygons end up perpendicular to the view plane, they are often invisible if polygon mode is set to GL_FILL. On NVIDIA cards this is easily solved by setting polygon mode to GL_LINE, since the GL driver generates new vertices and hence nice visible lines that lie in the clipping planes.
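
In rough code, our setup looks something like this (a simplified sketch; draw_section_slab, draw_object, the plane-passing convention and the epsilon value are placeholders rather than our actual code):

#include <GL/gl.h>

extern void draw_object(void);  /* the 3D geometry to section */

/* eq = {a, b, c, d} for the section plane a*x + b*y + c*z + d = 0. */
void draw_section_slab(const double eq[4])
{
    const double eps = 1e-4;  /* half-thickness of the slab (placeholder) */

    /* Keep points with a*x + b*y + c*z + d >= -eps ... */
    double lower[4] = { eq[0], eq[1], eq[2], eq[3] + eps };
    /* ... and points with a*x + b*y + c*z + d <= +eps (flipped normal). */
    double upper[4] = { -eq[0], -eq[1], -eq[2], -eq[3] + eps };

    glClipPlane(GL_CLIP_PLANE0, lower);
    glClipPlane(GL_CLIP_PLANE1, upper);
    glEnable(GL_CLIP_PLANE0);
    glEnable(GL_CLIP_PLANE1);

    /* GL_LINE relies on the driver generating new edges along the
     * clip planes -- the behaviour that differs between vendors. */
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    draw_object();

    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glDisable(GL_CLIP_PLANE0);
    glDisable(GL_CLIP_PLANE1);
}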

On ATI cards, however, using polygon mode GL_LINE results in sparse line segments and points. The only conclusion I can draw from this is that ATI performs clipping at the per-pixel level, meaning that no new vertices are generated. Is this correct behaviour with respect to the GL spec? Citing the OpenGL 1.5 spec, page 50:

Polygon clipping may cause polygon edges to be clipped, but because polygon connectivity must be maintained, these clipped edges are connected by new edges that lie along the clip volume’s boundary.

I take it that “polygon” here means the primitive kind (point, line or polygon), and not the polygon mode (i.e. if we use polygon mode = GL_LINE, it’s still a polygon, not a set of lines).

Is the ATI implementation wrong?

Are there any other solutions for my original problem?

Originally posted by marcus256:
My original problem is that I want to draw a 2D section of a 3D object […] On ATI cards, however, using polygon mode GL_LINE results in sparse line segments and points. […] Is the ATI implementation wrong? Are there any other solutions for my original problem?

Regarding your original problem, why don’t you draw your cutting plane as a quad into the z-buffer and then render the geometry in filled mode with the depth test set to GL_EQUAL? (It’s not clear to me whether you are interested only in the edges or also in the surface.)
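
Something along these lines (just a sketch; draw_cutting_quad and draw_object are hypothetical helpers, not a real API):

#include <GL/gl.h>

extern void draw_cutting_quad(void);  /* quad spanning the section plane */
extern void draw_object(void);        /* the 3D geometry to section */

void draw_section_via_depth_equal(void)
{
    /* Pass 1: write the cutting plane into the depth buffer only. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    draw_cutting_quad();

    /* Pass 2: only fragments lying exactly on the plane pass. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);   /* the depth buffer already holds the plane */
    glDepthFunc(GL_EQUAL);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    draw_object();

    /* Restore defaults. */
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}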

Regarding the clipping implementation, for regular frustum clipping there’s something called guard-band clipping: a rectangle larger than the clip rectangle, chosen so that any triangle falling entirely inside it is not geometrically clipped. That method avoids generating extra polygons at clipping time (reducing vertex setup and transformation cost), at the cost of rasterising regions of the polygon that are later discarded for falling outside the clip rectangle.
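
In pseudo-C, the decision looks something like this (purely illustrative, not real driver code):

/* The guard-band decision for a screen-space triangle. */
typedef struct { float xmin, ymin, xmax, ymax; } Rect;

static int point_inside(Rect r, float x, float y)
{
    return x >= r.xmin && x <= r.xmax && y >= r.ymin && y <= r.ymax;
}

/* Returns 1 if all three vertices lie inside the guard band, so the
 * triangle can be rasterised without geometric clipping; pixels outside
 * the real viewport are discarded by the scissor/viewport test instead. */
static int trivially_accept(Rect guard_band, const float tri[3][2])
{
    int i;
    for (i = 0; i < 3; i++)
        if (!point_inside(guard_band, tri[i][0], tri[i][1]))
            return 0;  /* crosses the guard band: fall back to clipping */
    return 1;
}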

NVIDIA used to do per-pixel clipping for user clip planes by using texgen, a texture, and alpha testing. That sounds like the behaviour you are getting; strange that it’s on an ATI card, though.
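
From what I remember, the trick looks roughly like this (my reconstruction, not vendor code; in practice the plane equation would be scaled and biased so the half-space boundary lands between the two texels):

#include <GL/gl.h>

/* plane = {a, b, c, d} for a*x + b*y + c*z + d = 0 in eye space
 * (like glClipPlane, GL_EYE_PLANE is transformed by the inverse
 * modelview matrix in effect at call time). */
void setup_per_pixel_clip(const double plane[4])
{
    /* Two-texel alpha texture: 0 on the clipped side, 255 on the kept side. */
    static const GLubyte alpha_tex[2] = { 0, 255 };

    glBindTexture(GL_TEXTURE_1D, 1);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_ALPHA, 2, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, alpha_tex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glEnable(GL_TEXTURE_1D);

    /* S = dot(plane, eye_vertex): negative on the clipped side. */
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGendv(GL_S, GL_EYE_PLANE, plane);
    glEnable(GL_TEXTURE_GEN_S);

    /* Kill fragments that sampled the alpha == 0 texel. */
    glAlphaFunc(GL_GREATER, 0.5f);
    glEnable(GL_ALPHA_TEST);
}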

Both clipping tricks are OpenGL conformant as long as

  • enabling clip planes is orthogonal to the rest of the OpenGL state, i.e. you can still access all texture stages and so on … even if it means falling back to software rendering;
  • they abide by the invariance rules for multipass rendering.

Regarding your original problem, why don’t you draw your cutting plane as a quad into the z-buffer and then render the geometry in filled mode with the depth test set to GL_EQUAL? (It’s not clear to me whether you are interested only in the edges or also in the surface.)

That sounds like a good solution. It will probably work well with the application for which I am experiencing trouble right now. Thanks! Will see if I can fit it into the current GL wrapper.
