View Full Version : object-level occlusion culling: nvidia vs. ATI

02-01-2003, 11:45 AM
Ok, so NVIDIA has OpenGL occlusion queries to determine whether a piece of geometry would alter the Z-buffer, so rendering a bounding box with a query active can help with hardware occlusion culling.

Then, how does ATI compare to that? I mean, there's Hyper-Z, but from what I've seen Hyper-Z works at the pixel level, right? Is there something like NV_occlusion_query for ATI cards? They keep saying ATI cards do occlusion queries as well...


02-01-2003, 12:46 PM
Just use NV_occlusion_query. It's supported on the ATI cards (at least, last time I looked at my extension string on the R300, I saw it there).

02-01-2003, 02:21 PM
Both cards have some form of coarse z-buffer; the point is that it only saves pixel fill, not geometry.

This is where the occlusion query comes in. An occlusion query lets you perform a test to decide whether you can eliminate geometry transform work. It actually ADDS pixel fill overhead initially, in the hope that it will save some geometry (and perhaps win some or all of that fill back), and of course there's the latency before a result comes back, which the app must manage intelligently.
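The usual pattern looks roughly like this (a C-style sketch, not runnable on its own since it assumes a live GL context; the `query` object and the `DrawBoundingBox`/`DrawObject` helpers are hypothetical names, not from this thread):

```
/* Sketch of NV_occlusion_query usage. Assumes 'query' was created
 * with glGenOcclusionQueriesNV, and that DrawBoundingBox/DrawObject
 * are app-side helpers (hypothetical). */

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* no color writes */
glDepthMask(GL_FALSE);                                 /* no depth writes */

glBeginOcclusionQueryNV(query);
DrawBoundingBox(object);    /* cheap proxy geometry: this is the added fill */
glEndOcclusionQueryNV();

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

/* ...ideally do other work here to hide the query latency... */

GLuint pixelCount;
glGetOcclusionQueryuivNV(query, GL_PIXEL_COUNT_NV, &pixelCount);
if (pixelCount > 0)
    DrawObject(object);     /* visible: pay the geometry transform cost */
/* else: bounding box drew zero pixels, skip the object entirely */
```

Masking off color and depth writes means the bounding box only costs fill for the query itself and never pollutes the framebuffer.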

So they are VERY different approaches that solve different problems related to occlusion. One mainly saves fragment operations, the other geometry operations.