Why is z-buffering slow?

Is z-buffering slow because it's time-consuming to calculate the z-value of each pixel, or because you're drawing lots of polygons that you won't see anyway?

Example:
If I draw 1000 polys randomly without sorting, and then I draw 1000 polys with z-buffering, will there be a small or large time difference?

Thanks,
wolfman8k

Could you be a little more specific? Why do you think z-buffering is slow?

In hardware, z-buffering is free (although a z-buffer doubles the memory bandwidth needed: 32-bit RGBA pixels plus 32 bits of z-buffer, for instance).
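
To put a rough number on that: at 800x600 with 32-bit color, the color buffer alone is about 1.9 MB of writes per frame, and a 32-bit z-buffer adds roughly another 1.9 MB of traffic on top of the depth reads for the compare (back-of-the-envelope figures, assuming no overdraw).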

If you're talking about software z-buffering then it is costly, but at some point sorting the polys becomes slower than the z-buffer itself.

The z-buffer is, as Blaze said, almost free on today's hardware. The only drop in speed is the extra bandwidth used, and whether or not your polygons are sorted makes very little difference. You still have to compare each pixel of each polygon, but when sorting front to back you don't have to rewrite the depth value as many times.

And there is absolutely NO penalty for calculating the z-value. The z-value is calculated whether you want it or not, because the operation is embedded in the projection matrix.
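
Just to make that concrete, here's a rough sketch in C (the matrix layout is the usual column-major OpenGL convention; names are just for illustration):

    /* Sketch: depth falls out of the same matrix multiply that projects
       the vertex -- it is not an extra per-vertex cost.
       'proj' is assumed to be a standard perspective projection matrix. */
    typedef struct { float x, y, z, w; } Vec4;

    Vec4 transform(const float proj[16], Vec4 v)
    {
        Vec4 r;
        r.x = proj[0]*v.x + proj[4]*v.y + proj[8]*v.z  + proj[12]*v.w;
        r.y = proj[1]*v.x + proj[5]*v.y + proj[9]*v.z  + proj[13]*v.w;
        r.z = proj[2]*v.x + proj[6]*v.y + proj[10]*v.z + proj[14]*v.w;
        r.w = proj[3]*v.x + proj[7]*v.y + proj[11]*v.z + proj[15]*v.w;
        return r;
    }

    /* After the perspective divide, r.z / r.w is the value the z-buffer
       compares against -- no separate calculation is needed. */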

The z-buffer might be slow because you are using either a software renderer or a depth-buffer mode your hardware doesn't support.

Z-buffering is not expensive, but it's not almost free either. On my G400 (which is known for having a very efficient memory subsystem) I get a framerate drop from about 160 to 145 when I enable z-buffering in my 3D engine.

Maybe that's true Humus, but imagine what your frame rate would drop to if that z-buffer was not hardware accelerated at all. So relatively speaking, it's essentially free if it is hardware accelerated.
But yes, you should still turn it off when it is not needed so you can save that bandwidth for other things.
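
Turning it off is just a state toggle. For example, in OpenGL (assuming that's the API in question), something like this:

    #include <GL/gl.h>

    /* Sketch only: enable depth testing for the 3D pass, then switch it
       off for passes that don't need it (e.g. a 2D overlay). */
    void begin_3d_pass(void)
    {
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);
        glDepthMask(GL_TRUE);
    }

    void begin_overlay_pass(void)
    {
        glDisable(GL_DEPTH_TEST);
        glDepthMask(GL_FALSE);   /* stop depth writes too -> saves bandwidth */
    }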

Sorting the polygons from front to back can give a small boost: polys farther away along the z-axis get rejected sooner because the depth test fails, which minimizes overdraw. On some cards it does make sense to sort front to back (mainly those with poor fill rate), but these days sorting the polys is mostly a waste of time.
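
If you do want to try it, a rough front-to-back order is just a sort by view-space depth (a sketch in C; the Poly struct and view_z field are made up for illustration):

    #include <stdlib.h>

    /* Hypothetical polygon record: view_z is the polygon's distance from
       the camera, e.g. the centroid's view-space z. */
    typedef struct { float view_z; /* ...vertex data... */ } Poly;

    static int cmp_front_to_back(const void *a, const void *b)
    {
        float za = ((const Poly *)a)->view_z;
        float zb = ((const Poly *)b)->view_z;
        return (za > zb) - (za < zb);   /* nearest first */
    }

    void sort_polys(Poly *polys, size_t count)
    {
        qsort(polys, count, sizeof(Poly), cmp_front_to_back);
    }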

In software, z-buffers are pretty slow because you have to do several operations per pixel…
There are several software techniques that are more efficient.
On hardware, however, as far as I know, nothing beats the z-buffer (as long as overdraw is kept to a minimum), simply because hardware can do it quickly, and most of all because it's probably the best algorithm for parallel processing (where multiple pixels are calculated at the same time).
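
To show what "several operations per pixel" means, a software z-buffer ends up doing roughly this for every pixel of every polygon (a simplified sketch, flat color, interpolation setup not shown; buffer names are illustrative):

    /* Per-pixel work in a software z-buffer: one compare, and on success
       one depth write plus one color write. 'zbuf' and 'cbuf' are the
       depth and color buffers for a width x height frame. */
    void plot(float *zbuf, unsigned *cbuf, int width,
              int x, int y, float z, unsigned color)
    {
        int i = y * width + x;
        if (z < zbuf[i]) {      /* depth test */
            zbuf[i] = z;        /* depth write */
            cbuf[i] = color;    /* color write */
        }
        /* drawing front to back makes the test fail more often,
           so the two writes get skipped for hidden pixels */
    }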

Of course, the fastest thing of all is simply not drawing what you can't see, rather than relying on any visibility algorithm like the z-buffer…

But then again, these days it's faster to draw more than you can see than to try to achieve zero overdraw, simply because all the CPU calculations needed to reach zero overdraw cost more than it costs the 3D card to just draw all those extra polygons (and polygon parts).

It’s all basically about balance…

Well, I wouldn't say that sorting is a waste by any means, even on hardware with great fill rates. A lot of the newest cards are bandwidth limited. By drawing in front-to-back order, you can save a LOT of bandwidth through quick z rejection. Am I saying you should draw in pure front-to-back order (i.e. sort every polygon)? No. But drawing ROUGHLY front to back helps tremendously.

Using a portal system or an octree can help you break your geometry into chunks which (by virtue of the design) can easily be sorted front to back. The polys within each chunk won't be sorted, but you get reasonably close to the full benefit for very little work.
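
The chunk-level sort itself is cheap. A sketch in C (the Chunk struct and its fields are made up; the idea is just to order whole groups of geometry by distance to the camera before drawing):

    #include <stdlib.h>

    /* Hypothetical chunk of pre-grouped geometry (an octree node, a room
       behind a portal, etc.) with a representative center point. */
    typedef struct { float cx, cy, cz; /* ...polys... */ } Chunk;

    /* camera position, kept global because qsort comparators take no context */
    static float cam_x, cam_y, cam_z;

    static int cmp_chunk(const void *a, const void *b)
    {
        const Chunk *ca = a, *cb = b;
        float dxa = ca->cx - cam_x, dya = ca->cy - cam_y, dza = ca->cz - cam_z;
        float dxb = cb->cx - cam_x, dyb = cb->cy - cam_y, dzb = cb->cz - cam_z;
        float da = dxa*dxa + dya*dya + dza*dza;   /* squared distance is enough */
        float db = dxb*dxb + dyb*dyb + dzb*dzb;
        return (da > db) - (da < db);             /* nearest chunk first */
    }

    void sort_chunks_front_to_back(Chunk *chunks, size_t count,
                                   float ex, float ey, float ez)
    {
        cam_x = ex; cam_y = ey; cam_z = ez;
        qsort(chunks, count, sizeof(Chunk), cmp_chunk);
        /* then draw chunks[0..count-1] in this order; polys inside each
           chunk stay unsorted, which is usually close enough */
    }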