z-buffer and GeForce 2

I just bought my new GeForce 2 (MX400, 64MB) and I found out that it does not support a 32-bit z-buffer; it uses only 24-bit. Also, if I set 16-bit color depth, then only a 16-bit z-buffer is available. Is this normal? My old TNT card could use a 32-bit z-buffer, but only if the color buffer was also 32-bit. I am using the ChoosePixelFormat function to select the depth of my buffers.
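Here is roughly the kind of thing I am doing (a trimmed sketch; window creation and SetPixelFormat are left out, and the flags and field values are just illustrative):

#include <windows.h>
#include <stdio.h>

int pick_format(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 16;   /* asking for 16-bit color...        */
    pfd.cDepthBits = 32;   /* ...and a 32-bit depth buffer      */

    int format = ChoosePixelFormat(hdc, &pfd);

    /* ChoosePixelFormat only promises the closest match, so check what came back. */
    DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);
    printf("got %d color bits, %d depth bits\n", pfd.cColorBits, pfd.cDepthBits);

    return format;
}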

The GeForce series only supports two pixel formats: 16-bit color with 16-bit depth, and 32-bit color with 24-bit depth.

I doubt the TNT was able to do a 32-bit Z-buffer. I had one earlier, and I don’t think it could. Only 24 bits there as well.

Even so-called ‘high-end’ workstations only have a 24-bit depth buffer.

… but both Matrox and ATi cards have supported a 32-bit Z-buffer for quite some time now. Whether 32-bit Z is very useful is another question, though.

I’m pretty sure neither Matrox nor ATI supports 32-bit Z.

They support only 24-bit Z + 8-bit stencil, or 24-bit Z with 8 bits being wasted on nothing.

I get a 32-bit Z-buffer on my GF3 with the following code.

// Get the maximum Z-buffer available
glutInitDisplayString("rgb depth=32 double");
if (!glutGet(GLUT_DISPLAY_MODE_POSSIBLE))
{
    glutInitDisplayString("rgb depth=24 double");
}
if (!glutGet(GLUT_DISPLAY_MODE_POSSIBLE))
{
    glutInitDisplayString("rgb depth=16 double");
}

However, replacing rgb with rgba gives me a 24-bit Z-buffer.
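Once the window is up, you can ask the context what it actually ended up with (a quick check, assuming the GL headers and stdio are already included):

GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);   /* size of the depth buffer actually allocated */
printf("depth buffer: %d bits\n", depthBits);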

Originally posted by Jurjen Katsman:
I’m pretty sure neither Matrox nor ATI supports 32-bit Z.

They support only 24-bit Z + 8-bit stencil, or 24-bit Z with 8 bits being wasted on nothing.

They do support 32-bit Z. There is an option to enable it in the display settings. It has been supported at least since the Rage 128.
I had a similar option on my old G400 too.


Thank you all! My main concern was whether I could get a 24-bit z-buffer with a 16-bit color depth, not whether I can use a 32-bit z-buffer.

Now, there’s another question:

If I have a 32-bit color buffer, then RGBA gets 8 bits each.
If I have a 16-bit color buffer, what happens to the alpha buffer? Is it an additional 8 bits on top of the 16 bits of RGB (5,6,5)? Or are all four color components stored in only 16 bits?

32-bit depth buffering is certainly useful; 24 bits of precision have to be spent wisely. It is a huge advantage for all sorts of reasons: you get 256 times the resolution in Z. That is handy if the rest of the pipeline transforms with sufficient accuracy and sub-pixel position, and extremely useful if your near clip is very near the eye and the far clip is a long way off in the distance.

There are other benefits, like allowing a larger polygon offset for decals and multipass effects while making it much easier to avoid ‘punchthrough’ artifacts.

Also, when the depth test generates an edge, you are much less likely to get a saw-tooth effect, and you would probably never see one without pathological near and far clip values.
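To put rough numbers on that: for a standard perspective projection with near plane n and far plane f, a b-bit integer depth buffer can resolve eye-space steps of roughly z^2 * (f - n) / (f * n * 2^b) at distance z. A small sketch (the near, far, and z values here are made up purely for illustration):

#include <stdio.h>
#include <math.h>

/* Smallest eye-space depth step a b-bit integer Z-buffer can resolve at
   distance z, for a standard [n, f] perspective projection. */
static double z_step(double z, double n, double f, int bits)
{
    return z * z * (f - n) / (f * n * ldexp(1.0, bits));
}

int main(void)
{
    double n = 0.1, f = 10000.0;   /* deliberately wide depth range */
    double z = 1000.0;             /* an object far from the eye    */
    printf("16-bit: %g units\n", z_step(z, n, f, 16));
    printf("24-bit: %g units\n", z_step(z, n, f, 24));
    printf("32-bit: %g units\n", z_step(z, n, f, 32));
    return 0;
}

The 24-bit and 32-bit results differ by exactly the factor of 256 mentioned above.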

If I have a 16-bit color buffer, what happens to the alpha buffer? Is it an additional 8 bits on top of the 16 bits of RGB (5,6,5)? Or are all four color components stored in only 16 bits?

In 16-bit color depth, you probably don’t have any alpha bits at all in the framebuffer.
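If you want to be sure, a current GL context will tell you the exact layout (a minimal check, assuming the GL headers and stdio are included):

GLint r, g, b, a;
glGetIntegerv(GL_RED_BITS,   &r);
glGetIntegerv(GL_GREEN_BITS, &g);
glGetIntegerv(GL_BLUE_BITS,  &b);
glGetIntegerv(GL_ALPHA_BITS, &a);
printf("R%d G%d B%d A%d\n", r, g, b, a);   /* typically 5 6 5 0 in 16-bit modes */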

32-bit Z is problematic because floats only have 23 bits of mantissa. Unless you are willing to pay for extra precision everywhere in the pipeline, you can get bizarre behavior; for example, you might find that as you tessellate geometry more densely, its Z precision becomes significantly worse.

Z buffers need their precision most in the high ranges: Z values in [0.5, 1] cover a rather large fraction of the scene. In that interval, floats can represent values that are 2^-24 apart. So, a 24-bit Z buffer matches the precision of IEEE floats quite nicely.
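That spacing is easy to demonstrate (a tiny C99 sketch using nextafterf):

#include <stdio.h>
#include <math.h>

int main(void)
{
    float x = 0.5f;
    float next = nextafterf(x, 1.0f);                /* adjacent float above 0.5 */
    printf("float spacing at 0.5: %g\n", next - x);  /* prints 2^-24, ~5.96e-08  */
    return 0;
}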

The naive implementation of 32-bit Z helps very little in many circumstances.

There’s a world of difference between supporting 32-bit Z as a pixel format and supporting it in a way that is actually useful.

And for most applications, if you can’t get by with 24 bits, you probably aren’t using them very effectively.

  • Matt