@NVIDIA: GF4 Z-Buffer problem in OGL?

Hi to all the NV guys here,

I've heard from quite a few people that they see some sort of Z-Buffer problem (Z-fighting) in, for example, Serious Sam 2 while using 32 Bit color depth.
Their guess is that the Z-Buffer is “broken” or that only a 16 Bit Z-Buffer is used (that was my guess, too).
Could it be that if an application coder requests a 32 Bit Z-Buffer on NV hardware, he gets a 16 Bit one instead of a better-fitting 24 Bit one?
Oh, and this seems to be a problem only on GF4 class hardware.
My guess is that there might be an error in NV's ChoosePixelFormat algorithm.
And I think that if the WGL_ARB_pixel_format extension were used, the request for a 32 Bit Z-Buffer would simply fail (no matching format), and then the application coder would have seen that he only gets a 16 Bit Z-Buffer.
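Just to illustrate what I mean, a rough, untested sketch (hDC stands for the window's device context, and wglChoosePixelFormatARB is assumed to have been fetched via wglGetProcAddress already):

// With WGL_ARB_pixel_format the depth request is a minimum requirement,
// so asking for 32 Bits on hardware that tops out at 24 should simply
// return zero matching formats instead of silently handing back 16 Bits.
int iAttribs[] = {
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     24,
    WGL_DEPTH_BITS_ARB,     32,   // too much for GF4 class hardware
    0
};
int  iFormat     = 0;
UINT nNumFormats = 0;

if (!wglChoosePixelFormatARB(hDC, iAttribs, NULL, 1, &iFormat, &nNumFormats) || nNumFormats == 0)
{
    // no format with at least 32 depth bits, so the coder notices and can retry with 24
}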

Not a big thing to fix, if this bug really exists, is it?
I can't confirm it myself, because I use a GF3, which doesn't have this problem.

Could some NV guy look into this, or do other people here have similar experiences?

Regards,
Diapolo

If you ask for a 32bit Z buffer in Windows and one isn’t available, it’ll give you a 16bit one instead of a 24bit one.

Nvidia have never supported 32bit Z buffers, and the difference between 24bit and 32bit is minimal. It’s really a bug in Windows, or in the application.

You should check to see if a 32bit Z buffer is available; if not, request 24. Don’t leave it up to Windows to give you something it thinks will do.
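Something along these lines would do it (just a sketch, hDC being your window DC, the rest of the pfd filled in as usual):

PIXELFORMATDESCRIPTOR pfd, actual;
// ... fill in pfd as usual ...
pfd.cDepthBits = 32;                       // what the app would like
int pf = ChoosePixelFormat(hDC, &pfd);
DescribePixelFormat(hDC, pf, sizeof(actual), &actual);
if (actual.cDepthBits < 24)                // Windows picked the 16bit fallback
{
    pfd.cDepthBits = 24;                   // explicitly ask for 24 instead
    pf = ChoosePixelFormat(hDC, &pfd);
}
SetPixelFormat(hDC, pf, &pfd);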

Nutty

The odd part about this is that my programs (or at least the last ones I tested in my pre-GeForce4 days) didn’t have a problem degrading from a 32-bit z-buffer to a 24-bit one. If you asked for 32-bit, the driver would happily give you a 24-bit one. In my later code I specifically request 24-bit Z with 8-bit stencil (for the stencil, of course), so I don’t have a problem with newer software.

I guess something has changed in recent drivers. I’d get into the habit of just requesting the 24/8 buffer instead of the 32.
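In PIXELFORMATDESCRIPTOR terms that just means something like this (a sketch, the remaining fields filled in as usual):

PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd, sizeof(pfd));
pfd.nSize        = sizeof(pfd);
pfd.nVersion     = 1;
pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType   = PFD_TYPE_RGBA;
pfd.cColorBits   = 32;
pfd.cDepthBits   = 24;   // 24-bit Z instead of asking for 32
pfd.cStencilBits = 8;    // 8-bit stencil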

You are both right.
I would request a 24 Bit Z-Buffer with an 8 Bit Stencil Buffer, too.
But it seems that quite a few serious game coders out there didn't do this and now get a 16 Bit Z-Buffer where they wanted a 32 Bit one (and could have gotten a 24 Bit one).
I think one could blame the WGL ChoosePixelFormat function for that, because it defaults to a 16 Bit Z-Buffer on GF4 if a 32 Bit Z-Buffer is requested (but not available).
And another thing: I would always use the WGL_ARB_pixel_format extension.

But what IS strange is the fact that this seems to happen only on GF4 based cards.

So perhaps NV should look into this one and implement a work-around or a fix for it.

I read in a forum that the customers who only PLAY the games get really angry when they see the Z-fighting on a GF4 but not on ATI cards.

Diapolo

If the app doesn’t ask for 24 bits of Z, it won’t get 24 bits of Z. I don’t see how the SS2 issue is anything other than an app bug. Since 16 bits of Z can be faster, we’d rather give an app less bits if it doesn’t think it needs those bits.

  • Matt

Thanks for your reply, Matt.

I think the app should properly check that the Z-Buffer depth of the chosen PF is equal to the requested Z-Buffer depth, and perhaps this check isn't there in SS2.
So from that point of view it's the app's fault.

In other words, you are saying that the NV OGL driver defaults to the lowest available Z-Buffer depth if the requested Z-Buffer depth isn't available:
32 Bits -> 16 Bits
24 Bits -> 24 Bits
16 Bits -> 16 Bits
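
Such a check could be as simple as this after the context is created (a small sketch):

GLint iDepthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &iDepthBits);
if (iDepthBits < 24)
{
    // we got the 16 Bit fallback, so warn the user or
    // recreate the window with an explicit 24/8 request
}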

But then, why does this happen this way only on GF4 class hardware and not on GF3 (like I said, I'm only relaying other users' experiences)?

Any idea?

Diapolo

Originally posted by mcraighead:
If the app doesn’t ask for 24 bits of Z, it won’t get 24 bits of Z. I don’t see how the SS2 issue is anything other than an app bug. Since 16 bits of Z can be faster, we’d rather give an app less bits if it doesn’t think it needs those bits.

  • Matt

That’s bs… If I ask for 32bit I’m obviously interested in the precision and not the speed.

Never forget the wonders of ChoosePixelFormat. You can’t rely on anything when using that.
Long story short, just do your own pixel format evaluation with DescribePixelFormat or the wgl extensions to be sure you’re getting what you’re asking for.
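
For example, a rough sketch of such an evaluation loop (the selection criteria are just an illustration):

int iBest = 0, iBestDepth = 0;
int iCount = DescribePixelFormat(hDC, 1, sizeof(PIXELFORMATDESCRIPTOR), NULL);   // number of available formats
for (int i = 1; i <= iCount; ++i)
{
    PIXELFORMATDESCRIPTOR pfd;
    DescribePixelFormat(hDC, i, sizeof(pfd), &pfd);
    if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL) || !(pfd.dwFlags & PFD_DRAW_TO_WINDOW))
        continue;
    if (pfd.iPixelType != PFD_TYPE_RGBA || pfd.cColorBits < 24)
        continue;
    if (pfd.cDepthBits > iBestDepth)   // prefer the deepest Z buffer on offer
    {
        iBest      = i;
        iBestDepth = pfd.cDepthBits;
    }
}
// iBest now holds a format index with the best available depth precision (or 0 if none matched)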

That’s bs… If I ask for 32bit I’m obviously interested in the precision and not the speed.

If you ask for 32bit and don’t bother checking what you actually get, then you can’t be that bothered really, can you?

If you don’t get 32bit, then ask for 24bit; if you don’t get that, then try 24bit with an 8bit stencil; if you still don’t get it, then resort to 16bit.
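
In code that's only a handful of lines (a sketch, assuming the pfd is already filled in with the colour bits etc.):

// candidate depth/stencil pairs, best first: 32, 24, 24 + 8 stencil, 16
const int aiCandidates[][2] = { {32, 0}, {24, 0}, {24, 8}, {16, 0} };
int pf = 0;
PIXELFORMATDESCRIPTOR chosen;

for (int i = 0; i < 4 && pf == 0; ++i)
{
    pfd.cDepthBits   = (BYTE)aiCandidates[i][0];
    pfd.cStencilBits = (BYTE)aiCandidates[i][1];
    int candidate = ChoosePixelFormat(hDC, &pfd);
    if (candidate == 0)
        continue;
    DescribePixelFormat(hDC, candidate, sizeof(chosen), &chosen);
    if (chosen.cDepthBits >= pfd.cDepthBits)   // only accept it if we really got those bits
        pf = candidate;
}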

It’s not exactly rocket science now is it?

Nutty

Originally posted by Nutty:
If you ask for 32bit and don’t bother checking what you actually get, then you can’t be that bothered really, can you?

If you don’t get 32bit, then ask for 24bit; if you don’t get that, then try 24bit with an 8bit stencil; if you still don’t get it, then resort to 16bit.

It’s not exactly rocket science now is it?

Nutty

Ya, nice try… here is a quote from the MSDN under ChoosePixelFormat:

“If the function succeeds, the return value is a pixel format index (one-based) that is the closest match to the given pixel format descriptor.”

hmmm… ya, I think 24 is closer to 32 than 16 is…

it would be nice if the function worked properly…

The point is that one should get as close as possible to what’s requested. If the card supports 16 and 24, and the app requests 32, it doesn’t make much sense to give it 16.

Yeah, I totally agree with John and Humus. It does NOT make sense that if you request a 32bit Z it gives you 16bit when the hardware cannot support a 32bit Z. To me it’s very obvious that if one specifies 32bits then one wants a high precision Z-buffer.

Yeah, it would be nice if ChoosePixelFormat DID work like it’s supposed to. The best thing for now is of course to check whether what you requested was actually chosen, and if not, specify the next best thing yourself. But really we shouldn’t have to do that.

-SirKnight

Quick question:

Some of the older DX games would give the player a list of supported formats to choose from. Do you think this is a cool feature or too much of a hassle for the average gamer? I’ve never been able to make a decision either way…

Thanks…

John.

Btw: from what I remember you had to choose before playing.


Come on, John has a point.

Making proclamations about the logic of defaulting to fast because the app didn’t explicitly ask for 24 bit, when 32 bit has been requested, doesn’t hold a lot of water.

This stuff has always been a complete mess though.

Matching pixel formats is not always straightforward. There may be some other visual property being requested that makes the 16 bit z visual a ‘closer’ match.

You should all see the gamer’s point of view, too.
If someone sees Serious Sam 2 on his GF4 and wonders about the “Z-fighting artifacts”, and then compares this to SS2 on an ATI Radeon or a GF3 card (where there seems to be a 32 or 24 Bit Z-Buffer), then he thinks his GF4 is broken or the drivers SUCK (which is bad for NV's financials and its credibility with customers).

If this really is the problem, then NV should change the default behavior of the PF choosing algorithm, so that a high precision Z-Buffer is chosen if one is requested (whether it's 32 or 24 Bits doesn't matter, only “high” precision).

Have any of you observed Z-fighting in OGL games on a GF4, but not on other cards, where there should not be any Z-fighting?

Like I said, I can't prove it to be a GF4-only problem, but I really would like to track this “bug” / “default behavior of the driver” down and then talk about a possible solution.

Regards,
Diapolo

This is probably off topic, but has anyone noticed this in the PIXELFORMATDESCRIPTOR specs:

cColorBits
Specifies the number of color bitplanes in each color buffer. For RGBA pixel types, it is the size of the color buffer, excluding the alpha bitplanes. For color-index pixels, it is the size of the color-index buffer.

So according to this we should request 24 bits for RGB and 8 bits for the cAlphaBits parameter to get 32 bit RGBA color, but we all use 32 for the cColorBits parameter.
This is from the WGL_ARB_pixel_format spec:

WGL_COLOR_BITS_ARB
The number of color bitplanes in each color buffer. For RGBA pixel types, it is the size of the color buffer, excluding the alpha bitplanes. For color-index pixels, it is the size of the color index buffer.

Same thing.
Am I reading the spec right, or what?

Thanks.

This is my code for your question and it works :).

// WGL_COLOR_BITS_ARB is requested WITHOUT the alpha planes, as the spec says,
// and the alpha is requested separately via WGL_ALPHA_BITS_ARB.
iAttributes[10] = WGL_COLOR_BITS_ARB;
iAttributes[11] = GetCurColorDepth() == 32 ? 24 : 16;   // 24 Bit RGB on a 32 Bit desktop
iAttributes[12] = WGL_ALPHA_BITS_ARB;
iAttributes[13] = GetCurColorDepth() == 32 ? 8 : 0;     // plus 8 Bit alpha
iAttributes[14] = WGL_DEPTH_BITS_ARB;
iAttributes[15] = GetCurColorDepth() == 32 ? 24 : 16;   // 24 Bit Z-Buffer
iAttributes[16] = WGL_STENCIL_BITS_ARB;
iAttributes[17] = GetCurColorDepth() == 32 ? 8 : 0;     // and 8 Bit stencil

Diapolo

Ya, I read it the same way. I’m not near my dev PC, but I swear I get 32 back for color bits. That’s strange…

If you are talking about glGetIntegerv(GL_INDEX_BITS, &iIndexBits); I guess you get 32 because of 24 Bit RGB and 8 Bit alpha.
And maybe because of that it's called GL_INDEX_BITS and not GL_COLOR_BITS?
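
To see what the context really got, one could query the single channels (a quick sketch):

GLint iRed, iGreen, iBlue, iAlpha;
glGetIntegerv(GL_RED_BITS,   &iRed);
glGetIntegerv(GL_GREEN_BITS, &iGreen);
glGetIntegerv(GL_BLUE_BITS,  &iBlue);
glGetIntegerv(GL_ALPHA_BITS, &iAlpha);
// on a 32 Bit desktop this should come out as 8 / 8 / 8 / 8, so 24 Bit RGB plus 8 Bit alpha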

Diapolo

BTW: back to the real topic, please :wink: *g*.

No, I don’t use color index… I’ll look at it when I get home and see if I’m crazy (highly possible)…