Nvidia GeForce3-4 driver problem!

I apologize if I’m OT.

A new release of Red Hat was recently installed on my Linux machine, and the driver for the GeForce3 was updated to the newest one.

Now, when I run my program, I get incorrect rendering whenever I use blending.

I’m sure it’s not a problem in my code, because my laptop has a GeForce2 and the same program renders correctly there!

Two images that show the kind of problem I have are on this page: http://digilander.libero.it/SimonaTassinari/nvidia.htm

Does anybody have an idea how to solve the problem?
I checked, and the GeForce4 driver has the same problem, but I don’t know which driver version was the last one that worked correctly!

Thank you all,
Remedios

It’s difficult to tell what is wrong from your images, since it’s all fairly abstract. The differences in the images could be explained by the lack of depth buffering on the second ‘wrong’ image.

This may differ between cards because of the way you choose a visual and the visuals available on the graphics cards you are testing on. Try a depth buffer size of “1” in your attribute list; that should give you the largest depth buffer available given your other choices.
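
On GLX that would look something like this (a minimal sketch; the colour and alpha sizes are assumptions about what your program actually needs):

[code]
/* Choose a GLX visual with the largest available depth buffer.
 * GLX_DEPTH_SIZE is a minimum: with a nonzero value, glXChooseVisual
 * prefers the deepest buffer compatible with the other attributes. */
#include <GL/glx.h>
#include <stdio.h>

static XVisualInfo *choose_visual(Display *dpy)
{
    int attribs[] = {
        GLX_RGBA,
        GLX_DOUBLEBUFFER,
        GLX_RED_SIZE,   1,
        GLX_GREEN_SIZE, 1,
        GLX_BLUE_SIZE,  1,
        GLX_ALPHA_SIZE, 1,   /* only if blending needs destination alpha */
        GLX_DEPTH_SIZE, 1,   /* "at least 1 bit" -> deepest available */
        None
    };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi)
        fprintf(stderr, "no matching visual\n");
    return vi;
}
[/code]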

I looked at the images, and there’s not a whole lot to go on.

Along the lines of dorbie’s suggestion, the Microsoft ChoosePixelFormat implementation has a bug where if you ask for a 32-bit depth buffer on a GeForce4, you will actually get a 16-bit depth buffer. (GeForce4 supports both 16- and 24-bit depth buffers).

I have “similar” GeForce driver problems too, occurring when using display lists containing tessellated polygons AND

  • lighting = disabled AND
  • clipping planes used

(see my topic “GeForce3-Driver Bug ? since 4072 when Clipping and Tesselation” from 01-10-2003).

You could try checking/changing these conditions and see whether the bug still occurs.
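
For example, something along these lines could toggle the two states around the display-list call (a hypothetical test; scene_list stands in for your own display list):

[code]
/* Hypothetical A/B test: draw the same display list with the suspect
 * state combination toggled, to see which condition triggers the bug. */
#include <GL/gl.h>

void draw_test(GLuint scene_list, int workaround)
{
    if (workaround) {
        glEnable(GL_LIGHTING);      /* bug reported only with lighting off */
        glDisable(GL_CLIP_PLANE0);  /* ...and with user clip planes on */
    } else {
        glDisable(GL_LIGHTING);
        glEnable(GL_CLIP_PLANE0);
    }
    glCallList(scene_list);
}
[/code]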

If you want I can send you a small sample application.

Ravo

Originally posted by pbrown:
[b]I looked at the images, and there’s not a whole lot to go on.

Along the lines of dorbie’s suggestion, the Microsoft ChoosePixelFormat implementation has a bug where if you ask for a 32-bit depth buffer on a GeForce4, you will actually get a 16-bit depth buffer. (GeForce4 supports both 16- and 24-bit depth buffers).[/b]

Pbrown, why is it a Microsoft bug if it only happens with a certain NVIDIA chipset?

Because this specific NVIDIA chipset supports more depth buffer sizes than previous chipsets did. The error lies in application writers not checking the size of the returned depth buffer and assuming that asking for 32 bits will just work.
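
In practice that means verifying what ChoosePixelFormat actually handed back, roughly like this (a sketch with a hypothetical setup_pixel_format helper; it assumes you already have a window-compatible device context hdc):

[code]
/* Sketch: request a deep Z buffer but verify what ChoosePixelFormat
 * actually returned before accepting it. */
#include <windows.h>
#include <stdio.h>

int setup_pixel_format(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 32;              /* the problematic request */

    int fmt = ChoosePixelFormat(hdc, &pfd);
    if (fmt == 0)
        return 0;

    /* See what the chosen format really provides. */
    DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd);
    if (pfd.cDepthBits < 24)
        fprintf(stderr, "warning: only got a %d-bit depth buffer\n",
                pfd.cDepthBits);

    return SetPixelFormat(hdc, fmt, &pfd) ? fmt : 0;
}
[/code]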

On the GF4, if you ask for 32-bit Z you will get 16, not 24. This is a bug in the M$ OS not picking the nearest-sized Z buffer, but it could be avoided if people wrote their applications properly in the first place.

Nutty

Originally posted by Nutty:
[b]On the GF4, if you ask for 32-bit Z you will get 16, not 24. This is a bug in the M$ OS not picking the nearest-sized Z buffer, but it could be avoided if people wrote their applications properly in the first place.[/b]

The MS OS picks the right one on earlier chipsets. Is this explainable?

On old NVIDIA chips, you couldn’t mix and match color and depth buffers. So if you had 16-bit color, you could ONLY have 16-bit Z. Similarly, if you had 32-bit RGBA color, you could only have 24-bit Z + 8-bit stencil. On some other NVIDIA chips, we chose not to export 16-bit Z buffers because there weren’t any significant advantages, even in performance. I’m not sure if there’s any difference on Quadro vs. GeForce.

In the ChoosePixelFormat interface on Windows, the MS DLL makes a bunch of calls to the driver to get info on the pixel formats it supports. After getting that information from us, it chooses a format.

I don’t think there’s anything wrong with apps asking for 32-bit Z per se; a reasonable response from ChoosePixelFormat would be either (a) no matching formats, because 32-bit Z isn’t supported, or (b) picking 24-bit Z as the most appropriate alternative. If an app asked for 32-bit Z and ChoosePixelFormat were to fail, the app would need to handle that fallback itself. I can’t remember how GLX worked (my last GLX work was with IBM in 1998), but I think its equivalent of ChoosePixelFormat might return a failure in this case.
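
A hypothetical fallback along those lines (a sketch only; base stands for a descriptor with everything except the depth already filled in by the caller):

[code]
#include <windows.h>

/* Sketch of the fallback described above: if the requested depth can't
 * be matched, or the driver hands back something smaller, retry with a
 * progressively smaller request. */
int choose_depth_with_fallback(HDC hdc, PIXELFORMATDESCRIPTOR base)
{
    static const BYTE depths[] = { 32, 24, 16 };
    int i;

    for (i = 0; i < (int)(sizeof(depths) / sizeof(depths[0])); i++) {
        PIXELFORMATDESCRIPTOR pfd = base;
        pfd.cDepthBits = depths[i];

        int fmt = ChoosePixelFormat(hdc, &pfd);
        if (fmt == 0)
            continue;                /* no match at all; try a smaller Z */

        DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd);
        if (pfd.cDepthBits >= depths[i] || depths[i] == 16)
            return fmt;              /* got at least what we asked for */
    }
    return 0;                        /* nothing acceptable found */
}
[/code]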

Unfortunately, the API is not that well-specified, and the Windows ChoosePixelFormat call has an issue where it obviously chooses the wrong format.

Understood. Thanks for the explanation.