Linux 2.6 nVidia's Z-buffer

Hi,

I have a program that draws a 3D scene (school work), and I have two machines (a desktop [AMD] and a laptop [iBook]), both running Debian Sid.

When I compile and test the EXACT same code, everything works fine on the laptop (ATI Radeon M6, with dri-trunk), but it doesn’t display correctly on the desktop machine (nVidia GeForce 2MX-400).

If I’m not mistaken, there must be something wrong with the depth test, because if I switch the object drawing order, the last object drawn is displayed on top of the previous ones (whether it is farther away or nearer).

The worst thing is that I can’t locate the guilty component: is it the nVidia driver? The patch for the nVidia driver (for 2.6 kernels)? The Mesa lib? The X server? The hardware? The kernel? Where is the z-buffer controlled?

P.S.: And to make it even funnier, there’s no problem at all with 3D games such as Quake3.

If you want more information, please ask.

Thanks for your time.

The difference comes from the fact that if you don’t explicitly request a z-buffer precision (16 or 24 bits), the driver decides. You probably get 16 bits by default on nVidia and 24 bits on ATI.

Specifying it has to do with GLX stuff, but I am not an X programming expert. Or you can specify it through GLUT parameters too.
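
I think the GLX side looks roughly like this (an untested sketch from memory, so take it with a grain of salt; the attribute list with GLX_DEPTH_SIZE is the important part):

```c
/* Sketch only: request a visual with at least a 24-bit depth buffer via GLX.
   Window and context creation are omitted. */
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    int attribs[] = {
        GLX_RGBA,
        GLX_DOUBLEBUFFER,
        GLX_DEPTH_SIZE, 24,   /* ask for 24 depth bits instead of the driver default */
        None
    };

    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) {
        fprintf(stderr, "No visual with a 24-bit depth buffer found\n");
        return 1;
    }

    printf("Got visual 0x%lx\n", vi->visualid);
    /* ...create the window and GLX context from this visual... */
    return 0;
}
```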

Originally posted by ZbuffeR:
Or you can specify it trough glut parameters too.

What’s the function to do that?

Thanks

Originally posted by C2H5OH:
What’s the function to do that?

I found it. It’s done by providing a flag to glutInitDisplayMode() or a string to glutInitDisplayString().
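
For anyone who finds this later, something like the sketch below is what I ended up with (a minimal GLUT skeleton; display() just stands in for the real drawing code):

```c
/* Minimal sketch: explicitly request a depth buffer through GLUT. */
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ...draw the scene here... */
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);

    /* Either ask for a depth buffer with the classic flag... */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);

    /* ...or be explicit about the precision with a display string
       (supported by GLUT 3.7 / freeglut): */
    /* glutInitDisplayString("rgb double depth>=24"); */

    glutCreateWindow("depth buffer test");

    glEnable(GL_DEPTH_TEST);   /* the depth test still has to be enabled */

    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```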

Thanks for the advice. Now it works :smiley:
