quality / standard conformance of GL implementations

Hi,

I would like to hear some stories about the quality / stability / standard conformance of different vendors' GL implementations. Are there any official or unofficial conformance tests or reference software rasterizers?

I have often heard that NVIDIA does a good job (which I can confirm in general).

I heard that ATI has been (or is) not as good.

What about all the current Intel hardware (which owns a big part of the low-end market)?

– edit –

Oops, why did I forget 3dlabs, sorry guys :wink: Any stories about the Wildcats?

– end edit –

I don’t want to see flames, just pure facts (with concrete examples / proofs), please.

Please spell: n o f l a m e s

I recently saw strange behaviour with nvidia’s texture upload.

Performance can vary by a factor of 4, 10, or even 100, depending on the dimensions of a 2D texture.

If an m x n texture uploads quickly, there is no guarantee that n x m will work equally well.

In general, you should choose w > h for texture uploads. Is there any explanation for this?

Another example: I saw a 1023 x 64 2D texture perform 4 times faster than a 1024 x 64 one when used in a simple shader. That's funny, isn't it?
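If you want to reproduce this, something like the following rough timing loop is what I mean by measuring. Only a sketch: the sizes, iteration count and the crude clock() timer are placeholders, and on Windows the platform headers have to come before GL/gl.h.

/* rough micro-benchmark for comparing upload speed of different texture
   dimensions; assumes a current GL context and enough iterations to make
   the crude clock() timer meaningful
   (on Windows, include <windows.h> before <GL/gl.h>) */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <GL/gl.h>

static double time_upload(GLuint tex, int w, int h, int iterations)
{
    GLubyte *pixels = (GLubyte *)malloc((size_t)w * h * 4);
    clock_t start, end;
    int i;

    glBindTexture(GL_TEXTURE_2D, tex);
    /* allocate storage once; only the sub-image uploads are timed */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glFinish();                      /* drain pending work first */
    start = clock();
    for (i = 0; i < iterations; ++i)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glFinish();                      /* make sure the uploads completed */
    end = clock();

    free(pixels);
    return (double)(end - start) / CLOCKS_PER_SEC;
}

void compare_sizes(GLuint tex)
{
    /* e.g. the cases mentioned above */
    printf("1024 x 64:  %.3f s\n", time_upload(tex, 1024, 64, 500));
    printf("1023 x 64:  %.3f s\n", time_upload(tex, 1023, 64, 500));
    printf("  64 x 1024: %.3f s\n", time_upload(tex, 64, 1024, 500));
}

Swapping GL_RGBA for another external format, or changing the internal format, is itself one of the variables worth measuring.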

From my experience: I never had a crash while developing with Nvidia cards (or at least I cannot remember one). I quite often had crashes developing with ATI drivers - but that might come from an incompatibility with my nForce chipset :frowning:
PS: but today's ATI AGP cards like the 9550/9600 have no competition yet in my country at a price < 100 euros

Originally posted by tfpsly:
From my experience: I never had a crash while developing with Nvidia cards (or at least I cannot remember one).

You want to see a crash on nvidia? Just use pixel buffer objects with mapped memory. A wrong buffer size will give you an instant bluescreen.
:wink:

(Geforce 6800u, release 60 driver, windows xp)
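To be clear about what I mean by "wrong buffer size": a well-behaved PBO upload looks roughly like the sketch below (assuming GL 1.5 buffer objects plus the pixel buffer object extension, with GLEW or a similar loader providing the entry points). The bluescreen showed up when the size given to glBufferData did not match what was actually written through the mapping / read by glTexSubImage2D.

/* sketch of a PBO texture upload where the buffer size matches the data
   exactly; pbo is an existing buffer object, src is a w x h RGBA8 image,
   and the target texture is already bound to GL_TEXTURE_2D */
#include <string.h>
#include <GL/glew.h>

void upload_via_pbo(GLuint pbo, const void *src, int w, int h)
{
    const GLsizeiptr size = (GLsizeiptr)w * h * 4;   /* RGBA, 4 bytes/texel */
    void *dst;

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
    /* allocate exactly `size` bytes of buffer storage */
    glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, size, NULL, GL_STREAM_DRAW);

    dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, src, (size_t)size);   /* never write past `size` */
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

        /* with a PBO bound, the pixel pointer is an offset into the buffer */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)0);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}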

My experience with Nvidia drivers: I've never experienced a crash (that wasn't my fault for doing something like passing a bad vertex array pointer). The drivers have been extremely robust, especially in handling corner cases, and have almost always worked correctly the first time I've tried a new feature. On the rare occasions that I have run into a (usually obscure) bug in the driver, Nvidia has been all over it and gotten a fix to me within days.

My experience with ATI drivers: I can't get them to stop crashing. And when they crash, boy do they crash. When your monitor turns off or your computer spontaneously reboots, you know you're not working with the highest quality software. I have also experienced several major problems with features not being implemented correctly (don't get me started on point sprites). And when you report a bug to ATI, the response is usually something along the lines of "It's probably you, not us," at least until a couple of months later when they finally admit it's a driver issue. Wait 3-4 more months and you might get a fix, and if you're lucky that fix might actually work right.

Originally posted by mlb:
Originally posted by tfpsly:
From my experience: I never had a crash while developing with Nvidia cards (or at least I cannot remember one).

You want to see a crash on nvidia? Just use pixel buffer objects with mapped memory. A wrong buffer size will give you an instant bluescreen.
:wink:

(Geforce 6800u, release 60 driver, windows xp)

Impossible: I first code under Linux and then I test my code under Windows :smiley:

ATI: Sure, ATI may have had poor drivers in the past, but it's been a long time since I've had many problems with valid programs not running correctly (except for one current bug that always gets in my way). I haven't seen a single bluescreen since VPU Recover was introduced (Edit: and it has only kicked in twice) and everything usually works as expected.
Incorrect programs report OpenGL errors correctly or, at worst, crash the application.

nVidia: None of the current drivers (66.93, 67.66 and 71.40) can run all of my programs without crashing.
66.93 and 67.66 reliably produce bluescreens in a perfectly valid program. 66.93 and 71.40 crash another valid program when DualView is active. Other programs (including nVidia-supplied demos) run quietly but produce incorrect results when DualView is active.
Texture transfer speeds are unpredictable - a video player works correctly but too slowly to actually be usable.
Incorrect programs often crash instead of reporting OpenGL errors and sometimes produce bluescreens.

That VPU Recover dialog made me laugh when I first saw it… too hard to fix the bugs in the driver, but we can detect when we've locked up the card and send it a reset signal - problem solved.
Personally, I've had nothing but trouble with the nvidia 6800. I've tried everything; the thing seems to go tits up whenever I try to create a float render target, for instance. Tits up = resets my machine.
So, in my current experience, the only people with stable drivers at the moment are 3dlabs.

Originally posted by Daniel Wesslen:
nVidia: …
Texture transfer speeds are unpredictable - a video player works correctly but too slowly to actually be usable.

That's my experience, too. Texture transfer speed is highly unpredictable. You have to measure every special case in order to make sure you get good speed. It seems that the driver does lots of weird things internally (that we cannot see or debug), e.g. workarounds for broken games or optimizations for specific applications. That's the disadvantage of a closed driver/hardware architecture.


Incorrect programs often crash instead of reporting OpenGL errors and sometimes produce bluescreens.

I've only seen a bluescreen after I've done something more or less evil - like specifying wrong buffer size values when using PBOs.

Originally posted by knackered:

Personally, I've had nothing but trouble with the nvidia 6800. I've tried everything; the thing seems to go tits up whenever I try to create a float render target, for instance. Tits up = resets my machine.
So, in my current experience, the only people with stable drivers at the moment are 3dlabs.

I’ve used floating point render targets without any problems. (win xp, 6800u, release 65 drivers)

Can anybody else confirm good quality for 3dlabs' drivers? I would be very interested in numbers for transfer rates, e.g. texture uploads, readback, shader performance, etc.
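For readback, the kind of loop I have in mind is roughly this (again only a sketch with a crude clock() timer; the GL_RGBA / GL_UNSIGNED_BYTE combination is itself one of the variables worth measuring, since format conversions can dominate the result):

/* rough read-back (glReadPixels) timing; assumes a current context with a
   w x h drawable and enough iterations to make clock() meaningful;
   returns an approximate transfer rate in MB/s */
#include <stdlib.h>
#include <time.h>
#include <GL/gl.h>

double time_readback(int w, int h, int iterations)
{
    GLubyte *pixels = (GLubyte *)malloc((size_t)w * h * 4);
    clock_t start, end;
    int i;

    glFinish();
    start = clock();
    for (i = 0; i < iterations; ++i)
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glFinish();
    end = clock();

    free(pixels);
    /* bytes transferred / elapsed seconds, in MB/s */
    return ((double)w * h * 4 * iterations / (1024.0 * 1024.0)) /
           ((double)(end - start) / CLOCKS_PER_SEC);
}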

Are there any official / public conformance tests or reference software rasterizers for GL?

3dlabs has a GLSL conformance tool that used to report lots of "bugs" in nvidia's GLSL implementation.

Transfer rates are altogether better on 3DLabs (I'm talking about the Wildcat Realizm 200 vs. the 6800), but I've had major problems getting it not to reboot my machine, hang and/or crash.

Multisampling doesn't exist in a pbuffer context, first of all - which, for me, sucks.
Plus, I can make that card reboot my machine faster than you can say: "Trash this pbuffer and give me a new one".

mlb, what you’re talking about is glslValidate. nVidia drivers will apparently do some auto-casting or something which allows some shaders that aren’t written correctly to run. I wouldn’t call that a driver bug, but IMO they should have just made the shader fail to compile, because they aren’t letting the developer know they’re doing something wrong. I found this out the hard way when I switched from an nVidia board to a 3DLabs one during a brief stint.
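The least you can do on the lenient drivers is check the compile status and print the info log yourself - a minimal sketch (GL 2.0 entry points, GLEW or a similar loader assumed); some compilers only warn about these constructs, so the log is worth printing even when compilation succeeds:

/* compile a shader and report the compile status plus the info log */
#include <stdio.h>
#include <GL/glew.h>

GLuint compile_shader(GLenum type, const GLchar *source)
{
    GLuint shader = glCreateShader(type);
    GLint status = GL_FALSE;
    char log[4096];
    GLsizei len = 0;

    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    glGetShaderInfoLog(shader, sizeof(log), &len, log);
    if (len > 0)
        fprintf(stderr, "shader info log:\n%s\n", log);   /* warnings too */

    if (status != GL_TRUE) {
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}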

thanks, Aeluned,

Nice to hear that transfer rates are better on the Realizm 200 compared to the 6800 - could you give any numbers?

Unfortunately, we see a very unstable GL implementation, hmmmm. Stable pbuffers would be really useful. But let's hope we'll see stable FBOs on the Wildcats… this would make pbuffers largely redundant.
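For reference, the pbuffer-free path would look roughly like this (only a sketch assuming EXT_framebuffer_object and ARB_texture_float, with GLEW or a similar loader for the entry points; the completeness check is exactly where an implementation tells you whether it supports the format/attachment combination):

/* sketch: floating-point render target via EXT_framebuffer_object,
   replacing a render-to-texture pbuffer; format and filtering choices
   are illustrative */
#include <GL/glew.h>

GLuint create_float_target(int w, int h, GLuint *tex_out)
{
    GLuint fbo, tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, w, h, 0,
                 GL_RGBA, GL_FLOAT, NULL);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
        GL_FRAMEBUFFER_COMPLETE_EXT) {
        /* the implementation does not support this combination */
        glDeleteFramebuffersEXT(1, &fbo);
        glDeleteTextures(1, &tex);
        return 0;
    }

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    *tex_out = tex;
    return fbo;
}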

> mlb, what you’re talking about is glslValidate. nVidia drivers will apparently do some auto-casting or something which allows some shaders that aren’t written correctly to run. I wouldn’t call that a driver bug, but IMO they should have just made the shader fail to compile, because they aren’t letting the developer know they’re doing something wrong.

You are perfectly right: invalid code should DEFINITELY fail to compile (nvidia, listen!!!). This would force developers to write conformant code (which in fact is very easy, isn't it?).

>I found this out the hard way when I switched from an nVidia board to a 3DLabs one during a brief stint

:slight_smile: So you tried to switch to 3dlabs but only found pbuffer problems and lots of other problems? Then I won't have to try it for myself…

Does anybody have good news about 3dlabs? (Apart from the fact that the GLSL compiler follows the GLSL standard more strictly and that transfer rates are good (?) compared to the 6800.)

Hi,

In my case, I was developing a demo 1.5 years ago and came across a crash in Detonator 44.03 with WGL_ARB_render_texture, whereas the same effect worked fine on a Radeon Mobility. I tried rendering the scene to the normal backbuffer and it was fine - no error code returned - but when rendering to a texture, it just crashed the drivers. Weird.

I think it's the only driver crash I have encountered in my little programmer life; most others were due to me, I think… :wink:

regards,

Originally posted by Aeluned:
Multisampling doesn't exist in a pbuffer context, first of all - which, for me, sucks.
I haven’t tried it yet but I think that this problem was solved in the driver released last month (4.04.0677).

Plus, I can make that card reboot my machine faster than you can say: "Trash this pbuffer and give me a new one".
Wow, I've had this card since October and I never had a crash. Before that I was using a Radeon 9800XT and it was a painful experience.

One thing that I’ve experienced is that invalid programs were still working on the Radeon while nVidia and 3DLabs are less forgiving.

> One thing that I’ve experienced is that invalid programs were still working on the Radeon while nVidia and 3DLabs are less forgiving.

Hmmm, invalid programs that still work - I would call that voodoo. Did these programs ignore error flags, or did ATI's GL fail to detect and set the error flag? One thing I would request from any vendor is correct error behaviour - as defined in the GL specification. If code does something bad, an error status should definitely be set. If broken code works nevertheless, I don't mind, but it SHOULD set an error. When an application ignores the error, that's the problem of that application and not a problem of the GL implementation.

It’s the same problem as ignoring bad GLSL code. This encourages bad code and makes switching to other hardware impossible. An error is an error – we shouldn’t ignore it!
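In that spirit, a trivial helper like the sketch below (the name and the GLU dependency for error strings are just illustration) is all an application needs in order to do its part - sprinkle it after suspect calls, at least in debug builds:

/* drain and report all pending GL errors; the spec allows several error
   flags, so loop until GL_NO_ERROR */
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glu.h>   /* only for gluErrorString */

void check_gl_error(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error at %s: %s (0x%04x)\n",
                where, (const char *)gluErrorString(err), err);
}

/* usage:
     glTexImage2D(...);
     check_gl_error("glTexImage2D");   */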

My general experience is:

Nvidia: Stable-ish, but don't use them for nuclear reactor GUIs (i.e. I have had them crash). But extensions are generally reliable, even if you use them in weird combinations that not many people have used before.

ATI: Develop on this board so you can find the bugs early and work around them, then a few days before the deadline quickly test on nvidia to confirm that it works.
The ATI fragment shader compiler seems a bit tricky, especially with reordering instructions or forgetting instruction dependencies when optimizing.

On the texture upload speed thing, I think you are confusing a "conformant" driver with a fast driver (or at least one with predictable performance behaviour). A driver may be perfectly conformant while running at 1 frame per minute. It just has to follow the spec…

Charles

The Intel Windows drivers don't seem to be very good.
They even disabled acceleration for screensavers (the driver checks whether it's called from an .exe or a .scr) because people kept complaining about screensavers crashing their machines.

On the other hand, Intel has given all the necessary information to free driver developers, so the Linux drivers are among the best.

GLslang isn’t supported by the drivers even though the hardware could do it.

Originally posted by Pentagram:
ATI: Develop on this board so you can find the bugs early and work around them, then a few days before the deadline quickly test on nvidia to confirm that it works.
When you say 'develop on this board' I assume you don't mean run your development environment on it. In my experience, ATI drivers crash while doing simple GUI drawing, such as scrolling a source code view in Visual Studio… which isn't very nice, seeing as Visual Studio has a habit of trashing your source code occasionally on system resets or blue screens. ATI and Microsoft working together are a developer's worst enemy.

> On the texture upload speed thing, I think you are confusing a "conformant" driver with a fast driver (or at least one with predictable performance behaviour). A driver may be perfectly conformant while running at 1 frame per minute. It just has to follow the spec…

You are perfectly right. A "conformant" driver is not the same as a "fast" driver. I don't think I've confused the two.

But what about "quality"? In my opinion, predictable performance is an important sign of quality. By that criterion, NVIDIA has low quality (despite its good stability). I have the impression that we have two kinds of problematic behaviour: NVIDIA is relatively good at stability and conformance, but not so good at predictability and performance. ATI has high performance but lower stability and conformance.

To conclude, ATI is better for non-production applications and gaming. NVIDIA seems to be good for stable production usage, but they have to improve their predictability. Both do well for games, but that's definitely not the main application area of GL.

Another topic: what about the quality of developer support? Which vendor is good for game developers, and which is better for medical or other scientific or commercial apps? And what about the Wildcats - do they have better support for non-gaming customers? I would suppose so, since they concentrate on the workstation market. Wildcat users, tell us your opinions, please!