quality / standard conformance of GL implementations



michael.bauer
02-06-2005, 12:43 PM
Hi,

I would like to hear some stories about the quality / stability / standard conformance of different vendors' GL implementations. Are there any official or unofficial conformance tests or reference software rasterizers?

I have often heard that NVIDIA does a good job (which I can confirm in general).

I have heard that ATI has been (or still is) not as good.

What about the current Intel hardware (which owns a big part of the low-end market)?

-- edit --

Oops, why did I forget 3Dlabs? Sorry guys ;-) Any stories about the Wildcats?

-- end edit --

I don't want to see flames, just pure facts (with concrete examples / proofs), please.

Please spell: n o f l a m e s

michael.bauer
02-06-2005, 12:51 PM
I recently saw strange behaviour with NVIDIA's texture upload.

The performance can vary by a factor of 4, 10, or even 100 depending on the size of a 2D texture.

If an m x n texture uploads quickly, there is no guarantee that n x m will work equally well.

In general, it seems you should choose width > height for texture uploads. Is there any explanation for this?

Another example: I saw a 1023 x 64 2D texture being 4 times faster than a 1024 x 64 one when used in a simple shader. That's funny, isn't it?
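
For reference, this is roughly how I measure it. Just a sketch, not production code: it assumes a current GL context, NPOT texture support for the 1023-wide case, and uses clock() for rough timing.

#include <GL/gl.h>
#include <cstdio>
#include <ctime>
#include <vector>

// Rough upload timing for one texture size. Allocation happens outside the
// timed loop; glFinish() makes sure the uploads have really completed.
double timeUpload(int w, int h, int iterations)
{
    std::vector<unsigned char> pixels(w * h * 4, 0);   // dummy RGBA8 data

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
    glFinish();

    std::clock_t start = std::clock();
    for (int i = 0; i < iterations; ++i)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
    glFinish();
    double seconds = double(std::clock() - start) / CLOCKS_PER_SEC;

    glDeleteTextures(1, &tex);
    return seconds;
}

// Usage: compare m x n against n x m, or 1023 x 64 against 1024 x 64.
//   std::printf("512x256: %.3f s\n", timeUpload(512, 256, 200));
//   std::printf("256x512: %.3f s\n", timeUpload(256, 512, 200));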

tfpsly
02-06-2005, 10:29 PM
From my experience: I never had a crash while developing with NVIDIA cards (or at least I cannot remember one). I quite often had crashes developing with ATI drivers, but that might come from an incompatibility with my nForce chipset :(
PS: but today's ATI AGP cards like the 9550/9600 have no competition yet in my country at a price under 100 euros.

michael.bauer
02-07-2005, 10:16 AM
Originally posted by tfpsly:
From my experience: I never had a crash while developing with NVIDIA cards (or at least I cannot remember one).
If you want to see a crash on NVIDIA: just use pixel buffer objects with mapped memory. A wrong buffer size will give you an instant bluescreen.
;)

(GeForce 6800U, Release 60 driver, Windows XP)
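
For the record, this is the kind of code path I mean. A trimmed-down sketch, not my real code: it assumes the buffer object entry points are loaded through an extension loader and that a suitable 2D texture is bound.

#include <GL/gl.h>
#include <GL/glext.h>
#include <cstring>

// Upload w x h RGBA8 pixels to the currently bound 2D texture through a PBO.
// The important part: the size passed to glBufferDataARB and the amount of
// data written through the mapped pointer have to agree.
void uploadViaPBO(int w, int h, const unsigned char* pixels)
{
    const GLsizeiptrARB size = w * h * 4;   // RGBA8; write exactly this much

    GLuint pbo;
    glGenBuffersARB(1, &pbo);
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
    glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, size, 0, GL_STREAM_DRAW_ARB);

    if (void* dst = glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB))
    {
        // Writing past 'size' bytes here is exactly the mistake that gave me
        // the instant bluescreen; the driver does not protect you from it.
        std::memcpy(dst, pixels, size);
        glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);
    }

    // With a PBO bound, the last parameter is an offset into the buffer.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid*)0);

    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
    glDeleteBuffersARB(1, &pbo);
}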

Eric Lengyel
02-07-2005, 10:21 PM
My experience with Nvidia drivers: I've never experienced a crash (that wasn't my fault for doing something like passing a bad vertex array pointer). The drivers have been extremely robust, especially in handling corner cases, and have almost always worked correctly the first time I've tried a new feature. In the very rare occurrence that I have run into a (usually obscure) bug in the driver, Nvidia has been all over it and gotten a fix to me within days.

My experience with ATI drivers: I can't get them to stop crashing. And when they crash, boy do they crash. When your monitor turns off or your computer spontaneously reboots, you know you're not working with the highest quality software. I have also experienced several major problems with features not being implemented correctly (don't get me started on point sprites). And when you report a bug to ATI, the response is usually something along the lines of "It's probably you, not us," at least until a couple of months later when they finally admit to it being a driver issue. Wait 3-4 more months and you might get a fix, and if you're lucky that fix might actually work right.

tfpsly
02-07-2005, 10:30 PM
Originally posted by mlb:

Originally posted by tfpsly:
From my experience: I never had a crash while developing with NVIDIA cards (or at least I cannot remember one).
If you want to see a crash on NVIDIA: just use pixel buffer objects with mapped memory. A wrong buffer size will give you an instant bluescreen.
;)

(GeForce 6800U, Release 60 driver, Windows XP)
Impossible: I first code under Linux and then I test my code under Windows :D

Daniel Wesslen
02-07-2005, 11:15 PM
ATI: Sure, ATI may have had poor drivers in the past, but it's been a long time since I've had many problems with valid programs not running correctly (except for one current bug that always gets in my way). I haven't seen a single bluescreen since VPU Recover was introduced (edit: and it has only kicked in twice), and everything usually works as expected.
Incorrect programs report OpenGL errors correctly and, at worst, crash the application.

nVidia: None of the current drivers (66.93, 67.66 and 71.40) can run all of my programs without crashing.
66.93 and 67.66 reliably produce bluescreens in a perfectly valid program. 66.93 and 71.40 crash another valid program when DualView is active. Other programs (including nVidia-supplied demos) run quietly but produce incorrect results when DualView is active.
Texture transfer speeds are unpredictable: a video player works correctly, but too slowly to actually be usable.
Incorrect programs often crash instead of reporting OpenGL errors, and sometimes produce bluescreens.

knackered
02-07-2005, 11:39 PM
That VPU Recover dialog made me laugh when I first saw it...too hard to fix the bugs in the driver, but we can detect when we've locked up the card, and we can send it a reset signal - problem solved.
Personally, I've had nothing but trouble with the nvidia 6800, I've tried everything, the thing seems to go tits up whenever I try to create a float render target, for instance. Tits up = resets my machine.
So, my current experience is the only people with stable drivers at the moment are 3dlabs.

michael.bauer
02-08-2005, 05:46 AM
Originally posted by Daniel Wesslen:
nVidia: ...
Texture transfer speeds are unpredictable - a video player works correctly but too slowly to actually be usable.
That's my experience too. Texture transfer speed is highly unpredictable. You have to measure every special case in order to make sure you get good speed. It seems that the driver does lots of weird things internally (that we cannot see or debug), e.g. lots of special cases that are workarounds for broken games or optimizations for particular applications. That's the disadvantage of a closed driver/hardware architecture.



Incorrect programs often crash instead of reporting OpenGL errors, and sometimes produce bluescreens.
I've only seen a bluescreen after I've done something more or less evil, like specifying wrong buffer size values when using PBOs.

michael.bauer
02-08-2005, 06:33 AM
Originally posted by knackered:

Personally, I've had nothing but trouble with the nvidia 6800, I've tried everything, the thing seems to go tits up whenever I try to create a float render target, for instance. Tits up = resets my machine.
So, my current experience is the only people with stable drivers at the moment are 3dlabs.
I've used floating point render targets without any problems. (Windows XP, 6800U, Release 65 drivers)

Can anybody else confirm good quality for 3Dlabs' drivers? I would be very interested in numbers for transfer rates, e.g. texture uploads, readback, shader performance, etc.

michael.bauer
02-09-2005, 05:18 AM
Are there any official / public conformance tests or reference software rasterizers for GL?

3Dlabs has a GLSL conformance tool that used to report lots of "bugs" in NVIDIA's GLSL implementation.

Aeluned
02-11-2005, 09:51 AM
Transfer rates are altogether better on 3Dlabs (I'm talking about the Wildcat Realizm 200 vs. the 6800), but I've had major problems getting it not to reboot my machine, hang and/or crash.

Multisampling doesn't exist in a pbuffer context, first of all - which, for me, sucks.
Plus, I can make that card reboot my machine faster than you can say: "Trash this pbuffer and give me a new one".

mlb, what you're talking about is glslValidate. nVidia drivers will apparently do some auto-casting or something which allows some shaders that aren't written correctly to run. I wouldn't call that a driver bug, but IMO they should have just made the shader fail to compile, because they aren't letting the developer know they're doing something wrong. I found this out the hard way when I switched from an nVidia board to a 3DLabs one during a brief stint.
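
A typical example of the kind of thing that slips through (my own made-up fragment shader, checked with the standard GL 2.0 compile-status query; the shader entry points are assumed to be loaded via an extension loader):

#include <GL/gl.h>
#include <GL/glext.h>
#include <cstdio>

// 'float f = 1;' relies on an implicit int to float conversion that GLSL 1.10
// does not allow. The strict 3Dlabs compiler rejects it, while the NVIDIA
// compiler quietly accepts it and the shader appears to "work".
static const char* kFragmentSource =
    "void main()\n"
    "{\n"
    "    float f = 1;               // strict GLSL: error, must be 1.0\n"
    "    gl_FragColor = vec4(f);\n"
    "}\n";

bool shaderCompiles()
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &kFragmentSource, 0);
    glCompileShader(sh);

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok)
    {
        char log[1024];
        glGetShaderInfoLog(sh, sizeof(log), 0, log);
        std::printf("compile failed:\n%s\n", log);
    }
    glDeleteShader(sh);
    return ok == GL_TRUE;
}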

michael.bauer
02-13-2005, 09:23 AM
Thanks, Aeluned,

Nice to hear that transfer rates are better on the Realizm 200 compared to the 6800; could you give any numbers?

Unfortunately we see a very unstable GL implementation, hmmmm. Stable pbuffers would be really useful. But let's hope we'll see stable FBOs on the Wildcats... this would make pbuffers largely redundant.
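
This is the kind of usage I'm hoping will just work everywhere once EXT_framebuffer_object is available: a minimal sketch, assuming the EXT entry points are loaded and ignoring depth attachments.

#include <GL/gl.h>
#include <GL/glext.h>

GLuint g_tex = 0, g_fbo = 0;

// Create an RGBA8 texture and attach it as the colour buffer of an FBO.
bool createRenderTarget(int w, int h)
{
    glGenTextures(1, &g_tex);
    glBindTexture(GL_TEXTURE_2D, g_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffersEXT(1, &g_fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, g_tex, 0);

    bool complete = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
                    == GL_FRAMEBUFFER_COMPLETE_EXT;

    // Render to the texture while g_fbo is bound; bind 0 to get the window
    // back (no second context, no wglShareLists, no pbuffer).
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    return complete;
}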

> mlb, what you're talking about is glslValidate. nVidia drivers will apparently do some auto-casting or something which allows some shaders that aren't written correctly to run. I wouldn't call that a driver bug, but IMO they should have just made the shader fail to compile, because they aren't letting the developer know they're doing something wrong.

You are perfectly right, invalid code should DEFINITELY fail to compile (NVIDIA, listen!!!); this would force developers to write conformant code (which in fact is very easy, isn't it?).

>I found this out the hard way when I switched from an nVidia board to a 3DLabs one during a brief stint

:-) So you tried to switch to 3Dlabs but only saw pbuffer problems and lots of other problems? So I won't have to try it for myself...

Does anybody have good news about 3Dlabs? (Apart from the fact that the GLSL compiler follows the GLSL standard more strictly and that transfer rates are good (?) compared to the 6800.)

nystep
02-14-2005, 02:00 PM
Hi,

In my case, I was developing a demo 1.5 years ago, and I came across a crash in Detonator 44.03 with WGL_ARB_render_texture, whereas the same effect worked fine on a Radeon Mobility. I tried to render the scene to the normal backbuffer and it was OK, no error code returned, but when rendering to a texture, it just crashed the drivers. Weird.

I think it's the only driver crash I have encountered in my little programmer life; most others were due to me, I think.. ;)

regards,

Zeross
02-15-2005, 04:42 AM
Originally posted by Aeluned:
Multisampling doesn't exist in a pbuffer context, first of all - which, for me, sucks.
I haven't tried it yet, but I think that this problem was solved in the driver released last month (4.04.0677).


Plus, I can make that card reboot my machine faster than you can say: "Trash this pbuffer and give me a new one".
Wow, I have had this card since October and I have never had a crash. Before that I was using a Radeon 9800 XT, and it was a painful experience.

One thing that I've experienced is that invalid programs were still working on the Radeon while nVidia and 3DLabs are less forgiving.

michael.bauer
02-15-2005, 07:22 AM
> One thing that I've experienced is that invalid programs were still working on the Radeon while nVidia and 3DLabs are less forgiving.

Hmmm, invalid programs that are still working; I would call that voodoo. Did these programs ignore error flags, or did ATI's GL not detect and set the error flag? One thing I would request from any vendor is correct error behaviour -- as defined in the GL specification. If code does something bad, an error status should definitely be set. If broken code works nevertheless, I don't mind, but it SHOULD set an error. When an application ignores the error, that's the problem of that application and not a problem of the GL implementation.

It's the same problem as ignoring bad GLSL code. This encourages bad code and makes switching to other hardware impossible. An error is an error -- we shouldn't ignore it!
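
Just to be clear about what I mean by correct error behaviour: something as simple as the little debug helper below (my own minimal sketch) should be enough for an application to notice a problem, provided the implementation actually sets the flag.

#include <GL/gl.h>
#include <cstdio>

// Drain and report every pending GL error; call this after suspicious GL
// calls in a debug build.
void checkGLError(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        std::fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
}

// Usage:
//   glTexImage2D(...);
//   checkGLError("glTexImage2D");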

Pentagram
02-15-2005, 10:27 AM
My general experience is:

NVIDIA: Stable-ish, but don't use them for nuclear reactor GUIs (i.e. I have had them crash). But extensions are generally reliable, even if you use them in weird combinations that not many people may have used before.

ATI: Develop on this board so you can find the bugs early and work around them, then a few days before the deadline quickly test on NVIDIA to confirm that it works.
The ATI fragment shader compiler seems a bit tricky, especially with reordering instructions or forgetting instruction dependencies when optimizing.

On the texture upload speed thing, I think you are confusing a "conformant" driver with a fast driver (or at least with one that has predictable performance behaviour). A driver may be perfectly conformant while running at 1 frame per minute. It just has to follow the spec...

Charles

PkK
02-15-2005, 10:44 AM
The Intel Windows drivers don't seem to be so good.
They even disabled acceleration for screensavers (the driver checks whether it's called from an .exe or a .scr) since people kept complaining about screensavers crashing their machines.

On the other hand, Intel has given all the necessary information to free driver developers, so the Linux drivers are among the best.

GLslang isn't supported by the drivers even though the hardware could do it.

knackered
02-15-2005, 11:18 AM
Originally posted by Pentagram:
ATI: Develop on this board so you can find the bugs early and work around them, then a few days before the deadline quickly test on NVIDIA to confirm that it works.
When you say 'develop on this board', I assume you don't mean run your development environment on it. In my experience, ATI drivers crash while doing simple GUI drawing, such as scrolling a source code view in Visual Studio... which isn't very nice, seeing as Visual Studio has a habit of trashing your source code occasionally on system resets or blue screens. ATI and Microsoft working together are a developer's worst enemy.

michael.bauer
02-15-2005, 11:33 AM
> On the texture upload speed thing, I think you are confusing a "conformant" driver with a fast driver (or at least with one that has predictable performance behaviour). A driver may be perfectly conformant while running at 1 frame per minute. It just has to follow the spec...

You are perfectly right. A "conformant" driver is not the same as a "fast" driver. I don't think I've confused that.

But what about "quality"? In my opinion, predictable performance is an important sign of quality. By that criterion, NVIDIA has low quality (despite its good stability). I have the impression that we have two kinds of problematic behaviour: NVIDIA is relatively good at stability and conformance, but not so good concerning predictability+performance. ATI has high performance but lower stability and conformance.

To conclude, ATI is better for non-production applications and gaming. NVIDIA seems to be good for stable production usage, but they have to improve their predictability. Both do well for games, but that's definitely not the main application area of GL.

Another topic: what about the quality of developer support? Which one is good for game developers, and which one is better for medical or other scientific or commercial apps? And what about the Wildcats: do they have better support for non-gaming customers? I would suppose so, since they concentrate on the workstation market. Wildcat users, tell us about your opinions, please!

michael.bauer
02-15-2005, 11:52 AM
> ATI and Microsoft working together are a developer's worst enemy.

That's funny. Is this due to ATI's GL implementation? That would be a sign of very poor quality. Somebody used to say there are two good hw/sw combinations, NVIDIA+GL+Linux and ATI+DirectX+Windows, but that seems a bit exaggerated, and the quote originates from GeForce FX times, which are definitely over.

Another question: is it a good idea to switch to Direct3D, concerning quality / predictability / performance? It would make us platform dependent, but it would be an option for lots of commercial projects.

Korval
02-15-2005, 01:03 PM
NVIDIA is relatively good at stability and conformance, but not so good concerning predictability+performance.
I wouldn't go that far. Your only problems with performance predictability are in areas that most people do not exercise: texture upload. Since people do not typically perform these kinds of actions in the middle of real-time applications, there isn't much of a problem. From nVidia's perspective, at least.

PsychoLns
02-15-2005, 01:25 PM
> One thing that I've experienced is that invalid programs were still working on the Radeon while nVidia and 3DLabs are less forgiving.

From my experience it's the other way around, i.e. that NVIDIA is more forgiving, of course especially in relation to GLSL.

Well, apart from that, I've just spent the last couple of days updating our NVIDIA support, meaning that I've switched from a 9800 Pro to a 6800 GT.

Regarding performance, the 9800 is consistently faster even though it is running a slightly slower path (CPU skinning of characters). I guess (I haven't investigated it further yet) that the per-batch cost in particular is much lower in the ATI driver. It could be interesting to compare the 5.3 Catalyst with the drivers from the pre-Doom 3 days... ;)

Most issues were related to GLSL. On the positive side, all four of our existing NVIDIA bug workarounds could be removed with the 71.83 drivers, meaning that we are really down to zero NVIDIA issues, while our two current ATI problems (uniform arrays and light source init) still exist in the latest beta.
Then there is the cgc compiler, which is used for compiling GLSL as well as Cg etc. It takes about 0.3 seconds for a single shader compilation, meaning that generating lots of shader combinations simply isn't possible (we had a 20 second startup time because of that).
Another big problem with the cgc method is that all compiler errors are reported from the preprocessed GLSL-to-Cg code, meaning that the error messages are more or less useless. As we know, NVIDIA GLSL is much more forgiving than real GLSL, so don't expect your shader code to run on anything else if it has only been tested on the NVIDIA driver (strict mode can be enabled in NVemulate, but that also produces lots of bogus warnings of the type "implicit cast from float to float"...).
So... developing GLSL code that works on hardware other than your own is currently most convenient on ATI drivers, especially as NVIDIA's compiler tools (the standalone cgc and NVShaderPerf) work fine on non-NVIDIA hardware for compiler-compatibility and performance analysis. On the other hand, the ATI compiler seems a good deal more buggy at the moment (so I guess it depends on how much compatibility you need right now).

Regarding driver crashes (not caused by AGP 8x hardware issues or overclocking), I have only seen them on ForceWare for the past year...

bobvodka
02-15-2005, 04:53 PM
Originally posted by knackered:

Originally posted by Pentagram:
ATI: Develop on this board so you can find the bugs early and work around them, then a few days before the deadline quickly test on NVIDIA to confirm that it works.
When you say 'develop on this board', I assume you don't mean run your development environment on it. In my experience, ATI drivers crash while doing simple GUI drawing, such as scrolling a source code view in Visual Studio... which isn't very nice, seeing as Visual Studio has a habit of trashing your source code occasionally on system resets or blue screens. ATI and Microsoft working together are a developer's worst enemy.
And by contrast, I've had an ATI board since shortly after the 9700 Pro was released, and I've had a few problems here and there over the years, but no more so than with my NV drivers beforehand, and they were all limited to games. In fact, since my last Windows reinstall (due to me breaking Windows), this machine has crashed a sum total of once since around the middle of last year, and I'm not sure that was a hardware issue; it might be related to the dodgy power setup in my room.

So I'd argue there is no hard and fast rule about which is better; at best you can deal in generalities, and even that relies on something not being broken elsewhere...

V-man
02-15-2005, 05:48 PM
Originally posted by Korval:

NVIDIA is relatively good at stability and conformance, but not so good concerning predictability+performance.
I wouldn't go that far. Your only problems with performance predictability are in areas that most people do not exercise: texture upload. Since people do not typically perform these kinds of actions in the middle of real-time applications, there isn't much of a problem. From nVidia's perspective, at least.
Funny you should say that. A long while ago, I read an NVIDIA PDF that recommended how to load resources. They said something like: first load textures, then VBOs, then shaders. It may not apply today.

As for the GL conformance test, it's probably more about pixel-precise rendering and having a certain level of driver quality, but I'm sure most of the responsibility lies on the IHVs' shoulders.

michael.bauer
02-16-2005, 04:22 AM
> Your only problems with performance predictability are in areas that most people do not exercise: texture upload. Since people do not typically perform these kinds of actions in the middle of real-time applications, there isn't much of a problem. From nVidia's perspective, at least.

So we have to accept that texture upload in real-time apps is evil? I don't agree with that. And it is contrary to everything we see in the marketing. They want to make us think that graphics hardware is good for "streaming" textures (see the PBO specification). Why do we get more and more bus bandwidth (with PCI Express) if the driver is so limited? That's absurd and annoying from my perspective.

Is it so complicated to get near the maximum bus bandwidth for uploads or downloads on current hardware? Or is it irrelevant for the manufacturers? What's the reason? Getting good and predictable streaming performance is an important topic for general-purpose apps.
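
To be concrete, the kind of streaming I mean is the classic two-PBO ping-pong, where the CPU fills one buffer while the previous one is still being consumed. A rough sketch only: the buffer object entry points are assumed to be loaded, a 2D texture is bound, and getNextFrame() is a made-up stand-in for a video decoder.

#include <GL/gl.h>
#include <GL/glext.h>
#include <cstring>

// Hypothetical frame source; stands in for a video decoder or similar.
extern const unsigned char* getNextFrame();

const int W = 512, H = 512;
const GLsizeiptrARB FRAME_BYTES = W * H * 4;   // RGBA8

GLuint pbo[2];

void initStreaming()
{
    glGenBuffersARB(2, pbo);
    for (int i = 0; i < 2; ++i)
    {
        glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo[i]);
        glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, FRAME_BYTES, 0,
                        GL_STREAM_DRAW_ARB);
    }
}

void uploadFrame(int frameIndex)
{
    int fill = frameIndex & 1;        // buffer the CPU writes this frame
    int draw = 1 - fill;              // buffer uploaded to the texture

    // Kick off the texture update from the buffer filled last frame
    // (the very first frame uploads undefined data; fine for a sketch).
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo[draw]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                    GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid*)0);

    // Meanwhile fill the other buffer with the next frame.
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo[fill]);
    glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, FRAME_BYTES, 0,
                    GL_STREAM_DRAW_ARB);       // orphan to avoid a sync stall
    if (void* dst = glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB))
    {
        std::memcpy(dst, getNextFrame(), FRAME_BYTES);
        glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);
    }
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}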

-- edit --
NVIDIA: why aren't we able to use the bandwidth of PCI Express (or even AGP)? Does the driver limit that? PCI Express should give us 4 GB/s and AGP 8x should give us 2 GB/s. I know that we cannot expect peak bandwidth, but, if the driver has a good day ;-), it's transferring approximately 1 GB/s, so where does the missing factor of 4 come from?