ATI / Nvidia driver differences

I’d like to compile a list of driver differences between ATI and Nvidia cards that are specific to OpenGL. Have any of you come across cases where your app worked on one card but didn’t work properly or crashed on the other?

I’ll start this off with a difference between the ATI and Nvidia pbuffer implementations:

When you bind/unbind pbuffers, ATI will purge all current render states, textures and shaders, meaning you’ll have to rebind and reset everything. As far as I know, Nvidia will preserve the current shaders and there is no need to rebind.
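For illustration, a minimal sketch (not the original code) of the kind of rebinding that was reportedly needed on ATI after a wglMakeCurrent() switch to the pbuffer; all handles and the choice of what to rebind are hypothetical, and the extension entry points are assumed to be loaded already:

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_FRAGMENT_PROGRAM_ARB, glBindProgramARB, ... (assumed loaded) */

void renderIntoPbuffer(HDC hPbufferDC, HDC hWindowDC, HGLRC hRC,
                       GLuint myTexture, GLuint myFragProg)
{
    wglMakeCurrent(hPbufferDC, hRC);       /* switch to the pbuffer */

    /* On ATI the texture/program bindings were reported lost at this point,
       so re-apply them explicitly before drawing: */
    glBindTexture(GL_TEXTURE_2D, myTexture);
    glEnable(GL_TEXTURE_2D);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, myFragProg);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);

    /* ... render into the pbuffer ... */

    wglMakeCurrent(hWindowDC, hRC);        /* back to the window; rebind again
                                              here too if the window pass needs it */
}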

Originally posted by glkat:
When you bind/unbind pbuffers, ATI will purge all current render states, textures and shaders, meaning you’ll have to rebind and reset everything. As far as I know, Nvidia will preserve the current shaders and there is no need to rebind.
Huh? Are you talking about wglBindTexImageARB? Then I don’t know how you manage to get that effect. If you’re talking about wglMakeCurrent(), then the pbuffer of course has its own state, so that’s why the states need to be set again.

Well, you can check my thread:
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=013533

It’s something that works on ATI using ARB extensions to OpenGL 1.0, but it only works on NVidia using OpenGL 2.0 (and only with the latest driver - it doesn’t work with earlier versions). However, this is something difficult to track. NVidia may well fix this problem in their next driver version (or at least document it a little better), or so I’ve been told.

“Latest driver” means nothing; it might be the official 75.77 or the unofficial 75.62 from a few days ago…

As far as errors go, they are usually bugs that get corrected after a few driver revisions if reported.

I had one bug with NV not allocating the maximum possible pbuffer when asked for something like 1000000 with the correct attributes set (something like the sketch below). I don’t know if it’s corrected, but one of the later beta drivers worked as it should.
I had another bug with textures disappearing on all cards except NV; it turned out to be my mistake, with the NV drivers being the most forgiving. The app now breaks as it should, even on Nvidia.
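A minimal sketch of that kind of request, under the assumption that the “correct vars” mentioned above refer to the WGL_PBUFFER_LARGEST_ARB attribute (the handles and sizes here are just placeholders):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* WGL_ARB_pbuffer tokens/entry points (assumed loaded) */

void createLargestPbuffer(HDC hDC, int pixelFormat)
{
    /* ask for an absurdly large pbuffer and let the driver clamp it */
    int attribs[] = { WGL_PBUFFER_LARGEST_ARB, 1, 0 };
    HPBUFFERARB pbuf = wglCreatePbufferARB(hDC, pixelFormat, 1000000, 1000000, attribs);

    /* query what was actually allocated */
    int width = 0, height = 0;
    wglQueryPbufferARB(pbuf, WGL_PBUFFER_WIDTH_ARB, &width);
    wglQueryPbufferARB(pbuf, WGL_PBUFFER_HEIGHT_ARB, &height);
    /* on the driver with the bug, the result was reportedly not the maximum */
}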

You can get a bunch of GLSL errors if your main dev rig is NV based, as Cg (erm… GLSL) is very forgiving too. Always run your shaders through GLSL validation.
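On top of offline validation, it is cheap to read back the compile log at runtime. Here is a minimal sketch using the ARB_shader_objects entry points of that era; the function name is made up and the entry points are assumed to be loaded:

#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_shader_objects tokens/entry points (assumed loaded) */

GLhandleARB compileFragmentShader(const GLcharARB *source)
{
    GLhandleARB shader = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(shader, 1, &source, NULL);
    glCompileShaderARB(shader);

    /* a shader the NV compiler accepts may be rejected elsewhere,
       so always check the status and print the log */
    GLint compiled = 0;
    glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &compiled);
    if (!compiled) {
        char log[4096];
        glGetInfoLogARB(shader, sizeof(log), NULL, log);
        fprintf(stderr, "GLSL compile error:\n%s\n", log);
    }
    return shader;
}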

There is some tricky stuff for each driver/HW/vendor, but that’s GL…

I have just found another one (for all drivers so far).

When using vertex programs, user-defined clip planes are disabled on NVidia’s cards and enabled on ATI’s. NVidia follows the specification here. Still, it would be a nice feature to have.

I have no idea at the moment how I would program user-defined clipping planes into a vertex program. We access one vertex at a time; even if we move a vertex outside the frustum, the other vertices of the triangle may still lie within the image and cause weird artifacts.

chracatoa, shouldn’t clip planes be done with a fragment program performing a KIL?

Yes, you can’t kill a vertex (I think you should be able to kill a vertex and the whole primitive associated with it - but that would be hard to implement in a parallel environment).

But KIL (in fragment programs) is slower than actually using user-defined clipping planes. I don’t know why.
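To make the approach concrete, here is a hedged sketch of such a fragment program; it assumes the vertex program writes the signed distance to the clip plane into texcoord[1].x, which is not something stated in the thread:

/* ARB fragment program embedded as a C string */
static const char *clip_kill_fp =
    "!!ARBfp1.0\n"
    "# discard the fragment when the interpolated plane distance is negative\n"
    "KIL fragment.texcoord[1].xxxx;\n"
    "MOV result.color, fragment.color;\n"
    "END\n";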

Actually, I once had a fragment program that only multiplied the texture with the color - i.e., doing exactly what the fixed-function OpenGL pipeline already does. When I removed the fragment program, the code was 50% faster. This is why I think we should avoid doing anything beyond what is absolutely necessary at the fragment level (IMHO).
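For reference, a fragment program equivalent to that description (plain GL_MODULATE, texture times primary color) would look something like this; the original program is not shown in the thread, so this is only an illustration:

/* ARB fragment program that just replicates fixed-function modulate */
static const char *modulate_fp =
    "!!ARBfp1.0\n"
    "TEMP tex;\n"
    "TEX tex, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, tex, fragment.color;\n"
    "END\n";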

Originally posted by ZbuffeR:
chracatoa, shouldn’t clip planes be done with a fragment program performing a KIL?
That’s a very inefficient way of doing it, and it’s pretty pointless when there are real clip planes in the API already. You get zero fragment processing savings with KIL. Using glClipPlane() you get true geometric clipping that saves you from running the fragment shader on everything that’s clipped away.
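A small sketch of that path, with an arbitrary plane equation chosen for the example:

#include <GL/gl.h>

void drawWithClipPlane(void)
{
    GLdouble plane[4] = { 0.0, 1.0, 0.0, 0.0 };  /* keep the half-space y >= 0 */
    glClipPlane(GL_CLIP_PLANE0, plane);          /* transformed by the current
                                                    modelview matrix */
    glEnable(GL_CLIP_PLANE0);
    /* ... draw; geometry clipped away never reaches the fragment stage ... */
    glDisable(GL_CLIP_PLANE0);
}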

Originally posted by chracatoa:
When using vertex programs, user-defined clip planes are disabled on NVidia’s cards and enabled on ATI’s. NVidia follows the specification here. Still, it would be a nice feature to have.
Well, at least when you specify ARB_position_invariant clipping should be enabled. Are you getting clipping without it?
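A sketch of the option Humus is referring to: with OPTION ARB_position_invariant the output position comes from the fixed-function transform, so user clip planes keep working while the rest of the vertex program still runs. The program body and helper below are only illustrative, and the extension entry points are assumed to be loaded:

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_vertex_program tokens/entry points (assumed loaded) */

static const char *vp_position_invariant =
    "!!ARBvp1.0\n"
    "OPTION ARB_position_invariant;\n"   /* the program must NOT write result.position */
    "MOV result.color, vertex.color;\n"
    "MOV result.texcoord[0], vertex.texcoord[0];\n"
    "END\n";

void bindClipFriendlyVertexProgram(void)
{
    GLuint vpId;
    glGenProgramsARB(1, &vpId);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, vpId);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(vp_position_invariant),
                       vp_position_invariant);
    glEnable(GL_VERTEX_PROGRAM_ARB);
}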

One potential subtle difference I found is that, in a test I did, the Nvidia drivers seemed to implicitly flush buffered commands to the card upon a Sleep (basically a context switch), while ATI would not.

I found this out the hard way, trying to time some stuff, when I got completely off-the-wall numbers for ATI. That did however point out the error in my code, and explicitly flushing before my call to Sleep() put things back in working order for ATI too.
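A minimal sketch of that fix; drawTestScene() and the sleep duration are placeholders:

#include <windows.h>
#include <GL/gl.h>

void drawTestScene(void);   /* hypothetical: the GL work being timed */

void timedIteration(void)
{
    drawTestScene();
    glFlush();    /* explicitly hand the queued commands to the driver instead of
                     relying on an implicit flush at the context switch */
    Sleep(10);    /* the Sleep() where the NV driver seemed to flush anyway */
}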

Whether this really is what I speculate, or whether the (much older) Nvidia card used for testing simply had a smaller buffer (possibly even perfectly sized to flush with the last command), I don’t know. Being short on hardware, I’m sorry I can’t verify these findings on recent and similarly capable cards.

Humus: no, I didn’t know about that option.

I have just tried using it (no clipping planes - I just wanted to see what happened, since I don’t do any fancy transformation on my vertices). Immediately the performance dropped by 50%. I guess if you let the fixed-function OpenGL pipeline do the transformation while at the same time having vertex programs running, you hurt your performance.