Will the same OpenGL/GLSL code work on different hardware?

Dear All,

I have written an application using OpenGL/GLSL, and it currently uses an NVIDIA 7800GT for rendering. I am also using the glext library.

I am going to change my hardware from NVIDIA to ATI. Will my source code work without any change, or will I have to modify it for the new hardware?

RAJESH.R

Well, glslang is cross-platform, but no particular shader is guaranteed to compile under any particular implementation. Nor is it guaranteed to run in hardware.

Plus, nVidia’s glslang compiler isn’t exactly 100% compliant. It lets you do things that strict compilers (ATi’s, for example) don’t.
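
A typical example of the kind of thing I mean (my own illustration, not taken from the poster's code): GLSL 1.10 has no implicit int-to-float conversion, so an integer literal in a float context slips through NVIDIA's compiler but gets rejected by a strict one:

varying vec3 normal;

void main()
{
    // NVIDIA's compiler accepts the integer literal 0 here (Cg-style
    // implicit conversion); a strict GLSL 1.10 compiler reports an error.
    float shade = max(dot(normalize(normal), vec3(0.0, 0.0, 1.0)), 0);

    // The portable version spells the literal as a float:
    // float shade = max(dot(normalize(normal), vec3(0.0, 0.0, 1.0)), 0.0);

    gl_FragColor = vec4(shade, shade, shade, 1.0);
}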

And that doesn’t even deal with the difference in bugs between the two.

So you should set aside time to fix up your glslang code if you’re making this switch.

Well, let me put it this way: if you develop your shader on ATI, it is almost certain to work on NVIDIA without problems.

If you develop on NVIDIA you have a good chance your shader will crash your application on ATI.

Or, to put it the other way around: ATI’s drivers are really bad when it comes to OpenGL and crash at every opportunity (for example, a function in the shader code that is never even called can crash the whole application…)

Regarding NVIDIA not being compliant: this is no longer true for current drivers. You can specify the strict pragma in your shader code, and you can use NVEmulate to turn on portability warnings.
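
For reference, this is roughly what the strict pragma looks like in a shader. The #pragma optionNV spelling is NVIDIA-specific, and as far as I know other compilers simply ignore pragmas they don’t recognize:

#pragma optionNV(strict on)   // ask NVIDIA's compiler to flag non-GLSL extras

uniform sampler2D baseMap;

void main()
{
    // With strict mode on, Cg-isms (half types, implicit int-to-float
    // conversions, and so on) are reported as errors instead of accepted.
    gl_FragColor = texture2D(baseMap, gl_TexCoord[0].xy);
}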

There was a tool on www.3dlabs.com for verifying whether your shader was valid, but it seems they removed the link.

“If you develop on NVIDIA you have a good chance your shader will crash your application on ATI.”

Well, I don’t think his app is that lousy.

Originally posted by V-man:

Well, I don’t think his app is that lousy.

Sure. But the ATI drivers are…

I developed on GeForce and then ported to Radeon. Some hints based on my experiences:

GLSL:
-don’t use gl_FrontFace - it forces software rendering on many GPUs
-don’t use more than 8 varying variables (note that built-ins such as gl_Color and gl_TexCoord count toward this limit - see the sketch after this list)
-gl_FragCoord is implemented as a varying on Radeon, so you should count it, too
-accessing textures from the vertex shader works only on GeForce 6 and above; it is not implemented in any Radeon currently available
-dynamic branching is supported on GeForce 6 / Radeon X1k
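
To make the varying budget concrete, here is a rough sketch of how I count the slots in a vertex shader (the variable names are made up, and the exact per-GPU limit can differ, so check GL_MAX_VARYING_FLOATS):

varying vec3 worldNormal;    // custom varying #1
varying vec3 worldPos;       // custom varying #2

void main()
{
    worldNormal = gl_NormalMatrix * gl_Normal;
    worldPos    = vec3(gl_ModelViewMatrix * gl_Vertex);

    gl_FrontColor  = gl_Color;            // read as gl_Color in the fragment shader: varying #3
    gl_TexCoord[0] = gl_MultiTexCoord0;   // gl_TexCoord[0]: varying #4

    gl_Position = ftransform();

    // If the fragment shader also reads gl_FragCoord, count one more slot
    // on Radeon - stay under the 8-varying limit or you drop to software.
}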

Application:
-if you detach a shader object from a linked shader program, the program will be broken on Radeon (see the sketch after this list)
-FLOAT16 texture filtering is supported only on GeForce 6 and above
-FLOAT16 blending and alpha testing are supported on GeForce 6 and Radeon X1k
-glGenerateMipmapEXT can sometimes crash on ATI
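
And here is what I mean about not detaching - a sketch using the GL 2.0 entry points, with error checking omitted and assuming you load the functions with something like GLEW:

#include <GL/glew.h>   /* or however you get the GL 2.0 entry points */

/* Build a program and just leave the shader objects attached. Detaching
   them after glLinkProgram() is legal according to the spec, but it broke
   the linked program on the Radeon drivers I tried, so I simply skip it. */
GLuint build_program(const char *vs_src, const char *fs_src)
{
    GLuint vs   = glCreateShader(GL_VERTEX_SHADER);
    GLuint fs   = glCreateShader(GL_FRAGMENT_SHADER);
    GLuint prog = glCreateProgram();

    glShaderSource(vs, 1, &vs_src, NULL);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(vs);
    glCompileShader(fs);

    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    /* No glDetachShader(prog, vs) / glDetachShader(prog, fs) here. */
    return prog;
}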

Please correct me if I’m mistaken about any of this, and if there is something else, add your 3 cents.
It’s also possible that some of this information is out of date, but you can always run into a user whose drivers are not up to date.

the 3Dlabs tools are still available at http://developer.3dlabs.com/downloads/index.htm

Originally posted by V-man:
There was a tool on www.3dlabs.com for verifying whether your shader was valid, but it seems they removed the link.

You can specify the strict pragma in your shader code, and you can use NVEmulate to turn on portability warnings.
I don’t consider that compliance. Compliance should be the default, and non-compliance should be exposed by a proper extension.

Ok, first correction to what I wrote:
Alpha test with FLOAT16 render targets is supported on Radeon X800. Not sure about X600 and X300.

One more thing I didn’t mention:
gl_ClipVertex is not supported by Radeon 9 / Radeon X (not sure about X1k).
On NVIDIA, clipping will not work if you don’t write to it. On ATI, the shader will run in software if you write to it, but clipping will work if you don’t.
I’m using this code to deal with this issue:

#ifdef __GLSL_CG_DATA_TYPES
  gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;
#endif

__GLSL_CG_DATA_TYPES is defined on NVIDIA GPUs.
