Newbie question: Cg vs glslang?

Hello – I’ve been trying to learn how to program GPUs for scientific applications, but I’ve never even used OpenGL before, so there are some basic things I’m trying to figure out:

What is the difference between Cg and glslang?
Is it possible to run Cg programs on ATI cards through OpenGL?

Essentially, if we want to speed up our calculations with a GPU, should we use Cg or glslang or something else? It would be nice if it could run on any computer, not just Windows, and it would also be nice to let it run on ATI as well as NVIDIA…

I know almost nothing about Cg… but you can definitely run Cg programs on ATI cards through OpenGL!

Someone like me who has no idea about the speed of the shader options in OpenGL might guess this:

ARB_fragment_program : fastest
OpenGL SLang : faster
CG : fast

but I don’t think this is correct… :stuck_out_tongue:

Cg will run on ATI or NVIDIA cards; just use the ARB shader profiles. Obviously, so will GLSL.
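
To make that concrete, here is a rough, untested sketch of forcing the vendor-neutral ARB_fragment_program profile through the Cg runtime (the file name "myshader.cg" and the entry point "main" are just placeholders):

```c
/* Rough sketch: load a Cg fragment shader through the generic ARB profile,
 * so the same program runs on ATI and NVIDIA hardware.
 * "myshader.cg" and the entry point "main" are placeholders. */
#include <Cg/cg.h>
#include <Cg/cgGL.h>

static CGcontext ctx;
static CGprogram prog;

void init_cg(void)
{
    ctx = cgCreateContext();

    /* Pick the vendor-neutral ARB profile explicitly rather than
     * cgGLGetLatestProfile(), which may return an NV-only profile. */
    CGprofile profile = CG_PROFILE_ARBFP1;

    prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "myshader.cg",
                                   profile, "main", NULL);
    cgGLLoadProgram(prog);
}

void draw(void)
{
    cgGLEnableProfile(CG_PROFILE_ARBFP1);
    cgGLBindProgram(prog);
    /* ... draw your geometry here ... */
    cgGLDisableProfile(CG_PROFILE_ARBFP1);
}
```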

The real differences between them are syntax and where they’ll run. Speed probably won’t be an issue.

Syntax is a matter of style really, but I have a strong preference towards Cg. (Check out Cg’s shader interfaces and unsized arrays. There’s a good overview of them in GPU Gems, Chapter 32.)

Other people have good reasons to choose GLSL, and I’m sure they can evangelize them enough that I don’t need to. Either way, both are viable solutions.
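
For comparison, a rough sketch of the GLSL path, where the driver itself does the compiling (using the ARB_shader_objects / ARB_fragment_shader entry points of the day; the fragment source is just a placeholder, and in real code you’d fetch these entry points through your extension loader):

```c
/* Rough sketch of the GLSL route: the driver compiles the source itself.
 * The shader below is a trivial placeholder. */
#include <GL/gl.h>
#include <GL/glext.h>

static const char *frag_src =
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }";

GLhandleARB setup_glsl(void)
{
    GLhandleARB shader  = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    GLhandleARB program = glCreateProgramObjectARB();

    glShaderSourceARB(shader, 1, &frag_src, NULL);
    glCompileShaderARB(shader);       /* compiled by the driver's own compiler */

    glAttachObjectARB(program, shader);
    glLinkProgramARB(program);

    glUseProgramObjectARB(program);   /* bind it for rendering */
    return program;
}
```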

As far as I know, GLSL will run on Windows and Linux, Cg on Windows, Linux, and OS X. GLSL support will eventually be added to OS X, probably about the time we see Tiger.

DN.

Speed is something of an issue.

NVIDIA and ATI cards have different preferences in terms of optimized ARB_fp code. NVIDIA cards like to minimize temporary register use (regardless of other factors), but ATI cards don’t care. So NVIDIA-optimized code doesn’t run as fast on ATI hardware as ATI-optimized code does. And the Cg compiler’s ARB_fp back end spits out NVIDIA-optimized code (not surprisingly, considering who writes the compiler).

I think the Cg compiler has a flag for disabling the NV-specific optimizations.
shadertech.com is the official site.

The way it works is that the Cg compiler spits out ARB_vp/fp code (or, if you wish, NV_vp/fp), but ARB_vp/fp suck.

GLSL?
ATI & NV GLSL compilers need more work.

It’s hard to choose isn’t it? :slight_smile:

ARB_fragment_program : fastest
OpenGL SLang : faster
CG : fast
Cg might be ahead of GLSL. I’m not sure myself, but considering that it compiles for ARB_vp/fp…

Speaking of OS X… has anybody run the NVIDIA SDK samples on OS X? I was having some problems creating an Xcode project, and was wondering if someone has written a makefile or something…

If you want to do full scientific calculation on the GPU and have no background in OpenGL programming, you will probably be better off looking at either Brook or Sh. These are specialist language/system combinations designed for exactly the sort of task you want to do. Since you have to learn something anyway, you may as well go for a system that does everything you want and is structured closer to your needs. Brook is probably closer to your needs than Sh, as Sh can still do traditional graphics rendering, whereas Brook is firmly bedded in the stream-based programming model.

Yeah, we were looking at the Brook posters at the recent GP2 conference. The thing I’m trying to figure out is whether or not to spend time with Brook when we may end up wanting to write the code in OpenGL/shader language ourselves anyway. Our CS prof connection was talking about how easy the assembly is, and we’ll probably wind up spending a lot of time at that level too. Brook may be too abstract for our needs… but we’ll probably play with it anyway.

Originally posted by V-man:

ARB_fragment_program : fastest
OpenGL SLang : faster
CG : fast

Cg might be ahead of GLSL. I’m not sure myself, but considering that it compiles for ARB_vp/fp…

Correct me if I’m wrong, but Cg would appear always to have the opportunity of being faster than GLSL, as the compiler is outside of GL and as such allows the possibility to hand-optimise…

If you were to hand-optimize for a known video card, then yes, it is possible for Cg to be faster.

For all video cards it becomes difficult (NVIDIA with register-usage and precision issues, ATI with texture-indirection, instruction-count, and swizzle issues).
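
For what it’s worth, the hand-optimizing route basically means compiling offline (e.g. with something like cgc -profile arbfp1 -entry main myshader.cg), editing the assembly by hand, and then loading it through ARB_fragment_program yourself. A rough, untested sketch with a placeholder program string:

```c
/* Rough sketch: loading hand-tuned ARB_fp assembly directly.
 * The program text is a trivial placeholder; in practice you'd start
 * from the compiler's output and edit it. */
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

static const char *arbfp_src =
    "!!ARBfp1.0\n"
    "MOV result.color, fragment.color;\n"
    "END\n";

GLuint load_arbfp(void)
{
    GLuint prog;
    GLint err_pos;

    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(arbfp_src), arbfp_src);

    /* -1 means the whole string was accepted; otherwise it points at the error. */
    glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &err_pos);
    if (err_pos != -1)
        printf("ARB_fp error: %s\n", glGetString(GL_PROGRAM_ERROR_STRING_ARB));

    glEnable(GL_FRAGMENT_PROGRAM_ARB);
    return prog;
}
```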
