Cg vs. GLSL

Hi!

I would like to know whether it is worth learning Cg and its API when I already know GLSL and how to interface it with OpenGL.

I mean I know of some possible advantages:

  • Multiple target profiles: for instance, targeting the ARB vertex and fragment program profiles basically ensures your shader will run on SM 2.0 cards, which you can’t easily tell from a GLSL shader unless you try to compile it on a card that only supports SM 2.0. This is probably the biggest advantage over GLSL.
  • You can use #include in your Cg shaders. This is not possible in GLSL at the moment, but it would not be too hard to emulate either (see the sketch after this list).
  • There is CgFX, similar to DirectX FX files. I never tried this, but it might be a nice feature, although for a big application one would probably roll their own anyway.
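On the #include point: one way to emulate it is to pass several strings to glShaderSource, since the driver concatenates them into one source. A minimal sketch, assuming a GL 2.0 loader such as GLEW; the file paths and the loadFile helper are made up for illustration:

    #include <GL/glew.h>
    #include <fstream>
    #include <sstream>
    #include <string>

    // Read a whole file into a string (helper assumed for this example).
    static std::string loadFile(const char* path)
    {
        std::ifstream in(path);
        std::ostringstream ss;
        ss << in.rdbuf();
        return ss.str();
    }

    GLuint compileWithHeader(GLenum type, const char* headerPath, const char* bodyPath)
    {
        std::string header = loadFile(headerPath); // shared "include" code
        std::string body   = loadFile(bodyPath);   // the shader itself

        // glShaderSource takes an array of strings which the compiler sees
        // as one concatenated source -- a poor man's #include.
        const char* sources[2] = { header.c_str(), body.c_str() };
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 2, sources, NULL);
        glCompileShader(shader);
        return shader;
    }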

Do you know of any other advantages or disadvantages compared to GLSL?

What about OpenGL 3.0, which will hopefully appear soon? Will Cg work with GL3, or is it a dead end?

[ www.trenki.net | vector_math (3d math library) | software renderer ]

CgFX can implement different techniques. This is good: if you implement techniques with more than one profile and one profile is not supported, you can fall back to another technique with a different profile. Besides, techniques and passes make it easier to change effects without changing code in your app. State changes are also easier to set up in CgFX, and that same GLSL is what Cg uses if you specify the GLSL profile.
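For illustration, picking the first technique that validates looks roughly like this with the Cg runtime. A minimal sketch; the effect file name is made up, and techniques are assumed to be declared best-first:

    #include <Cg/cg.h>
    #include <Cg/cgGL.h>
    #include <cstdio>

    CGtechnique pickValidTechnique(CGcontext ctx, const char* effectPath)
    {
        cgGLRegisterStates(ctx); // let the GL runtime manage pass state assignments
        CGeffect effect = cgCreateEffectFromFile(ctx, effectPath, NULL);

        // Walk the techniques in declaration order and keep the first one
        // whose profiles and state assignments validate on this hardware.
        for (CGtechnique t = cgGetFirstTechnique(effect); t; t = cgGetNextTechnique(t)) {
            if (cgValidateTechnique(t) == CG_TRUE)
                return t;
            fprintf(stderr, "technique %s failed validation, trying next\n",
                    cgGetTechniqueName(t));
        }
        return 0; // nothing usable on this card
    }

At draw time you then iterate the passes with cgGetFirstPass/cgGetNextPass, wrapping your draw calls in cgSetPassState/cgResetPassState.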

Hope that helps …

One of the nice things about Cg (the compiler, not the language) is that you can compile GLSL code to one of the profiles, and it also reports the instruction count and register count.

It’s no guarantee of whether an ATI driver would accept the same GLSL or not, but it gives you an idea.

Also, I think you can compile Cg to GLSL and vice versa, but I’m not sure how to do that.
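If I remember the command-line tool correctly, both directions look roughly like this with cgc from the Cg toolkit (treat the exact file names as made up; check cgc’s help output for the flags):

    # Cg -> GLSL: compile a Cg fragment shader with the GLSL profile (Cg 1.5+)
    cgc -profile glslf -entry main shader.cg -o shader.glsl

    # GLSL -> ARB assembly: -oglsl tells cgc the input is GLSL; the emitted
    # assembly carries header comments with instruction and register counts
    cgc -oglsl -profile arbfp1 shader.glsl -o shader.fp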

Will Cg work with GL3 or is it a dead end?

There’s really no way to know, but I’d expect that nVidia would not abandon it so quickly. That is, I would expect nVidia to continue to have a Cg compiler in their drivers.

Also, Cg isn’t portable to ATi platforms (outside of using the ARB_*_program stuff), so be careful of that.

Cg can compile to GLSL now, which means SM3 features for ATI as well. The Cg runtime is still a bit buggy, i.e. parameter array upload for GLSL profiles and a few other things are broken, but they have been constantly improving these… and will hopefully get it right soon. Nevertheless, you could always just use Cg as a compiler and organize the compiled shaders and their parameters yourself…
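As a sketch of that “use Cg only as a compiler” route, assuming the glslf profile (Cg 1.5+) and a GL 2.0 loader; the function name here is just for illustration:

    #include <Cg/cg.h>
    #include <GL/glew.h>

    GLuint cgToGlslFragment(CGcontext ctx, const char* cgSource)
    {
        // Compile the Cg source with the GLSL fragment profile.
        CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, cgSource,
                                         CG_PROFILE_GLSLF, "main", NULL);

        // The compiled object is GLSL text, which we can hand straight to
        // OpenGL and manage ourselves, bypassing the runtime's parameter code.
        const char* glsl = cgGetProgramString(prog, CG_COMPILED_PROGRAM);

        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(shader, 1, &glsl, NULL);
        glCompileShader(shader);

        cgDestroyProgram(prog); // GL keeps its own copy of the source
        return shader;
    }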

I like Cg for the convenience factor (includes, parameter connecting…), although of course their GLSL output is optimized for their own stuff, so no clue how well ATI’s compiler manages to optimize that vs a “normal” GLSL shader… haven’t tried to compare that yet with ATI’s shader analyzer.

While I can’t say for certain that Cg will have GL3 bindings, I’d be incredibly surprised if it didn’t. To state my bias outright, I have a strong preference for Cg over GLSL. (Please keep in mind that I’m stating my preference; your mileage may vary.)

The PS3 uses Cg ( http://en.wikipedia.org/wiki/Ps3_games#Development ), so I don’t expect Nvidia to drop it any time soon.

While separate from Cg, CgFX is still fantastic, and if you’re not familiar with using effect files, try them; you’ll like them. “Rolling your own” with GLSL is a bit ugly. Take a look at Kevin Bjorke’s comments on GLSL at http://developer.nvidia.com/forums/index.php?act=Print&client=printer&f=6&t=15

I find that if I use Cg and compile to vendor-specific extensions, or to the ARB_*_program asm-style extensions, I’m less likely to have problems than when I use GLSL. Vendors have to supply GLSL compilers, and certain vendors don’t do a very good job. Some don’t seem to bother to fix known issues on slightly older hardware because they’d rather people upgraded. The flip side of that is that Cg’s output will be optimized only for Nvidia cards. I haven’t seen benchmarks of Cg’s output running on another vendor’s board vs that vendor’s GLSL implementation, but I expect Cg would be slightly less performant.
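For reference, the Cg-to-ARB path I’m describing is just a few runtime calls; a minimal sketch, with profile selection left to the runtime and the shader path made up:

    #include <Cg/cg.h>
    #include <Cg/cgGL.h>

    CGprogram loadFragmentProgram(CGcontext ctx, const char* path)
    {
        // Ask for the best fragment profile the current driver advertises:
        // arbfp1 on ATi boards, fp40/gp4fp and friends on Nvidia hardware.
        CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
        cgGLSetOptimalOptions(profile);

        CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, path,
                                                 profile, "main", NULL);
        cgGLLoadProgram(prog); // hands the compiled assembly to the driver
        return prog;
    }

Bind it with cgGLBindProgram and cgGLEnableProfile before drawing.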

One last gripe about GLSL is the lack of non-square matrices without extensions; not everyone supports GLSL 1.2.

That said, Cg/HLSL aren’t perfect either, and people have gripes about them. Since those people aren’t me, I’ll leave it to them to gripe; I don’t need to argue on their behalf :)

Clearly things will change with GL3; the ARB is aware of the issues, and they’re going to try to do “the right thing”, but I don’t expect to see Cg going away. I think it’s worthwhile investing a couple of days in learning to use it.

Andrew.