OpenGL + CG + ATI = ??

Hi

Are there any happy (??) ATI owners?
I need to know which Cg profiles are supported under OpenGL on Radeon HD cards.

cgGLGetLatestProfile(CG_GL_FRAGMENT) always returns CG_PROFILE_ARBFP1 on Radeons - so only ARB fragment program 1 is supported?!

I do not own a Radeon so I cannot test this - but what about CG_PROFILE_FP40 (or at least CG_PROFILE_FP30) on ATI?
(I need tex2Dlod to work.)

Thanks for any information.
(I could ask on the NVIDIA forum about Cg – but hmmm :slight_smile:)

The best you can use on ATI is CG_PROFILE_GLSLF/V. You have to specify it manually.
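
Something along these lines works (a minimal, untested sketch using the standard Cg runtime calls; needs a current GL context):

#include <Cg/cg.h>
#include <Cg/cgGL.h>

static CGprofile chooseFragmentProfile(void)
{
    /* Prefer the GLSL translation profile; on ATI the assembly profiles
       top out at arbvp1/arbfp1, so cgGLGetLatestProfile() never goes past them. */
    if (cgGLIsProfileSupported(CG_PROFILE_GLSLF))
        return CG_PROFILE_GLSLF;
    return cgGLGetLatestProfile(CG_GL_FRAGMENT);   /* arbfp1 on Radeons */
}

cgGLSetOptimalOptions() still works as usual on whatever profile you end up with.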

Hi

Unfortunately, on ATI cards the only supported assembly profiles are CG_PROFILE_ARBVP1 and CG_PROFILE_ARBFP1. However, Cg can compile to GLSL with CG_PROFILE_GLSLV and CG_PROFILE_GLSLF. There are some issues with the GLSL profiles, though; you can read about them on the NVIDIA Cg forum.
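
If it helps, loading a shader through the GLSL profile looks roughly like this; "shader.cg" and the "mainFrag" entry point are just placeholders, and cgGetLastListing() is where the GLSL-related compiler complaints end up:

#include <stdio.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>

/* Compile a Cg fragment shader through the GLSL profile and dump the
   compiler listing on failure. */
static CGprogram loadViaGlslProfile(CGcontext ctx)
{
    CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "shader.cg",
                                             CG_PROFILE_GLSLF, "mainFrag", NULL);
    CGerror err = cgGetError();
    if (err != CG_NO_ERROR) {
        const char *listing = cgGetLastListing(ctx);
        fprintf(stderr, "Cg error: %s\n%s\n",
                cgGetErrorString(err), listing ? listing : "");
        return NULL;
    }
    cgGLLoadProgram(prog);   /* hands the generated GLSL to the GL driver */
    return prog;
}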

Hi,
I have a FirePro V7800 (FireGL). Is this post still accurate, or does ATI now support more profiles? It certainly doesn’t seem to support:
CG_PROFILE_VP30
CG_PROFILE_VP40
CG_PROFILE_FP30
CG_PROFILE_FP40

How can I find out which profiles it supports, other than trial and error?

thanks,
Francisco

ATI does not, and is not going to, specifically support any Cg profile. The only reason the ARB profiles are supported is that ATI supports the ARB_vertex/fragment_program extensions, which those two Cg profiles target.

If you want to use Cg on ATI hardware, you have to compile it to GLSL. I think there’s a profile for that.
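
One way to skip the trial and error is to just ask the runtime at startup. A rough sketch (needs a current GL context; per the above, on ATI you should see "yes" only for the ARB and GLSL entries):

#include <stdio.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>

/* Print which of the usual GL profiles the current renderer accepts. */
static void listSupportedProfiles(void)
{
    static const CGprofile candidates[] = {
        CG_PROFILE_ARBVP1, CG_PROFILE_ARBFP1,
        CG_PROFILE_VP30,   CG_PROFILE_FP30,
        CG_PROFILE_VP40,   CG_PROFILE_FP40,
        CG_PROFILE_GLSLV,  CG_PROFILE_GLSLF
    };
    size_t i;
    for (i = 0; i < sizeof(candidates) / sizeof(candidates[0]); ++i)
        printf("%-18s %s\n", cgGetProfileString(candidates[i]),
               cgGLIsProfileSupported(candidates[i]) ? "yes" : "no");
}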

As Alfonse said, I believe the arbvp1/arbfp1 profiles are about all that will be useful to you on ATI.

In other words, when it comes to CG, ATI sucks?
I asked because I have some samples that do not run well with the profiles mentioned; the profile they were written to use is a different one.

thanks,
Francisco

In other words, when it comes to CG, ATI sucks?

No. When it comes to Cg, ATI doesn’t care, because Cg is owned by NVIDIA. “Sucking” would imply that ATI ought to be doing something but is failing at it. There’s no reason ATI should support a technology that is wholly owned and promoted by its primary competitor. Expecting them to do so is entirely unreasonable.

Yeah, I agree… still, it sucks for me, since I have some samples in Cg.

Cg is an NVIDIA proprietary shading language. Blaming ATI for sucking is, frankly, ridiculous and completely wrongheaded.

NVIDIA, back in the day, attempted a Hail Mary play against collaboratively drafted open standards when they offered Cg as a take-it-or-leave-it option to the ARB.

Cg lost and glslang became the standard. Whatever you may think of that decision, that’s the current reality, and glslang is supported on everything from cellphones to the desktop. Live with it.

You could argue that NVIDIA sucks for perpetuating Cg updates and causing your portability problem, but the real source of your woes is your own decisions. Just consider the possibility that Cg is available and promoted because NVIDIA is quite comfortable with you coding preferentially for their hardware and encountering difficulties porting elsewhere.

Bottom line: glslang is the OpenGL standard; Cg is not.

I didn’t mean to start a whole controversy in the topic. I corrected myself by saying that it sucks for me. I just happen to have some samples in Cg, that’s all… no reason to get all emotional about it. It is just code!

NVIDIA, back in the day, attempted a Hail Mary play against collaboratively drafted open standards when they offered Cg as a take-it-or-leave-it option to the ARB.

Not to derail the topic further, but that’s… not entirely accurate.

Yes, NVIDIA presented Cg as a “take it or leave it” option. But GLSL was also presented that way, by 3DLabs. Sure, the ARB poked at the language a bit, but calling GLSL “collaboratively drafted” is not accurate to how it ultimately worked out. GLSL is more or less what 3DLabs proposed: the broken compilation model; the lack of emphasis on vectors in implementation limits (always talking about “components”, i.e. floats, rather than vec4s - this was in opposition to most hardware, and it played into 3DLabs’ hardware, which was scalar-based); the lack of emphasis on shader-based control (no explicit attribute location, etc.). I’m not saying these were bad (though the first and third are pretty indefensible), but they were different from the norm and exactly what 3DLabs proposed.

GLSL 1.00 was not exactly identical to what 3DLabs proposed, but it was very close to it. So while it may have been touched by the ARB, it would be hard to argue that it was the result of a collaborative process. The ARB basically tweaked it and stamped their approval on it.

I would have to defend NVIDIA’s Cg. The high-level compiler was separate from the driver. You have profiles. You get information about the number of low-level instructions your shader will use. The low-level shaders had glGets so you could know what the hardware capabilities are. No built-in glLight and glMaterial and ModelViewProjection and crap.

GLSL went straight into the driver and was a black box. It went through a lot of revisions and ended up where Cg is.

Did I mention that “separate shaders” are also the way to go?
Did I also mention that “getting the compilation result” (a binary blob) is also the way to go?

The ARB was a closed system then (and perhaps still is). They weren’t ready to listen to what people here were asking for.

Agreed, on all points.

While I don’t use Cg as a primary shader language anymore (only because of the lack of strong cross-vendor support)…

…I do use its stand-alone compiler daily, or every few days, on GLSL shaders because it gives me crucial optimization information that GLSL won’t, and lets me compile my shaders in various permutations off-line.

So to NVIDIA: many thanks for providing GLSL compilation support in cgc! …maybe one day GL will make it to a cross-vendor glslc :stuck_out_tongue:

(For those that have no idea what I’m talking about, install Cg, and do this:)


cgc -oglsl -strict -glslWerror -profile gpu_vp vert.glsl
cgc -oglsl -strict -glslWerror -profile gpu_fp frag.glsl
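
The same compiler also goes the other way (Cg source in, GLSL out); roughly the following, where the file and entry-point names are placeholders:

cgc -profile glslv -entry mainVert -o vert.glsl shader.cg
cgc -profile glslf -entry mainFrag -o frag.glsl shader.cg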

@DarkPhoton

I tried your suggestions but I get lots of errors. See this thread if you have a moment:
http://www.opengl.org/discussion_boards/…amp;#Post298037
thanks,
Francisco

I pretty much agree with this. Although I liked glLight and the built-in ‘crap’ :stuck_out_tongue: If I designed things, there would just be assembly. Then have some separate tools to convert the GLSL language down into assembly. That way even Intel wouldn’t [censored] up, since they would only need to support assembly, and you can’t go too wrong there.

I’m trying to use cgc to convert Cg code to GLSL; however, the command above does not work. I think the reason is that the command above compiles code that is already in GLSL.

Do you know if this can be done, and how?
thanks,
Francisco

The high-level compiler was separate from the driver. You have profiles. You get information about the number of low-level instructions your shader will use. The low-level shaders had glGets so you could know what the hardware capabilities are. No built-in glLight and glMaterial and ModelViewProjection and crap.

GLSL went straight into the driver and was a black box. It went through a lot of revisions and ended up where Cg is.

It most certainly did not end up where Cg is.

There are no “profiles” in GLSL. The closest thing you have to that is the #version directive, but that’s just a way to tell what version of the language you’re using.

GLSL always had glGets for implementation details. It still doesn’t have glGets for running out of instructions, because that’s not something you could ever correct for.

And not having access to fixed-function state would have been a detriment to the adoption of GLSL. It would have made GLSL shaders all-or-nothing propositions for users. You either use GLSL everywhere, or you use it nowhere. Even the ARB assembly programs had ways of accessing fixed-function OpenGL state.

Yes, you can look back now, 6 years after the fact, and say that now we don’t need it. But we did need it then, and it was very important then.

Getting binary blobs is very different from compiling to an intermediate language. Binary blobs are black-boxes. They are not available in any format that people are expected to be able to read. The purpose of this functionality is entirely different from compiling to an intermediate form.

An intermediate form is an interchange language. If the binary blobs were like Cg profiles, then you could take a binary blob from one implementation and compile it to another implementation. You cannot. Indeed, you can’t even be sure that your current implementation will be able to read the binary blob it gave you.

No, the ability to get program binaries has exactly and only one purpose: to potentially speed up subsequent execution of programs by preventing you from having to re-compile shaders. And note that Cg doesn’t provide this, since the results of Cg still have to be compiled. Again, you don’t need to parse the Cg sources, but you still need to parse the results. You still have to do all of the optimization work, which is the lion’s share of the effort. Get program binary avoids virtually all compilation overhead.
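
To make the distinction concrete, the whole get-program-binary path is just this kind of caching (rough sketch against GL 4.1 / ARB_get_program_binary, error handling trimmed):

/* 'prog' is an already-linked GLSL program; assumes GL function
   pointers are loaded and <stdlib.h> is included for malloc. */
GLint len = 0, ok = GL_FALSE;
GLenum format = 0;
glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &len);
void *blob = malloc(len);
glGetProgramBinary(prog, len, NULL, &format, blob);
/* ...write 'format' and 'blob' to disk; read them back on the next run... */

GLuint reloaded = glCreateProgram();
glProgramBinary(reloaded, format, blob, len);
glGetProgramiv(reloaded, GL_LINK_STATUS, &ok);
if (!ok) {
    /* The blob is driver- and version-specific, so keep the GLSL source
       around and recompile when the driver rejects it. */
}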

So no, GLSL has not ended up where Cg is. You can say GLSL “ended up where Cg is” when it compiles to an assembly language that is further compiled into an OpenGL object.

Also, this: “You get information about the number of low-level instructions your shader will use” is not even remotely true. The “low-level instructions” bear absolutely no resemblance to the actual language used internally by the GPU. Once upon a time, they did (and even then, it was an approximation). But modern VLIW/SIMD-based GPUs are so fundamentally different as to be unrecognizable.

The “low-level instructions” have to go through about as much compilation to fit modern hardware as GLSL does. Sure, they could use a simpler text parser. But that’s about it.

3DLabs may have been incredibly self-serving by defining GLSL in terms of scalars rather than vectors, but they were right in the end. All modern GPUs use some form of VLIW/SIMD instead of purely vector opcodes.

That way even Intel wouldn’t [censored] up, since they would only need to support assembly, and you can’t go too wrong there.

We’ve talked about this. The majority of driver bugs with shaders come from failing to implement the functionality correctly, not from failing to implement the parser. Parsing assembly is easier than parsing GLSL, but both end up converted to the same internal representation. It’s the process of converting that representation into actual GPU code where most of the problems lie.

Sure, I agree. You don’t compile GLSL down to a lower-level profile, so you don’t get to inspect and debug the output. The only way to debug it is to write alternate versions of your code, try different drivers, or try another machine.

As for binary blobs, it sure took a while to get them. I would consider them the low-level code that lets us bypass the time-consuming GLSL compiler.

glLight and the rest? Common! That is a dead end.
Uniforms are part of the shader? Common, what were they thinking.

glGets to get some info about the hardware and your ARB vp/fp shader? I didn’t hear anyone complaining.

The beginning of GLSL was really bad.

glLight and the rest? Common! That is a dead end.

First, “Common” is not the same thing as “Come on.”

Second, they are a dead end now. At the time, they were vital to getting people to adopt GLSL. Again, ARB assembly had the ability to map fixed-function data; why not GLSL?

Uniforms are part of the shader? Common, what were they thinking.

What do you mean by that? ARB assembly had program local parameters too; they are functionally identical to GLSL uniforms.

The only difference is that ARB assembly had program global parameters too, which GLSL did not have. Is that what you’re talking about?

glGets to get some info about the hardware and your ARB vp/fp shader? I didn’t hear anyone complaining.

Again, I have no idea what you’re talking about. We had glGets to get information about hardware for GLSL shaders too.
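
For reference, the queries being argued about look something like this on both sides; these are the standard enums from ARB_fragment_program and GL 2.0, nothing exotic:

/* ARB-assembly limits, per program target: */
GLint arbInstr = 0, arbTemps = 0;
glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB, GL_MAX_PROGRAM_INSTRUCTIONS_ARB, &arbInstr);
glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB, GL_MAX_PROGRAM_TEMPORARIES_ARB, &arbTemps);

/* GLSL-era limits, plain glGets: */
GLint fragUniforms = 0, texUnits = 0;
glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &fragUniforms);
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &texUnits);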

The beginning of GLSL was really bad.

I agree. That doesn’t mean that GLSL has “ended up where Cg is,” which is the claim I am disputing.

The principal problems with GLSL were:

1: Lack of global state that isn’t OpenGL fixed-function state (solved with UBOs).

2: The virtually useless concept of linking programs, which pretty much everyone complained about. Naturally, those complaints fell on deaf ears (solved with separate_shader_objects, though the extension needs doctoring; see the sketch after this list).

3: The mostly useless shader/program object distinction, which oftentimes causes multiple compilation of the same data (again solved with separate_shader_objects).

4: The inability to set certain program configurations from within a shader (mostly solved with explicit_attrib_location).
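
To make points 2-4 concrete, here is a rough sketch of the separate_shader_objects path with explicit locations; the shader strings and names are placeholders, and it assumes a GL 4.1 context with loaded function pointers:

const char *vsSrc =
    "#version 410 core\n"
    "layout(location = 0) in vec4 position;\n"
    "void main() { gl_Position = position; }\n";
const char *fsSrc =
    "#version 410 core\n"
    "layout(location = 0) out vec4 color;\n"
    "void main() { color = vec4(1.0); }\n";

/* Each stage compiles and links into its own program object... */
GLuint vs = glCreateShaderProgramv(GL_VERTEX_SHADER, 1, &vsSrc);
GLuint fs = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSrc);

/* ...and stages are mixed and matched on a pipeline object, with no
   monolithic relink every time one of them changes. */
GLuint pipe = 0;
glGenProgramPipelines(1, &pipe);
glUseProgramStages(pipe, GL_VERTEX_SHADER_BIT, vs);
glUseProgramStages(pipe, GL_FRAGMENT_SHADER_BIT, fs);
glBindProgramPipeline(pipe);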