HLSL vs Cg = The Poll

So the ARB want our input!

I’m no expert in HLSL/Cg so perhaps brighter minds would like to use this thread to cover the basic differences for me?

I notice that both have built-in noise functions. Perhaps Ken Perlin is about to become a rich man, with his standardised noise function (Siggraph 2002) being implemented in hardware!

Rob.

We have played around with both of them. In my opinion, the relevant differences are the following:

a) Passing uniform and varying data:
The 3Dlabs HLSL passes all data through global variables; uniform and varying data are declared the same way, distinguished only by their qualifiers.
Cg passes all arguments as function parameters. Varying data is handled differently from uniform data: it is passed in a structure you have to declare beforehand. In that sense the Cg approach feels more “low-level”. (A rough sketch of both styles follows below.)
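
As a rough illustration only (the exact syntax of the 2002 3Dlabs proposal and of Cg may differ in detail from what is shown here; the first half uses later GLSL-style syntax, which grew out of the 3Dlabs proposal), the same trivial vertex shader in the two styles:

    // 3Dlabs HLSL / GLSL style: qualified globals, no parameter list
    uniform mat4 modelViewProj;   // uniform: constant for the whole draw call
    varying vec4 color;           // varying: written per vertex, interpolated

    void main(void)
    {
        color       = gl_Color;
        gl_Position = modelViewProj * gl_Vertex;
    }

    // Cg style: varyings live in user-declared structs with binding semantics,
    // uniforms are ordinary function parameters
    struct VertIn  { float4 position : POSITION; float4 color : COLOR0; };
    struct VertOut { float4 position : POSITION; float4 color : COLOR0; };

    VertOut main(VertIn IN, uniform float4x4 modelViewProj)
    {
        VertOut OUT;
        OUT.position = mul(modelViewProj, IN.position);
        OUT.color    = IN.color;
        return OUT;
    }

The explicit structs and binding semantics are a large part of why the Cg approach feels lower-level.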

b) Standardization of built-in functions:
3Dlabs HLSL: yes. Cg: no.
In Cg, built-in functions are part of the profile; depending on the profile, different built-in functions are available. The idea of Cg profiles is to have different profiles for different hardware, so Cg profiles are in effect a new “driver layer” above OpenGL / Direct3D which also exposes different built-in functions to the shader program. For example, there are different texture lookup functions depending on the Cg profile you use. In practice this means that a Cg shader must be written against a particular profile in order to compile. Even when the hardware feature is the same, there is no guarantee that different profiles expose it through the same function. (See the sketch below.)
A 3Dlabs HLSL shader source is always the same. In theory, every 3Dlabs HLSL shader also runs on every hardware / driver combination. In practice, a shader will still fail to compile and run if it uses features that are not available on that hardware / driver.
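
A hedged sketch of the profile problem (the details of which profiles accept what are from memory and may be off; the code itself is ordinary Cg): the following fragment program performs an arbitrary dependent texture read. A DX9-class fragment profile can compile it, while an older register-combiner-class profile will typically reject it or require the lookup to be expressed through different, profile-specific functions, so the shader ends up being written per profile.

    // Cg-style fragment program: the first texture lookup produces the
    // coordinates for the second one (a dependent texture read).
    struct FragIn { float2 uv : TEXCOORD0; };

    float4 main(FragIn IN,
                uniform sampler2D offsetMap,
                uniform sampler2D baseMap) : COLOR
    {
        float2 offset = tex2D(offsetMap, IN.uv).xy;   // first lookup
        return tex2D(baseMap, IN.uv + offset);        // dependent lookup
    }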

The poll doesn’t seem to reflect what has been discussed in various places on the Net. I was under the impression that most people were looking forward to GL2.0 (and especially the HLSL). Currently Cg has 45% of the votes and 3Dlabs’ GL2.0 HLSL has only 16%. Does anyone else find these numbers hard to believe?

Hmmm, the results have changed dramatically since this morning.

I don’t think that this poll is really designed to get statistical feedback for the ARB (since everyone knows that such polls are not representative in any way and are easy to cheat).

I’m quite sure that the idea of this poll (like all polls on opengl.org) is rather to draw attention to the topic. Compared to classical news items, polls make readers ask themselves “what do I think about it?” and get them to participate in discussions (in person with other people, in the opengl.org discussion forums, etc.). And (technical) discussions are the most valuable input for the ARB.

Originally posted by folker:
I don’t think that this poll is really designed to get statistical feedback for the ARB (since everyone knows that such polls are not representative in any way and are easy to cheat).

Agreed.

One should not forget that Cg was launched by what Dave called “the NVidia marketing machine”, whereas the GL2.0 HLSL has, I think, been reviewed far less by end developers.

Asking “do you prefer a language with plenty of demos and shader sources already available, or this new language?” is quite unfair and does not reflect the capabilities of the two languages.

Not to mention NVidia’s IP on Cg…

Julien.

Well, it’s 6:00pm Denver time and the tables have turned again. OGL2 on top right now.

Cg does need a bit of cooking before it becomes a proper “high-level” language.

But, actually, allowing profile/implementation-defined functions is a pretty good idea. It would work well with the following conditions:

  1. There is the notion of a standard shader. That is, all implementations must define some simple subset of the language. The standard subset should, perhaps, be very simple (simple enough to run on a GeForce 1, possibly; that is, not requiring dependent texture access and so forth).

  2. It is well understood that these extensions are not necessarily platform-neutral. They should be like regular OpenGL extensions.

  3. It is understood that the ARB will increase the functionality of the language with their own extensions. If the basic standard doesn’t support dependent texture accesses, then there should be an ARB_dependent_texture extension function that one can call from a shader to do the dependent texture addressing. This extension would be made available to everyone who provides this functionality.

Korval.
So you’re saying that Cg should be like an extension for vendor-specific extensions?
The whole point of OGL2.0 is to standardize this stuff.
Making more vendor-specific stuff is pointless.

The point I was making is that the language should allow you to query which functions are available on a given implementation. That way, not everybody is forced to implement a per-pixel “log” function; if someone does, I can use it if I want to.

Yes, this is much like vendor-specific extensions today. That’s why the third part is important: the ARB will frequently standardize new functionality. Taking the “log” function as an example, the ARB would quickly standardize “logARB”, and an application could test each individual implementation to see whether that functionality is available. That way, we won’t be seeing “logNV” and “logATI” unless a vendor implements theirs in a way that isn’t in line with how the spec says “logARB” should behave. It still allows for “logNV” and “logATI”, but those are mainly there for cases where the standard functionality isn’t properly implemented yet.
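
To make that concrete, here is a purely hypothetical sketch: “logARB” and the extension name “ARB_fragment_log” are invented for illustration, and the #extension-style directive is just one conceivable way of expressing the query, not part of any of the proposals being discussed here.

    // Hypothetical fragment shader using an ARB-standardized optional built-in.
    // ARB_fragment_log and logARB do not exist; they stand in for whatever
    // optional functionality the ARB might standardize as an extension.
    #extension ARB_fragment_log : require   // compilation fails cleanly if absent

    varying vec3 intensity;

    void main(void)
    {
        // If the extension is missing, the compile failure tells the application
        // to fall back to a shader that does not use logARB.
        gl_FragColor = vec4(logARB(intensity), 1.0);
    }

The application would query the implementation for the corresponding extension before deciding which shader source to load.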

The reason the current state of affairs is so bad is that the interfaces to everything are so wildly different. Look at NV_vertex_program and EXT_vertex_shader: same functionality, wildly different interfaces. This type of extensible shader language solves that problem. Not only that, in this paradigm there is a basic subset of required functionality that every implementation must provide.

Maybe.
But that would eventually lead to vendor-specific stuff again.
And then to the same mess we have today: multiple code paths.
And look at the ARB.
It works very slowly, stalled by all those pesky IP problems and so on.

About implementation-dependent functions: I think this is a backwards-looking approach: you try to support all past and existing hardware at the same time. This gives exactly the mess of different vertex program / fragment program versions we already have.
I think a forward-looking approach should avoid this path.

An important aspect is the following: current hardware is not far from the highest possible level of a fully programmable vertex / fragment unit anyway. Basically only dependent texture access and real jumps / loops are missing, and these will be supported by future hardware soon enough. At that point we will have reached the goal: the feature set of the vertex / fragment unit will be fixed (in the same way that the RenderMan shading language is basically fixed and no longer evolving), and future hardware will “only” improve performance.

Thus, because it is clear that hardware will support a fully programmable vertex / fragment unit in the near future anyway, and because this functionality won’t change any more once we get there, we should definitely define it as the standard for a forward-looking API like OpenGL 2.0.

[This message has been edited by folker (edited 07-21-2002).]

I agree that there should be a forward-looking shading language that standardizes the interface to future hardware. However, having different profiles might still be useful, as long as there is a forward-looking standard OpenGL 2.0 profile. That way you have a forward-looking standard, but it’s easier for vendors to support older hw. Having a GeForce-level standard now and improving it step by step would probably continue the mess we’re in now; design by committee is slow and often leads to bad design.

Defining profiles / implementation-defined functions for the gl2 shader language can allow current hardware (GF3+, Radeon 8500) to implement subsets of the shader language. But this should be an extension, not part of gl2, because the next generation of hardware will not need this solution. Using such subsets for older DX7-class hardware (GF1, etc.) would be much too limiting; for that I would prefer to use vendor-specific register-combiner-style shaders through a GL_GL2_shader_objects interface.

[This message has been edited by GeLeTo (edited 07-22-2002).]

I think a GL2 HLSL backend that uses NV_register_combiners could be implemented… but how it would work, and above all how fast it would be, I have no idea. It would probably fall back to software at some point.

I agree that hardware which only supports a subset of the ogl2 HLSL should also use this shader language. But I wouldn’t call that ogl2.

There should be only one ogl2 shader language, and it should provide a fully programmable vertex and fragment unit. To be called ogl2 hardware, the hardware must support the complete ogl2 shader language. I think this is the only way to end the mess of incompatible features. ogl2 should consistently target the goal that every shader runs on every piece of hardware.

However, as a transition measure, previous hardware may of course implement a subset of the ogl2 HLSL via extensions, using the ogl2 shader interface. This makes it possible to take advantage of the ogl2 HLSL immediately. But it should be clear that this is only a transition period, and such transition periods shouldn’t have a negative impact on the future-oriented design of ogl2. For this reason, exposing partial ogl2 HLSL implementations shouldn’t be part of ogl2 itself but should be exposed as extensions, as usual.
The advantage is that there is then a common GL interface for defining and using shaders.

I think this is a backwards-looking approach

God forbid that we might consider a solution that, while planning for the future, does not destroy the present. If OpenGL 2.0 is only useful for R300 or better cards, then nobody is going to switch to it for at least 3 years. Even today, you don’t see a lot of developers, even D3D developers, making significant use of DX8.0 programmability, and they have a standard language.

Above all, OpenGL 2.0 should be reasonably implementable in current hardware.

This gives exactly the mess of different vertex program / fragment program versions we already have.

You consider it a mess. I don’t. The actual mess is that these shaders are exposed through several wildly different interfaces. If we had a standard method of loading up a shader program, but with vendor-specific processing of that program, that would be relatively OK. At most, it requires writing shaders in a few languages and doing a quick if-then test or two at bind time (or, if you’re smart, at load time). Hardly a significant issue. No, the real problem comes in when EXT_vertex_shader’s interface looks so drastically different from NV_vertex_program’s.

because it is clear that hardware will support a fully programmable vertex / fragment unit in the near future anyway

The “near future”? Precisely when is that? A year from now? Two years? And what do you consider “fully programmable” anyway?

In any case, my proposal does not ignore the future. It provides for it by having the ARB supply various extensions to the language that would be considered “standard” for GL 2.0 functionality. No one is forced to use them, and a program can easily test to see what functionality exists. When most of the available hardware supports all of the ARB extensions, GL 2.1 can require them by making them part of the spec.

The only difference between this and the current GL 2.0 proposal is that it provides for a reasonable amount of backwards compatibility.

Korval, as mentioned in my previous post, I agree with you completely that hw vendors should use parts of the ogl2 standard for today’s hardware. That especially includes a standard interface for vertex / fragment programs.

But I also think we are very close to fully programmable hw anyway (as defined by the 3Dlabs ogl2 HLSL proposal). After a short remaining transition period, the development of new features in the vertex and fragment units will be finished, in the same way that new CPUs no longer provide real new features, but “only” better optimizations, accessible all the time through the same language, C / C++. No more chaos of different shader functionality. Since this will be the real quantum leap, OpenGL 2.0 should be designed to be the standardized interface for it.

As mentioned above, this means that today’s hardware can already use subsets of the OpenGL 2.0 standard. But I wouldn’t waste the name “OpenGL 2.0” on minor additional OpenGL shader features; I would reserve “OpenGL 2.0” for the quantum leap to fully programmable hw described above.

The 3Dlabs P10 hardware already has a fully programmable fragment unit. The next-generation hardware from NVidia and ATI includes control flow instructions for vertex shaders, but not yet for fragment shaders. So 3Dlabs, NVidia and ATI have already done half of the remaining work towards fully programmable hw, and I expect the hw generation after that from NVidia, ATI and 3Dlabs to be fully programmable. Maybe first prototypes in 9 months, on the market in 12 months? Perhaps I am too optimistic. On the other hand, 3Dlabs really surprised me by presenting a fully programmable fragment unit already. In the transition period, 3Dlabs, NVidia and ATI can implement subsets of the OpenGL 2.0 standard (which includes a standardized API for shaders).

Korval, you probably don’t know what hw looked like when OGL 1.0 was released.
Most of the stuff was done in software.

And about that log example:
Who needs logATI, logNV and later logARB?
The standard doesn’t say how you have to implement things.
The driver implements it in whatever way it wants: lookup tables, or even a fallback to software.

When OpenGL first appeared on consumer 3D hardware for PCs most of it was just broken or simply unsupported.

If OpenGL 2.0 exposes no hardware limits, probably exactly the same thing is going to happen: companies will try to claim OpenGL 2.0 compatibility for hardware which isn’t really up to the task, and it’s all just going to be broken.

Something much like this can already be seen in D3D, where NVidia is claiming vertex shader support on hardware that doesn’t really support it, and in many cases it’s simply broken.