HLSL vs Cg = The Poll



pocketmoon
07-18-2002, 01:47 AM
So the ARB want our input!

I'm no expert in HLSL/Cg so perhaps brighter minds would like to use this thread to cover the basic differences for me?

I notice that both have built-in noise functions :) Perhaps Ken Perlin is about to become a rich man, with his standardised noise function (Siggraph 2002) being implemented in hardware!


Rob.

folker
07-18-2002, 03:28 AM
We have played around with both of them. In my opinion, the relevant differences are the following:

a) Passing uniform and varying data:
The 3Dlabs HLSL passes all data through global variables, handling uniform and varying data in the same way and using qualifiers to declare which is which.
Cg passes all arguments as function parameters. Varying data is handled differently from uniform data: varying data is passed in a structure you have to declare beforehand. The Cg approach seems somewhat more "low-level". (A rough sketch of both styles follows at the end of this post.)

b) Standardization of built-in functions:
3Dlabs HLSL yes, Cg no.
In Cg, built-in functions are part of the profile. Depending on the profile, different built-in functions are available. The idea of Cg profiles is to have different profiles for different hardware, so Cg profiles are effectively a new "driver layer" above OpenGL / Direct3D, exposing different built-in functions to the shader program. For example, there are different texture lookup functions depending on the Cg profile you use. In practice this means that a Cg shader must be written against a particular profile in order to compile. Even if the hardware feature is the same, it is not guaranteed that different profiles expose this feature through the same function.
A 3Dlabs HLSL shader source is always the same. In theory every 3Dlabs HLSL shader also runs on every hardware / driver, but in practice a shader will fail to compile and run if it uses features which are not available on that hardware / driver.
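
To make (a) concrete, here is a rough sketch of the two styles. This is written from memory, not copied from either spec: the identifiers (tex, lightColor, uv) are invented, the 3Dlabs snippet uses the syntax of their proposal as I remember it, and tex2D is one example of a Cg built-in whose availability depends on the profile.

// Cg style (approximate): varying inputs arrive in a user-declared struct,
// uniforms are passed as extra function parameters.
struct v2f {
    float4 color : COLOR0;
    float2 uv    : TEXCOORD0;
};

float4 main(v2f IN, uniform sampler2D tex, uniform float4 lightColor) : COLOR
{
    return IN.color * lightColor * tex2D(tex, IN.uv);
}

// 3Dlabs HLSL style (approximate): uniform and varying data are both
// plain global declarations, distinguished only by their qualifiers.
uniform sampler2D tex;
uniform vec4 lightColor;
varying vec4 color;
varying vec2 uv;

void main()
{
    gl_FragColor = color * lightColor * texture2D(tex, uv);
}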

PH
07-18-2002, 01:08 PM
The poll doesn't seem to reflect what has been discussed in various places on the net. I was under the impression that most people were looking forward to GL2.0 (and especially the HLSL). Currently Cg has 45% of the votes and 3Dlabs' GL2.0 HLSL has only 16%. Does anyone else find these numbers hard to believe?

IT
07-18-2002, 01:34 PM
Hmmm, the results have changed dramatically since this morning.

folker
07-18-2002, 01:39 PM
I don't think that this poll is really designed to get statistical feedback for the ARB (since everyone knows that such polls are not representative in any way and are easy to cheat).

I'm quite sure that the idea of this poll (as of all polls on opengl.org) is rather to draw attention to this topic. Compared to classical news, polls make readers ask themselves "what do I think about it?" and get them to participate in discussions (in person with other people, in the opengl.org discussion forums, etc.). And (technical) discussions are the most valuable input for the ARB.

Julien Cayzac
07-18-2002, 02:39 PM
Originally posted by folker:
I don't think that this poll is really designed to get statistical feedback for the ARB (since everyone knows that such polls are not representative in any way and are easy to cheat).


Agreed.

One should not forget that Cg has been launched by what Dave called "the NVidia marketing machine", whereas the GL2.0 HLSL has seen far less review by end developers, I think.

Asking "do you prefer a language with many demos and shader sources already available, or this new language?" is quite unfair and does not reflect the capabilities of the two languages.

Not to mention NVidia's IP on Cg...

Julien.

IT
07-18-2002, 04:02 PM
Well, it's 6:00pm Denver time and the tables have turned again. OGL2 on top right now.

Korval
07-18-2002, 05:02 PM
Cg does need a bit of cooking before it becomes a proper "high-level" language.

But, actually, allowing profile/implementation-defined functions is a pretty good idea. It would work well with the following conditions:

1) There is the notion of a standard shader. That is, all implementations must support some simple subset of the language. That baseline should perhaps be very simple, possibly simple enough to fit on a GeForce 1 (that is, not requiring dependent texture access and so forth).

2) It is well understood that these extensions are not necessarily platform-neutral. They should be like regular OpenGL extensions.

3) It is understood that the ARB will extend the functionality of the language with their own extensions. If the basic standard doesn't support dependent texture accesses, then there should be an ARB_dependent_texture extension function that one can call from a shader to do the dependent texture addressing. This extension would be made available to everyone who provides this functionality.

CopyCat
07-19-2002, 01:52 PM
Korval,
So you're saying that Cg should be like an extension mechanism for vendor-specific extensions?
The whole point of OGL2.0 is to standardize this stuff.
Making yet more vendor-specific stuff is pointless.

Korval
07-19-2002, 04:26 PM
The point I was making is that the language should allow you to query which functions are available on the given implementation. That way, not everybody is forced to implement a per-pixel "log" function; if someone does, I can use it if I want.

Yes, this is much like vendor-specific extensions today. That's why the 3rd part is important: the ARB will frequently standardize new functionality. Taking the "log" function as an example, the ARB would quickly standardize "logARB", and each individual application would be able to test whether said functionality is available. That way, we won't be seeing "logNV" and "logATI" unless a vendor implements theirs in a way that isn't in line with how the spec says "logARB" should behave. It still allows for "logNV" and "logATI", but those are mainly for functionality that hasn't been properly standardized yet.

The reason the current state of affairs is so bad is that the interfaces to everything are so wildly different. Look at NV_vertex_program and EXT_vertex_shader: same functionality, wildly different interface. This type of extensible shader solves that problem. Not only that, there is a basic subset of required functionality in this paradigm that an implementation must implement.
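
For comparison, the host-side idiom this mirrors already exists today: an application scans the extension string before deciding which code path (and which shader source) to use. A minimal C sketch against standard GL 1.x calls; the shader-side query I'm proposing would of course be new, and GL_ARB_dependent_texture in the comment is the hypothetical extension from my earlier post, not a real one.

/* Requires a current GL context. Era-typical extension check: scan the
 * extension string for a token before enabling a code path.
 * (A careful app should match whole space-separated tokens, not substrings.) */
#include <string.h>
#include <GL/gl.h>

static int has_extension(const char *name)
{
    const char *ext = (const char *) glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* usage: only pick the shader path that needs dependent texturing when the
 * (hypothetical) functionality is advertised:
 *   int use_fancy_path = has_extension("GL_ARB_dependent_texture");        */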

CopyCat
07-20-2002, 01:51 AM
Maybe.
But that would eventually lead to vendor-specific stuff.
And then right back to a mess like today - multiple code paths.
And look at the ARB.
It works very slowly - stalled by all those pesky IP problems etc.

folker
07-21-2002, 04:51 AM
About implementation-dependent functions: I think this is a backwards-looking approach: you try to support all past and existing hardware. This gives exactly the mess of different vertex program / fragment program versions we already have.
I think a forward-looking approach should avoid this path.

An important aspect is the following: current hardware is not far away from the highest possible level of a fully programmable vertex / fragment unit anyway. Basically only dependent texture access and real jumps / loops are missing. But these will soon be supported by future hardware anyway. At that point we have reached our goal, the features of the vertex / fragment unit are fixed (in the same way as the RenderMan shading language is basically fixed and not evolving anymore), and future hardware will "only" improve performance.

Thus, because it is clear that hardware will support a fully programmable vertex/fragment unit in the near future anyway and this functionality won't change anymore, we definitely should define that as the standard for a forward-looking API like OpenGL 2.0.


[This message has been edited by folker (edited 07-21-2002).]

harsman
07-22-2002, 01:25 AM
I agree that there should be a forward-looking shading language that standardizes the interface to future hardware. However, having different profiles might still be useful, as long as there is a forward-looking standard OpenGL 2.0 profile. That way, you have a forward-looking standard but it's easier for vendors to support older hw. Having a GeForce-level standard now and improving it step by step would probably continue the mess we're in now; design by committee is slow and often leads to bad design.

GeLeTo
07-22-2002, 02:52 AM
Defining profiles/implementation-defined functions for the gl2 shader language could allow current hardware (GF3+, Radeon 8500) to implement subsets of the shader language. But this should be an extension, not a part of gl2, because this solution will not be needed for the next generation of hardware. Using such subsets for older DX7 (GF1, etc.) hardware would be much too limiting. I would prefer to use vendor-specific register-combiner-style shaders through a GL_GL2_shader_objects interface.

[This message has been edited by GeLeTo (edited 07-22-2002).]

CopyCat
07-22-2002, 03:12 AM
I think a GL2 HLSL backend could be implemented on top of NV_register_combiners... but how it would work, and above all how fast it would be - no idea... it would probably fall back to software at some points.

folker
07-22-2002, 03:41 AM
I agree that hardware which only supports a subset of the ogl2 HLSL should also use this shader language. But I wouldn't call that ogl2.

There should be only one ogl2 shader language, providing a fully programmable vertex and fragment unit. To be called ogl2 hardware, the hardware must support the complete ogl2 shader language. I think this is the only way to end this mess of incompatible features. ogl2 should consistently target the goal that every shader runs on every piece of hardware.

However, as a transition, previous hardware may of course implement a subset of the ogl2 HLSL via extensions, using the ogl2 shader interface. This makes it possible to take immediate advantage of the ogl2 HLSL. But it is clear that this is only a transition period, and such transition periods shouldn't have a negative impact on the future-oriented design of ogl2. Because of this, exposing partial ogl2 HLSL implementations shouldn't be part of ogl2 itself, but should be exposed as extensions as usual.
The advantage is that there is a common GL interface for defining and using shaders.

Korval
07-22-2002, 09:42 AM
I think this is a backwards-looking approach

God forbid that we might consider a solution that, while planning for the future, does not destroy the present. If OpenGL 2.0 is only useful for R300 or better cards, then nobody is going to switch to it for at least 3 years. Even today, you don't see a lot of developers, even D3D developers, making significant use of DX8.0 programmability, and they have a standard language.

Above all, OpenGL 2.0 should be reasonably implementable in current hardware.


This gives exactly the mess of different vertex program / fragment program versions we already have.

You consider it a mess. I don't. The actual mess is that these shaders are interfaced in several different ways. If we had a standard method of loading up a shader program, but with vendor-specific processing of that program, that would be relatively OK. At most, it requires writing shaders in a few languages, and doing a quick if-then test or two at bind-time (or, if you're smart, at load time). Hardly a significant issue. No, the real problem comes in when EXT_vertex_shader's interface looks so drastically different from NV_vertex_program.


because it is clear that hardware will support a fully programmable vertex/fragment unit in the near future anyway

The "near future"? Precisely when is this? A year from now? Two years? What do you consider "fully programmable" anyway?


In any case, my proposal does not ignore the future. It provides for it, by having the ARB supply various extensions to the language that would be considered "standard" for GL 2.0 functionality. No one is forced to use them, and a program can easily test to see what functionality exists. When most of the available hardware supports all of the ARB extensions, then GL 2.1 can require these extensions by making them part of the spec.

The only difference between this and the current GL 2.0 is that it provides for a reasonable amount of backwards compatibility.

folker
07-22-2002, 12:03 PM
Korval, as mentioned in my previous post, I agree with you completely that hw vendors should use parts of the ogl2 standard for today's hardware. This especially includes a standard interface for vertex / fragment programs.

But I also think we are very close to fully programmable hw anyway (as defined by the 3Dlabs ogl2 HLSL proposal). After a short remaining transition period, development of new features in the vertex and fragment unit will be finished. In the same way as new CPUs don't provide new real features anymore, but "only" better optimizations, accessible all the time through the same language, C / C++. No chaos of different shader functionality any more. Since this will be the real quantum leap, OpenGL 2.0 should be designed to be the standardized interface for that.

As mentioned above, this means that today's hardware can already use subsets of the OpenGL 2.0 standard. But I wouldn't waste the name "OpenGL 2.0" on minor additional OpenGL shader features. I would use "OpenGL 2.0" for the quantum leap of fully programmable hw as described above.

The 3Dlabs P10 hardware already has a fully programmable fragment unit. The next-generation hardware from NVidia and ATI includes control flow instructions for vertex shaders but not yet for fragment shaders. So 3Dlabs, NVidia and ATI have already done half of the missing work towards fully programmable hw, and I expect that the following hw generation from NVidia, ATI and 3Dlabs will be fully programmable. Maybe first prototypes in 9 months, on the market in 12 months? Perhaps I am too optimistic. On the other hand, 3Dlabs really surprised me by presenting fully programmable fragment hardware already now. In the transition period, 3Dlabs, NVidia and ATI can implement subsets of the OpenGL 2.0 standard (which includes a standardized API for shaders).

CopyCat
07-22-2002, 12:23 PM
Korval, you probably don't know what HW looked like when OGL1.0 was released.
Most of the stuff was done in software.

And about that log example:
who needs logATI, logNV and later logARB?
The standard doesn't say in which way you implement stuff.
The driver implements it... in whatever way it wants - lookup tables, or even a fallback to software.

Jurjen Katsman
07-22-2002, 12:42 PM
When OpenGL first appeared on consumer 3D hardware for PCs most of it was just broken or simply unsupported.

If OpenGL 2.0 exposes no hardware limits, probably exactly the same thing is going to happen. Companies will try to claim OpenGL 2.0 compatibility for hardware which isn't really up to the task, and it's all just going to be broken.

Something much like this can already be seen in D3D, where NVidia is claiming vertex shader support on hardware that doesn't really support it -> it's broken in many cases.

Korval
07-22-2002, 12:56 PM
In the same way as new CPUs don't provide new real features anymore, but "only" better optimizations

Since when did CPUs not provide new features? Doing sin/cos as an opcode is not something that all CPUs provide. Vector math (MMX, 3DNow!, SSE, etc.) is also a new feature. CPUs are constantly evolving new features.


No chaos of different shader functionality any more.

That's not going to happen in a year. There are still features that a few vendors have (per-pixel math operations like sin, log, etc.) that others won't, thus creating a dichotomy of languages.

And is there really a problem with having two slightly different feature sets in the shader that are queryable? As long as the interface is the same, I can write different shaders easily enough. As someone mentioned, writing shaders is only a small part of writing any rendering system. It is hardly terrible that I may need to write several shaders for various hardware. As long as the interface to these shaders is the same, there isn't much of a problem.

folker
07-22-2002, 01:00 PM
Originally posted by Jurjen Katsman:
...

There already was a long and deep discussion about exposing hardware limits in this discussion forum.

Jurjen Katsman
07-22-2002, 01:02 PM
Folker: As that discussion seems to continue here (features, hardware limits, same thing), I feel justified commenting on it.

folker
07-22-2002, 01:11 PM
Originally posted by Korval:
...

Sin/cos/log are not new features, they are only performance optimizations, because sin/cos were also calculated by sub-routines on older CPUs. The same is possible for GPUs. Maybe the first GPU implementations will provide only very inaccurate versions of sin/cos/log, e.g. a simple quadratic approximation for sin/cos (a sketch of what I mean follows below). But I don't see a problem there. So I don't think that sin/cos/log are a real problem.
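
Just to illustrate (my own sketch, not taken from any driver or spec): the classic parabola fit costs two multiplies and a subtract, is exact at 0, pi/2 and pi, and could be the kind of thing a compiler expands a built-in sin() into on hardware without a native instruction. In C-like shader syntax:

// sin(x) ~= (4/pi)*x - (4/pi^2)*x*x, valid for 0 <= x <= pi
// (exact at x = 0, pi/2 and pi; arguments outside that interval
//  would need a range reduction step first)
float sin_approx(float x)
{
    const float A = 4.0 / 3.14159265;                  // 4/pi
    const float B = 4.0 / (3.14159265 * 3.14159265);   // 4/pi^2
    return A * x - B * x * x;
}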

I agree with you that the first important step is a standard shader interface. This will make life a lot easier, agreed completely. However, I think having to support several shader code paths is indeed ugly. Our experience is that it costs as many or even more development resources as developing rendering code for both OpenGL and Direct3D. And if different hardware only differs in "slightly different feature sets", then it should be no problem for the hw vendors to implement one standard.

But this will only happen if there is one common goal. Otherwise, every hardware vendor will develop in a different direction: "It would be easy to implement, but we think that log is not important, so we won't support it." In Direct3D, every hw vendor is somehow forced to support the DX vertex / pixel program standard defined by Microsoft. For OpenGL, OpenGL 2.0 could define a goal in a similar way. Then a hw vendor will implement log in order to be able to call its hardware OpenGL 2.0 compliant.

folker
07-22-2002, 01:27 PM
Originally posted by Jurjen Katsman:
Folker: As that discussion seems to continue here (features, hardware limits, same thing), I feel justified commenting on it.

I think there are important differences between hardware limits and (other) features.

First, in contrast to features, hardware limits are hard to define (see the asm vs. HLSL discussion; in the end the only possible solution is a GL function along the lines of "can this shader be executed?"). Second, the step to supporting all features of fully programmable gfx hw is probably smaller than supporting unlimited resources.

So I think in practice there are three steps:
a) A standard interface for shaders. Useful immediately, also for today's hardware.
b) A standard, fully programmable shader language. Will naturally be supported by hardware in the near future. Currently the only missing hw features are flow control and generic texture lookup (sin/cos/log etc. are sub-routines).
c) No hardware limits. Possible, but requires additional work (e.g. an F-buffer). See the long previous discussions about this topic. But maybe this problem vanishes automatically if future hardware has limits which are effectively infinite in practice.

These steps are somehow natural. The "only" question is: what should be required to be "OpenGL 2.0 compliant", and what should be left to OpenGL 2.0 sub-functionality extensions (compatible with full OpenGL 2.0)? In my opinion, OpenGL 2.0 should require all of them. If every hw vendor cries "c) is not possible!!!", then - but only if there is really no other way - OpenGL 2.0 should require only a) and b).
As mentioned, in the end this is only a naming question of what you want to call "OpenGL 2.0". But since b) is already quite near, and OpenGL 2.0 should be future-oriented and define the direction for the future (instead of focusing too much on current hardware), I would suggest that hw can be called OpenGL 2.0 only if it supports all of a), b) and c). At minimum a) and b).

For example:
a) alone can be called OpenGL 1.5
a) together with b) can be called OpenGL 1.6
a) and b) and c) is called OpenGL 2.0.

folker
07-22-2002, 01:32 PM
One additional note:
I think it is a good idea if OpenGL 2.0 sets a vision / direction for the future, instead of only standardizing existing features (like the two-year-old functionality in ARB_vertex_program). I think setting a standard for the future reflects exactly the spirit of OpenGL. And this was the reason why OpenGL didn't have to change for such a long time, whereas D3D changed its architecture again and again in the meantime.

So I think it is important that OpenGL 2.0 sets a standard for the future, instead of only "not ignoring" the future.

dorbie
07-22-2002, 01:34 PM
Didn't the minutes of the last ARB meeting record that NVIDIA weren't offering Cg to the ARB as part of OpenGL?

CopyCat
07-23-2002, 01:35 AM
BTW, why are we talking about modern HW and today's HW differences?
A standard isn't done in days, or even months.
It will probably take at least 1-2 years until OGL2.0 is approved and we have the first implementations.
And by then probably all modern mainstream HW will be OGL2.0 compatible.
At that point no one will think about the GF1 or GF2 (look at modern games... a GF1 or GF2 isn't enough for them, and a GF3 is more like the minimum).
Until then most vendors will probably be able to support 100% of OGL2.0.

And remember we are talking about an HLSL here - not a SIMD language like NV_vertex_program!
The language itself will do compile-time optimizations etc.

Thaellin
07-23-2002, 04:47 AM
Originally posted by dorbie:
Didn't the minutes of the last ARB meeting record that NVIDIA weren't offering Cg to the ARB as part of OpenGL?

That's what I remember reading, as well. I'm confused. I guess nVidia changed their minds? Or would they be retaining ownership of the language specification and giving a license for usage by those implementing OpenGL?

-- Jeff

Cab
07-23-2002, 08:01 AM
http://biz.yahoo.com/prnews/020723/sftu012_1.html

Thaellin
07-23-2002, 09:05 AM
That press release just talks about the compiler receiving an open-source treatment. In order to become part of the OpenGL 2.0 specification as the standard HLSL, I believe nVidia would be required to give up control over the /language specification/, not just the compiler source.

The language specification has never been mentioned as part of the package that nVidia would be releasing control of, and ARB notes show nVidia reviewing the Cg language while specifically stating that they were not offering it to the ARB for consideration.

That is the part I'm curious about.

It's good to know the compiler source has been released, though - it /should/ allow any individual to write back-end profiles for whatever shader language they'd like to have come out of the compiler...

-- Jeff

Korval
07-23-2002, 10:45 AM
Why are we talking about modern HW and today's HW differences?
A standard isn't done in days, or even months.
It will probably take at least 1-2 years until OGL2.0 is approved and we have the first implementations.
And by then probably all modern mainstream HW will be OGL2.0 compatible.

By modern, of course, you mean "cutting edge $400+ card" rather than "mainstream HW that 80% of the gaming public has in their machines." I'm not interested in an API that doesn't support the mainstream hardware.

In one to two years, mainstream hardware will be DX8.0/8.1 cards (low-end GeForce3's and Radeons). That means OpenGL 2.0 will be absolutely, totally, and in all other ways useless to us game developers unless it can actually make use of the features of those cards.

Most modern games are built assuming the user has a GeForce 1/2. And yes, those are enough to run them at reasonable resolutions (though not at 1280x1024 or 1600x1200 like most benchmarks use). A GeForce3 is hardly the minimum for the vast majority of games, unless you need 100+ fps.

Mainstream tends to lag behind current hardware by a good two years. When the GeForce 3 came out, most games were being developed assuming that the user had a TNT2 of some kind. When a GeForce 5 comes out, you're looking at the GeForce2 MX as a base. When a GeForce 7 comes out (with full GL2.0), you're looking at GeForce 3s being prevalent. Few are the game developers who are going to waste their time with GL 2.0 when it will only touch a small fraction of the installed base. It's just ridiculous to develop an API/language solely for the purpose of supporting hardware that:

1: Doesn't even exist today
2: Won't be mainstream for the next 3-4 years.

It's easy for us programmers to go off and buy a R300-based card and assume that this is what everybody has. But that's simply not the case, and game developers know it. The ARB needs to understand that whatever they develop must be backwards compatible to the level of the GeForce3. Otherwise, the gaming community (minus a few hardheads like Carmack) will abandon GL as a game-programming language.

davepermen
07-23-2002, 10:58 AM
Originally posted by Korval:
In one to two years, mainstream hardware will be DX8.0/8.1 cards (low-end GeForce3's and Radeons). That means OpenGL 2.0 will be absolutely, totally, and in all other ways useless to us game developers unless it can actually make use of the features of those cards.

and in one or two years dx8 / dx8.1 games will be out everywhere. and in one or two years people will start developing the games that come out 4 to 6 years from now, and by then dx9 will be an old standard.

gl2 should NEVER NEVER NEVER NEVER be a standard with a thousand version fallbacks for older hw. it should draw a final line, finishing all that mess. it doesn't matter if it is mainstream 4 years from now, or 5. until then nvidia is there and feeds us with new versions of cg every half year. gl2 wants to standardise shaders. that's not possible with current hw, and not even with the next-gen hw, even though it is very powerful.

gl1 was a standard for highend pcs, not for gamers.

folker
07-23-2002, 02:42 PM
Originally posted by davepermen:
gl2 should NEVER NEVER NEVER NEVER be a standard with a thousand version fallbacks for older hw. it should draw a final line, finishing all that mess. it doesn't matter if it is mainstream 4 years from now, or 5. until then nvidia is there and feeds us with new versions of cg every half year. gl2 wants to standardise shaders. that's not possible with current hw, and not even with the next-gen hw, even though it is very powerful.

Where can I sign your petition? ;)

Korval
07-23-2002, 04:12 PM
gl1 was a standard for highend pcs, not for gamers.

And the only reason OpenGL was adopted for games at all was that it was superior to the jumbled mess of trash that was D3D 3.0. Had Microsoft done a decent job with D3D earlier, they wouldn't have had to contend with GL as competition to D3D.

Precisely what is so bad about having backwards-compatibility extensions in the shader language? What is so terrible about designing the language to be useful in writing GeForce3 shaders? It doesn't hurt the language in any way to do so, because they are all extensions that one can choose not to use. The only thing it does is make GL 2.0 more inclusive to hardware rather than exclusive.


gl2 wants to standardise shaders. that's not possible with current hw, and not even with the next-gen hw, even though it is very powerful.

And, yet, D3D 9 seems to have done a reasonable job of it. And I'll bet that D3D 10 does a reasonable job of standardizing its additional features, too.

MikeC
07-23-2002, 04:16 PM
Originally posted by folker:
Where can I sign your petition?

Amen, brother.

I really don't understand the "must work on current mainstream hardware" viewpoint. Neither side disputes that generalized programmability is the way to go. Future hardware will be able to do that. Current hardware can't, for varying values of "can't". Cut the pie any way you like, the changeover is going to be messy. So what? GL2 is going to be backward compatible. For a few years folks will be writing engines with an all-singing all-dancing GL2 backend and fallback extended-GL1.x backends, just as they write NVGL and ATiGL and fallback standard GL backends today. There's no need to bastardize GL2 when extended GL1.x exposes the current idiosyncratic functionality as cleanly as it can be exposed. It's ugly but it won't last for ever.

GL2 is supposed to be the light at the end of the tunnel. If it's just going to be another mess of vendor-specific subsets when we get there, what's the point?

IT
07-23-2002, 05:07 PM
I agree, MikeC.

Keep OGL2 clean and get it out now. That way, in 2-3 years - however long it takes to make a game - the mainstream will have the hardware to support it, and games should have fewer bugs, because effort is put into the actual game instead of into debugging graphics and whatnot across 25 versions of hardware extensions out there.

If a clean slate isn't used, then things keep staying like that one golfing joke (for those who've heard it): hit the ball, drag Fred, hit the ball, drag Fred ...