Portability



John Kessenich
04-13-2004, 11:46 AM
Hi,

As an author of the OpenGL Shading Language Specification, I'm concerned about its portability.

One thing the specification says regarding the motivation of a high-level language is
A desire to expose the extended capability of the hardware has resulted in a vast number of extensions being written and an unfortunate consequence of this is to reduce, or even eliminate, the portability of applications, thereby undermining one of the key motivating factors for OpenGL. Now, a portable language is being introduced to address this problem. But, if it is immediately loaded with a lot of vendor specific extensions, then we end up in the same position we started, regarding difficultly managing a lot of extensions and lack of portability.

If a vendor has a new graphics hardware capability that requires an actual change to the basic semantics of this language (rare), then changing the language with an extension to support this new feature makes sense. But if the language is freely changed for reasons other than new feature support, that's something that will hurt everyone, without anyone seeing a better picture on the screen.

I hope this community will demand that their vendors provide portable versions of the language and ensure that extensions have significant value not otherwise attainable with the portable version of the language.
Thanks,
JohnK

Tom Nuydens
04-13-2004, 11:47 PM
My main gripe with this situation is not that NVidia extended the language -- I see absolutely nothing wrong with that. The only condition, of course, is that they provide an actual OpenGL extension spec (e.g. GL_ARB_shading_language_101 or whatever), and that this specification clearly documents the added functionality. Maybe this is what EXT_Cg_shader does, but unfortunately the spec to that hasn't been made available yet.

Of course I would still need a way to switch the added stuff off. For compatibility's sake, I need to be able to specify which version of the language spec I want my shaders to compile against. We had this with the "!!ARBfp1.0"-style header in the "old" extensions, we should have it in GLSL as well (e.g. in the form of a preprocessor directive). If this directive is absent, my personal preference would be to default to the unextended specification, but I'm not fussy about this.
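
To make this concrete, a shader might start with something along these lines. To be clear, the directive name and form here are purely hypothetical; nothing like this exists in the spec yet:

#version 100              // hypothetical: compile strictly against the GLSL 1.00 spec
// #version 100 extended  // hypothetical opt-in to documented vendor additions

void main()
{
    gl_FragColor = vec4(1.0);
}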

It's important that vendors agree on the form of these directives, though. I don't want to have to add one directive that tells ATI's compiler to adhere to the GLSL version 1.0 spec, and a second one to tell NVidia's compiler to do the same thing. Implementations must also respect the directives when they are there. We don't need things like the ARB_fp shadow mapping disaster to happen again.

-- Tom

Mazy
04-14-2004, 12:08 AM
The switch should be to switch extended stuff ON, and by default it should conform to the spec. And I agree that you should only add things that make a real difference for the underlying HW. Why add a typecast like (vec3)variable when vec3(variable) exists? It seems only confusing. But adding a refract function may be efficient, since it's possible to let the HW calculate that instead of doing it in multiple instructions. (Still, it should be enabled by a #pragma extended_set_name.)
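
A sketch of what I mean (the pragma and the extension set name are made up for illustration; whether refract is already core or a vendor addition is exactly the kind of thing the extension document would state):

#pragma extended_set_name(VENDOR_extra_builtins) // hypothetical opt-in

varying vec3 incident;
varying vec3 normal;

void main()
{
    // with the extension enabled, refract() could map to one HW
    // instruction instead of a sequence of arithmetic ones
    vec3 r = refract(normalize(incident), normalize(normal), 0.66);
    gl_FragColor = vec4(r, 1.0);
}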

And while we're at it, it would be nice to specify which version of GLSL you want to compile against. When GLSL extends to _102 or similar, it would really help developers to be able to test a particular shader against a _100 compiler without having to reinstall old drivers. Maybe that shouldn't be part of the driver, but in that case we need an offline compiler that can test shaders against different versions.

Cab
04-14-2004, 12:43 AM
John Kessenich, I agree with you.

In this case, there are things in the NVIDIA extension that I like, such as the inclusion of the half type, where you can use things like:
#ifndef half
#define half float
#endif
#ifndef half2
#define half2 vec2
#define half3 vec3
#define half4 vec4
#endif

So you don't have problems with other hardware.
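
For instance, a shader written against NV's half types would then still compile unchanged elsewhere (a sketch; it assumes the defines above are active on hardware without half support, and that NV's half constructors behave like the float ones):

half4 scale(half4 color, half gain)
{
    // on NV hardware this can run at half precision; with the defines
    // above it becomes plain float/vec4 math everywhere else
    return color * gain;
}

void main()
{
    gl_FragColor = vec4(scale(half4(gl_Color), half(0.5)));
}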

But other things, like casting or the integer-to-float promotion, I prefer not to have in.
Other things, like aliasing varying variables, are in my opinion just wrong.

Each C/C++ compiler has its own specifics. But they have options to disable language extensions (for example, in Visual Studio you can find this option in project properties->C/C++->Language) and/or to show portability issues.
In my opinion, because of the nature of GLSL, where shaders are compiled/linked on each machine, those options should be enabled by default, and there should be something like #pragma warning(disable:GLSL_portability) to disable them.
This way, if you are aware that you are using those language extensions but don't worry about it, you will not see those warnings in your log.
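
Something like this, say (the pragma and warning name are my hypothetical suggestion, modeled on MSVC, not an existing mechanism; the shader body compiles only where the vendor extension allows it):

#pragma warning(disable : GLSL_portability) // hypothetical, MSVC-style

void main()
{
    // on a compiler with vendor extensions enabled, this line would
    // otherwise log a portability warning, since strict GLSL has no
    // implicit int-to-float promotion
    float x = 25.0 * 4;
    gl_FragColor = vec4(x);
}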

Hope this helps.

Jan
04-14-2004, 06:15 AM
I agree with all of you.
I think Mazy's idea is very good. At the moment we only have _100, but that will change some day, and there should be a way to handle this.

However, I don't think that by default the compiler should spit out warnings; I think it should simply not compile! A warning is still a thing one needs to check.
In my opinion a good compiler should
a) by default not compile a wrong shader
b) put out warnings, if you use an extended version but want to test for compatibility
c) only compile without warnings if you explicitly state, that you use an extended version

Jan.

V-man
04-14-2004, 07:17 AM
It is quite obvious this thread and others are stemming from the "Tom's demo" thread.


Originally posted by John Kessenich:

I hope this community will demand that their vendors provide portable versions of the language and ensure that extensions have significant value not otherwise attainable with the portable version of the language.
Thanks,
JohnK

1. Agreed, but the flexibility the current NV implementation offers is nothing to cry about. It's a flexibility, a bonus, an add-on, an extra... it doesn't need an extension at all.
2. I especially agree with the second point. If too many extensions are released in a short time, it will be ugly. I'm assuming you mean vendor-specific extensions. Also, I find it ridiculous to have extensions that simply enable us to write code slightly differently.

As for what Tom said, I was thinking the same thing. How do you specify the version? It's possible to have a GL function that sends a command line string.

evanGLizr
04-14-2004, 09:12 AM
Originally posted by Cab:
But the other things like casting or the integer to float promotion are things that I prefer not having them in.

I, for one, think that without integer to float promotion, the inability to compile pow(3.0f, 16) or 25.0f * 4 is just lame, especially since more complex things like float * vec3 are allowed.

So either you extend all the operations to be defined when you operate floats and ints (nice combinatorial explosion) or you allow promotion. I would go for the promotion.
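
To spell out what does and doesn't compile under the strict 1.00 reading (a sketch as I understand the rules; note that GLSL itself doesn't even have the 'f' suffix as far as I can tell, so I write plain float literals):

void main()
{
    float a = 25.0 * 4.0;       // fine: float * float
    float c = pow(3.0, 16.0);   // fine: pow(float, float)
    vec3  e = 2.0 * vec3(1.0);  // fine: float * vec3 is defined
    // float b = 25.0 * 4;      // rejected in strict GLSL: no int promotion
    // float d = pow(3.0, 16);  // rejected: there is no pow(float, int)
    gl_FragColor = vec4(e * a * c, 1.0);
}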

Regarding the portability issues: in my book, NVIDIA's current implementation of GLSL is outright wrong. I don't care if it's more functional or better; if you digress from the standard it's no longer the standard, it's a different beast.

I'm ok with any extensions they want to add and they can switch them on in any way they want: glHints, #pragmas or whatever, but invalid programs should not compile by default.

bobvodka
04-14-2004, 11:46 AM
Originally posted by V-man:
1. Agreed, but the flexibility the current NV implementation offers is nothing to cry about. It's a flexibility, a bonus, an add-on, an extra ...
doesn't need an extension at all.
But it also makes the code unportable and undermines the whole point of GLSL.
Look at it this way: I write some GLSL code on my 9800XT, it compiles, works, and looks pretty. I then send you the program (assuming you have a GFFX), you run it, and tada, it all works and we both go 'ooooo, isn't it pretty'.
Then you write a shader using the current drivers with their 'added bonus of flexibility'; it runs and looks pretty on your machine. You then send it to me, I try to run it on my 9800XT, and it goes 'arrgh, it's wrong!' at me, at which point you either have to (a) fix it or (b) write a new version for all the ATI users out there, both of which cost you time and effort instead of just following the spec and writing proper code like you should have done.
This is why it's a bad, bad idea to let this stuff pass by default.

Mazy
04-14-2004, 01:00 PM
evanGLizr: 2.2*2 = 4.4 or 4.0?

float * float always ends up with a float.
float * int must specify a result.

So instead the spec says that a float must be written as 1.0. Java has the same specification, and I don't believe anyone has made a relaxed Java compiler.
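
In other words, if 2.2 * 2 were allowed, the spec would have to commit to one of these two readings (a sketch; strict GLSL rejects the mixed expression outright):

void main()
{
    float asFloat = 2.2 * 2.0;            // promote the int: result 4.4
    float asInt   = float(int(2.2) * 2);  // truncate the float first: result 4.0
    gl_FragColor = vec4(asFloat - asInt); // 0.4 of difference between the readings
}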

kingjosh
04-14-2004, 01:45 PM
I, for one, think that without integer to float promotion, the inability to compile pow(3.0f, 16) or 25.0f * 4 is just lame, especially since more complex things like float * vec3 are allowed.
A vec3 is a 3-component vector of floats! Of course this should work. If your code were float * ivec3 it would not compile. I think the main issue here is that an integer looks like "1" while a float looks like "1.", "1.0", etc. An integer is not a float, and a float is not an integer. Why intentionally add ambiguity to a new language?

PanzerSchreck
04-14-2004, 01:46 PM
I back up evanGLizr's statement. Especially if you're used to a language like Delphi, which is smart and automatically uses the datatype that fits an operation, it's in my eyes totally annoying that you have to write, e.g., an explicit 1.0 * 4.4 to get a float. It would be much better for the workflow if you could just write 1 * 4.4 and the compiler determined that this will only fit into a float and not an int.
So how smart can a compiler be if something like MyFloat := 1 * 4.4 results in an error? If the compiler knows what datatype the result of an operation has, then why not let it promote both operands?

And as far as any vendor extending GLSL via their own extensions: I don't like it. Especially if you're a hobbyist and you write shaders on HW that supports those extensions, you'll get problems on other HW. And especially as NV has their Cg, I don't really understand why they feel the urgent need to "fix" (in my opinion, make worse) GLSL. If they don't like it, why don't they just implement GLSL as it is in their drivers and do additional stuff in their own ("oh so much better...") shading language? That's what's so great about a standard: you do it on one graphics card and it'll work on all others (that support the standard). But if one vendor throws in new features, then that standard will (mostly over time) get more and more loose and fall apart. In my eyes NVidia is going the wrong way; it's kind of like extending a DIN A4 paper by some millimeters so that you can write a few more letters on it... sure, it's a "feature", but if you get used to it you'll have problems when you have to return to the standard.

Just to sum it up: if the change introduced into GLSL by a new extension isn't a huge leap, then I'm strictly against that extension.

Korval
04-14-2004, 03:37 PM
As an author of the OpenGL Shading Language Specification

Ahh, so you're the one responsible for some of the nonsense in the language ;)


An integer is not a float, and a float is not an integer. Why intentionally add ambiguity to a new language?

You mean, besides the fact that virtually all programming languages allow promotion of plain numbers? The only reason there is a difference between an int and a float is that floats sometimes have problems with integral precision.

The numeric value "1" is functionally identical to the numeric value "1.0". With the exception of scientific precision, these refer to the same number. As such, when we use them in code, we would like code to understand that there is no functional difference between a literal number times any numeric type.

In short, numbers should not have a type until they need to. Once they do, the type should be chosen in the most reasonable fashion possible, i.e. the one that allows the code to compile to the programmer's intent. If I write "1 * float", we all know what I'm talking about. Every compiler for C, C++, C#, Pascal, Java, every form of BASIC, etc. knows what I'm talking about. Why can't glslang? Is it just too obvious for you what the correct behavior ought to be?

Not allowing automatic promotion of literals is just nonsense. A case can be made for actual integer values (i.e., requiring explicit cast/constructor operations), but literals do not have, nor need, types. It is only in the context of their use that their type needs to be determined.

This doesn't make nVidia right, btw. Slipping this functionality in is still wrong. They just have a good point about the language being poorly designed.

evanGLizr
04-14-2004, 04:18 PM
Originally posted by kingjosh:

I, for one, think that without integer to float promotion, the inability to compile pow(3.0f, 16) or 25.0f * 4 is just lame, especially since more complex things like float * vec3 are allowed.
A vec3 is a 3-component vector of floats! Of course this should work. If your code were float * ivec3 it would not compile. I think the main issue here is that an integer looks like "1" while a float looks like "1.", "1.0", etc. An integer is not a float, and a float is not an integer. Why intentionally add ambiguity to a new language?

Ambiguity? What ambiguity?
float * int ~= float + float + ... + float
Now tell me what ambiguity is there in adding a float an integer number of times?

The same goes for pow(float, int) :
pow(float, int) ~= float * float * ... * float

Where do you see the ambiguity?

barthold
04-14-2004, 06:47 PM
Originally posted by Korval:

As an author of the OpenGL Shading Language Specification

Ahh, so you're the one responsible for some of the nonsense in the language ;)


An integer is not a float, and a float is not an integer. Why intentionally add ambiguity to a new language?

You mean, besides the fact that virtually all programming languages...

Just to clarify, JohnK did not write the text in the second quote. Someone else did.

Here's some history:

We did discuss auto-promotion for GLSL, and decided not to push for it in this release. It is a difficult topic to get specified right, and it was decided to postpone it so that GLSL could be released in a timely manner. JohnK has already stated in a different thread that he would support some auto-promotion. The simple fact is that it has not been thought through enough yet for us to be comfortable that it can be added in a way we are not going to regret later.

Barthold

Mazy
04-15-2004, 12:39 AM
After a good night's sleep I've come to this conclusion:

Extending the spec should be OK (and handled according to the result of the poll, I guess), but changing the language's SYNTAX shouldn't be OK.

That means, right now, that no casting and no implicit type conversion should take place, but the refraction function, and even an extra function named frac(), should actually be OK (even if I don't see the point of it :) ).

Just wanted to clarify my point now that I've got a grip on it myself.

harsman
04-15-2004, 02:57 AM
So what if someone wants to extend the language with a 16-bit floating point type? Or any new type for that matter, double precision floats perhaps? Is that a change of syntax?

Actually, as a side note, a lower precision floating point data type would be nice to have since it would reduce memory footprints in certain situations. Performing computation in lower precision isn't very interesting (except for fixed function stuff like blending and filtering, but that doesn't affect glslang), but as a storage format I think it's useful.

By the way, I agree that implicit promotion of integer literals to floating point should be done. Do you 3Dlabs guys have any concrete examples of when you think this would be dangerous?

Mazy
04-15-2004, 03:20 AM
Adding types is a syntax change, since new types might add new behavior for operators. But double and half are considered reserved keywords in the original spec.

And there I also wonder what they were thinking, since you could treat them as hints unless the HW can actually use them. What problem would it be for ATI to just 'read' hvec4 as a normal vec4 in their compiler?
As long as the hint behavior is defined in the spec, and you don't depend on the number of bits of precision you actually get (or you can query the current implementation's format with glGet* for the different types), it should all be fine.
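
Under that hint reading, something like this would stay portable (a sketch; hvec4 is the NV-style name I used above, not a core type, so here a define stands in for what a compiler without 16-bit support would do internally):

// hypothetical hint type; on HW without 16-bit floats a compiler
// (or this define) just reads it as a plain vec4
#define hvec4 vec4

void main()
{
    hvec4 c = hvec4(gl_Color); // maybe 16-bit on some HW, full float elsewhere
    gl_FragColor = vec4(c);    // converting back up is always legal
}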

For implicit conversion you always get a problem with division; at least, the existing languages have different approaches to deciding whether it's an integer divide or a float divide.

C-family languages and Java use the rule that if either operand is a float, the divide will be a float divide, and Delphi has its own keyword for integer division. But I guess there can be other problems as well. The key issue here is: should you change this in just one implementation, or should you try to affect the spec, so GLSL behaves the same on all implementations?
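
The divide rules written out (a sketch of the C/Java convention in GLSL terms):

void main()
{
    int   i = 5 / 2;      // both operands int: integer divide, i == 2
    float f = 5.0 / 2.0;  // both float: f == 2.5
    // "5 / 2.0" is the case a promotion rule has to decide: C and Java
    // promote the 5 and do a float divide; strict GLSL rejects it
    gl_FragColor = vec4(f + float(i));
}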

ScottManDeath
04-15-2004, 06:25 AM
IMO auto-promotion is not really necessary. When I write C++ code I always do proper casting and use numeric suffixes, because I hate it when my compiler gives me warnings about possible loss of precision ;) , so I explicitly state what I want. This way, my compiler and I have the same understanding of what's going on. So I could live without it.

Copy & Paste of a current project.

for (int i = 0; i < m_bitmap.Width; ++i)
    for (int j = 0; j < m_bitmap.Height; ++j)
    {
        float p = (float)i / (float)m_bitmap.Width;
        BYTE c = (BYTE)(p * 255.0f);
    }

But otherwise, why not allow it and make those who think it is necessary happy?

As barthold said, it was not ready within the time frame. So at least we have some GLSL to play with at the moment...

<Off Topic>
I would prefer to have the types like vec2 ivec3 renamed to float2 and int3 because this corresponds better to my personal understanding of nice syntax ;)

kingjosh
04-15-2004, 06:49 AM
Ambiguity? What ambiguity?

One entry found for ambiguous.

Main Entry: am·big·u·ous
Pronunciation: \am-ˈbi-gyə-wəs\
Function: adjective
Etymology: Latin ambiguus, from ambigere to be undecided, from ambi- + agere to drive -- more at AGENT
2 : capable of being understood in two or more possible senses or ways

John Kessenich
04-15-2004, 07:22 AM
The danger and ambiguity in doing autopromotion are most clear when doing signature matching against overloaded functions. A general autopromotion capability in the language must address these ambiguities, preferably with a simple specification that's easy for everyone to implement correctly and doesn't create difficult-to-find defects due to the wrong function being silently called.

This suggests a more limited approach. It seems harmless enough to state that autopromotion occurs with literals in simple expressions. But, care must be taken as to where to draw the line.

This also relates to why types like half are not trivially put in the language as a hint: a complete consistent set of promotion rules for half were never laid out, so it remained reserved.
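
An illustration of the kind of silently-wrong call I mean (the functions are made up, and the behavior described assumes a hypothetical literal-promotion rule, not the current spec):

float gain(float g) { return g * 2.0; }

// With literal autopromotion, gain(3) would promote 3 to 3.0 and compile.
// If a later release of the language (or the user) then adds:
float gain(int g) { return float(g); }
// ...gain(3) silently starts calling a different function. The promotion
// rules have to either prevent that or make it a compile-time error.

void main()
{
    gl_FragColor = vec4(gain(3.0)); // written explicitly: no ambiguity
}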

JohnK

mrbill
04-15-2004, 08:41 AM
Originally posted by evanGLizr:
I, for one, think that without integer to float promotion, the inability to compile pow(3.0f, 16) or 25.0f * 4 is just lame...

pow is an excellent example of ambiguity.

pow of a negative base is undefined.

A shader writer could provide their own overloaded function to define pow with a negative base *when* the exponent is an integer:


vec4 pow( const in vec4 base, const in int exponent );
// ...
vec4 base = gl_MultiTexCoord0;
vec4 r = pow( base, 2 ); // built-in pow, or user-defined pow, which is it????

I find it lame to disambiguate. You find it lame not to autopromote.

But that's not the point. One decision leads to portable code *now*, and also leads to code *that* *is* *forward* compatible!
Thought experiment: what happens if the ARB someday adds a new built-in function genType pow( const in genType base, const in int exponent )? Ooops.

Bottom line - if it hurts when you do it (my shader ain't portable no more for no good reason) then don't do it.

-mr. bill

Adruab
04-15-2004, 09:32 AM
I am somewhat confused about the ambiguity of autopromotion as well. I mean, normally if something is ambiguous (in C++ at least), you get a compiler error. Now it MIGHT be a problem if people are providing libraries that wouldn't have been provided before (new functions through some #include-like thing). But that's always been a danger, and I think an acceptable one, given the policies in C++.

As for the difficulty of autopromoting integer variables: wouldn't this only be a problem if the hardware implemented a separate integer pipeline? As it is, integers in the spec are limited to 16 bits so they can fit inside a float, right? I think it's probably safe to assume any conversion stuff would be supported by compilers for hardware that supports crazy wacko things (insert a cast to whatever their type is).

Adruab
04-15-2004, 09:54 AM
My bad, yeah, sticking new stuff directly into the language circumvents the #include concept (and the knowledge of what you're dealing with), and seems wrong to me.

I saw some stuff earlier about using pragmas. Could extensions (including acceptable syntax extensions) be allowed only when #pragma using extension or something? Personally having the ability to have extensions in the shading language is nice.

I also like the whole idea of versioning. Is there any sort of stuff in GLSL for that? I suppose it could be added on top using something like a D3D effect file, but it would be nice to have some sort of support in there for that. What do other people think?

John Kessenich
04-15-2004, 10:06 AM
Originally posted by Adruab:
I am somewhat confused about the ambiguity of autopromotion as well. I mean, normally if something is ambiguous (in C++ at least), you get a compiler error.

What about this C++ code?

void foo(int a, int b) {}
void foo(int a, float b) {}

void bar()
{
    foo(1, 2);
    foo(1.0, 2);
    foo(1, 2.0);
}

which call goes to which function? Which ones are ambiguous? Which ones are errors? Which ones auto-promote? Which ones auto-demote? The answers are not what I would innocently expect, and all the cases I would consider ambiguous do not raise errors.

It's even more confusing when Booleans are involved.


As for the difficulty of autopromoting integer variables: wouldn't this only be a problem if the hardware implemented a separate integer pipeline?

This crosses over into what the implementation does, which is independent of what the parsing rules are. Autopromotion happens at compile time, not run time.

But I'm digressing. OpenGL is in a position not to introduce different implementations with different opinions on what the promotion rules should be. Let's do what we can to keep portability intact, and make it an error to deviate from the spec without first requesting extensions to be enabled.

JohnK

Adruab
04-15-2004, 11:29 AM
I'm still not sure why autopromotion/demotion of literals is a problem (those cases all seem pretty straightforward to me). An implementation would only try to promote/demote them if the corresponding type weren't there. With 3 types it could then become a problem (adding bool, as you say):

void foo(int a, int b);
void foo(int a, float b);
...
foo(true, false); // does what? error: ambiguous

or something like this could also cause errors:

void foo(float a, int b);
void foo(int a, float b);
...
foo(1.0, 1.0); // does what? error: ambiguous

But solving the basic problem only requires that promotion and demotion be defined in the standard. Why exactly wouldn't this work? (bool - int - float)

My question is: why not just define this relationship in the spec, and perform it automatically for literals? And perhaps throw a warning/error or something when doing it for variables (since that takes extra work, if it's even possible)?
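
A sketch of how the lattice idea could resolve things (the rule is my invention for illustration: a literal promotes upward, bool to int to float, until exactly one overload matches; none of this is legal in strict GLSL 1.00):

void foo(float a, float b) { }

void main()
{
    foo(1, 2.0);   // int literal promotes one step to float: unambiguous
    foo(true, 1);  // bool -> int -> float: two silent steps; this is
                   // exactly the line the spec would have to draw
    gl_FragColor = vec4(0.0);
}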

I definitely see where you're coming from with the extension thing, as it could get very messy, very quickly.

brcain
04-15-2004, 12:17 PM
Originally posted by John Kessenich:
What about this C++ code?

void foo(int a, int b) {}
void foo(int a, float b) {}

void bar()
{
    foo(1, 2);
    foo(1.0, 2);
    foo(1, 2.0);
}

which call goes to which function? Which ones are ambiguous? Which ones are errors? Which ones auto-promote? Which ones auto-demote? The answers are not what I would innocently expect, and all the cases I would consider ambiguous do not raise errors.
JohnK

These issues are very clearly defined for anyone that bothers to understand C++. It's not rocket science. This thread has turned into nothing but a p*ssing contest.

foo(1, 2);    // Calls 1st (exact match)
foo(1.0, 2);  // Calls 1st (1.0 converts to int)
foo(1, 2.0);  // Error: ambiguous (2.0 is double; the int and float conversions tie)
foo(1, 2.0f); // Okay, calls 2nd (float matches exactly)

Adruab
04-15-2004, 12:29 PM
Yes, indeed. However, I am not sure why that makes this a p*ssing contest. I'm just trying to figure out why basic type promotion (especially for literals) is not included in the glsl spec.

EDIT: following above strategy for spelling the word p*ssing (I'm not sure when/why this turned into a bad word).

brcain
04-15-2004, 12:39 PM
Originally posted by Adruab:
I'm just trying to figure out why basic type promotion is not included in the glsl spec.
That's the point. Why indeed? Seems sort of like a basic thing to leave out -- as was stated from the beginning.

V-man
04-15-2004, 05:01 PM
Thought experiment: what happens if the ARB someday adds a new built-in function genType pow( const in genType base, const in int exponent )? Ooops.

Solution: avoid the problem and keep the compiler simple.

powfi(float, int);
powi(int, int);
powf(float, float);
powd(double, double);
......

A lot of libs (including GL functions) do this.

And personally, I don't plan on overloading anything. Function overloading leads to confusion.

Korval
04-15-2004, 08:30 PM
These issues are very clearly defined for anyone that bothers to understand C++. It's not rocket science.

That's just the thing I don't get. These rules are defined and unambiguous. C and C++ spent a long time defining how this all works. All the glslang designers had to do was say, "Hey, we'll do what they're doing," and there would be no ambiguity.

The only time ambiguity becomes a problem is when glslang designers start trying to "improve" things that are not broken. When they start trying to impose their own narrow vision of "how to program" upon us.

Though, I feel that it is important to note again that this does not absolve nVidia of any wrongdoing. The spec is what it is (silly though it may be), and they should follow it.

bobvodka
04-15-2004, 09:10 PM
Surely it is better to start with a spec which is slightly too constrained and relax it later, once everyone is on the same page, than to try to do too much too soon?
Lack of auto-promotion totally removes any chance of ambiguity from the language.
If this proves to be a problem and it can be dealt with easily enough, then the spec can be changed, version numbers can be upped, and the rules relaxed a bit; old shaders still work, new shaders have a new way of doing things, everyone is happy.

Freebe
04-15-2004, 09:56 PM
Originally posted by bobvodka:
Surely it is better to start with a spec which is slightly too constrained and relax it later, once everyone is on the same page, than to try to do too much too soon?
Lack of auto-promotion totally removes any chance of ambiguity from the language.
If this proves to be a problem and it can be dealt with easily enough, then the spec can be changed, version numbers can be upped, and the rules relaxed a bit; old shaders still work, new shaders have a new way of doing things, everyone is happy.

My thoughts exactly. But unfortunately this way of thinking breaks down when vendors ignore the spec and relax the rules themselves, and this is a very serious matter, since it can break the whole portability idea.

Thaellin
04-16-2004, 07:36 AM
I think this debate is a bit silly.

Look at this in the context of OpenGL conformance. We have an OpenGL specification. We have conformance tests which are used to verify that an implementation applies the specification. If your implementation does things contrary to the spec, then it is non-conformant. If it is non-conformant, you don't get to call it OpenGL.

Now, we also have an ARB-approved set of shader specs. Why should it be okay to write an implementation contrary to that specification and still call it GLSL?

If your OpenGL driver implementation enabled vertex arrays by default, simply because most programmers will want to use vertex arrays anyway, then most people wouldn't notice. Those who wanted to use standard sample code would wonder why most redbook apps did not work as expected on that implementation, and why other code (written to assume the leniency) did not work on their conformant implementation.

The question seems to be asking if non-conformant behavior should be considered conformant. The answer is obviously "no." The whole point of an industry standard specification is to BE an industry standard specification.

GLSL is currently an ARB extension, rather than a core feature, because some members of the ARB were able to successfully argue that the language needed field time to gather feedback from ISVs. The correct approach is to provide a fully conformant GLSL implementation, gather the feedback, and make sure the specification is altered - preferably BEFORE inclusion into the core. Providing an intentionally non-conformant implementation would only weaken the language and subvert the authority of the ARB.

Note that if getting the language right means delaying a core spec update to the point where it no longer coincides with a major graphics marketing event, then so be it. I can't think of any time when rushing something to market (especially after having already been beaten to market... twice) was good for a product.

On a related note, I hope the eagerness to produce "2.0" does not mean that "2.1" will have to be released a week later.

Thanks,
-- Jeff

Adruab
04-16-2004, 02:15 PM
As an issue of conformance, I completely agree. If nvidia is sticking things in by default that aren't in the spec, then smack them around.

However, I totally see nvidia's point of view. I still don't know of any method of adding/enabling extensions in GLSL (doesn't mean there isn't one...). If there really is no method, then GLSL keeps their cards from reaching their full potential.

These issues confuse me. If GLSL is a hardware-oriented language (which is used as justification for decisions in its spec), then they should provide some method of extending it so that current hardware can take advantage of it too; for things like the half type. It's a simple addition for NVIDIA. At least that's what I think.

Perhaps the solution to these problems is to come up with an extension method to expose new types/commands/etc. If there were something like that, you could have a policy requiring either a fallback compatibility path, or to generate. Otherwise, the language forces vendors into its box rather than letting them use their ingenuity and donate it to OpenGL. Forward thinking is one thing, but forward forcing (or backward, for that matter) is another. Not necessarily bad... just another.

John Kessenich
04-19-2004, 10:12 AM
Originally posted by Adruab:
As an issue of conformance, I completely agree. If nvidia is sticking things in by default that aren't in the spec, then smack them around.

Yes, there is general agreement that a vendor won't change default behavior.


However, I totally see nvidia's point of view. I still don't know of any method of adding/enabling extensions in GLSL (doesn't mean there isn't one...). If there really is no method, then GLSL keeps their cards from reaching their full potential.

GLSL can be expanded. A vendor is allowed to extend the language to support their hardware.

I hope this can be distinguished, at least in spirit, from syntax/semantic changes to the language that don't expose new hardware.


These issues confuse me. If GLSL is a hardware-oriented language (which is used as justification for decisions in its spec), then they should provide some method of extending it so that current hardware can take advantage of it too; for things like the half type. It's a simple addition for NVIDIA. At least that's what I think.

Agreed. Adding 'half' to the language is an expected addition for some vendors to make. It should come along with a solid extension document that explains exactly what the changes are in the language relative to the core specification.


Perhaps the solution to these problems is to come up with an extension method to expose new types/commands/etc. If there were something like that, you could have a policy requiring either a fallback compatibility path, or to generate.

This is being done. For the case of 'half', the keyword was already reserved to make it easier.

JohnK