Portability

Hi,

As an author of the OpenGL Shading Language Specification, I’m concerned about its portability.

One thing the specification says regarding the motivation for a high-level language is:

A desire to expose the extended capability of the hardware has resulted in a vast number of extensions being written and an unfortunate consequence of this is to reduce, or even eliminate, the portability of applications, thereby undermining one of the key motivating factors for OpenGL.
Now, a portable language is being introduced to address this problem. But if it is immediately loaded with a lot of vendor-specific extensions, then we end up in the same position we started in: difficulty managing a lot of extensions and a lack of portability.

If a vendor has a new graphics hardware capability that requires an actual change to the basic semantics of this language (rare), then changing the language with an extension to support this new feature makes sense. But if the language is freely changed for reasons other than new feature support, that’s something that will hurt everyone, without anyone seeing a better picture on the screen.

I hope this community will demand that their vendors:
- provide portable versions of the language
- ensure extensions have significant value not otherwise attainable with the portable version of the language.

Thanks,
JohnK

My main gripe with this situation is not that NVidia extended the language – I see absolutely nothing wrong with that. The only condition, of course, is that they provide an actual OpenGL extension spec (e.g. GL_ARB_shading_language_101 or whatever), and that this specification clearly documents the added functionality. Maybe this is what EXT_Cg_shader does, but unfortunately the spec for that hasn’t been made available yet.

Of course I would still need a way to switch the added stuff off. For compatibility’s sake, I need to be able to specify which version of the language spec I want my shaders to compile against. We had this with the “!!ARBfp1.0”-style header in the “old” extensions; we should have it in GLSL as well (e.g. in the form of a preprocessor directive). If this directive is absent, my personal preference would be to default to the unextended specification, but I’m not fussy about this.
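For illustration, such a directive could look something like this (the name and syntax here are pure invention on my part, not anything in the spec):

#version 100   // hypothetical: compile this shader strictly against the GLSL 1.00 spec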

It’s important that vendors agree on the form of these directives, though. I don’t want to have to add one directive that tells ATI’s compiler to adhere to the GLSL version 1.0 spec, and a second one to tell NVidia’s compiler to do the same thing. Implementations must also respect the directives when they are there. We don’t need things like the ARB_fp shadow mapping disaster to happen again.

– Tom

The switch should be to switch extended stuff on, and by default it should conform to a spec. And I agree that you should only add things that make a real difference for the underlying HW. Why add a typecast like (vec3)variable when vec3(variable) already exists? It seems only confusing. But adding a refract function may be efficient, since it’s possible to let the HW calculate that instead of doing it in multiple instructions (still, it should be enabled by a #pragma extended_set_name).
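To make that concrete, something like this (the pragma name is just my suggestion above, and the hand-written version is the textbook refraction formula):

#pragma extended_set_name   // hypothetical: enable a vendor's extended built-in set

// what a built-in refract(I, N, eta) would have to do by hand,
// in multiple instructions:
vec3 myRefract(vec3 I, vec3 N, float eta)
{
    float k = 1.0 - eta * eta * (1.0 - dot(N, I) * dot(N, I));
    return (k < 0.0) ? vec3(0.0) : eta * I - (eta * dot(N, I) + sqrt(k)) * N;
}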

And while we’re at it, it would be nice to be able to specify which version of GLSL you want to compile against. When GLSL extends to _102 or similar, it would really help developers to be able to test a particular shader against a _100 compiler without having to reinstall old drivers. Maybe that shouldn’t be a part of the driver, but in that case we need an offline compiler that can test the shaders against different versions.

John Kessenich, I agree with you.

In this case, there are things in the NVIDIA extension that I like, such as the inclusion of the half type, where you can use things like:
#ifndef half
#define half float
#endif
#ifndef half2
#define half2 vec2
#define half3 vec3
#define half4 vec4
#endif

So you don’t have problems with other hw.

But other things, like the casting or the integer-to-float promotion, are things that I prefer not to have in.
Other things, like aliased varying variables, are, in my opinion, just wrong.

Each C/C++ compiler has its own specifics. But they have options to disable language extensions (for example, in Visual Studio you can find this option in Project Properties -> C/C++ -> Language) and/or to show portability issues.
In my opinion, because of the nature of GLSL, where shaders are compiled/linked on each machine, those options should be enabled by default, and there should be something like #pragma warning(disable:GLSL_portability) to disable them.
This way, if you are aware that you are using those language extensions but don’t care, you will not see those warnings in your log.
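For example, a shader that knowingly uses vendor extensions might then start like this (the pragma is just my suggestion above, and half here stands in for a vendor-specific type):

#pragma warning(disable:GLSL_portability)  // hypothetical: acknowledge the extensions in use
half intensity;  // vendor-specific type: would otherwise produce a portability warning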

Hope this helps.

I agree with all of you.
I think Mazy’s idea is very good. At the moment we only have _100, but that will change some day, and there should be a way to handle this.

However, I don’t think that by default the compiler should spit out warnings; I think it should simply not compile! A warning is still something one needs to check.
In my opinion a good compiler should
a) by default not compile a wrong shader
b) put out warnings if you use an extended version but want to test for compatibility
c) only compile without warnings if you explicitly state that you use an extended version

Jan.

It is quite obvious this thread and others are stemming from the “Tom’s demo” thread.

Originally posted by John Kessenich:

I hope this community will demand that their vendors:
- provide portable versions of the language
- ensure extensions have significant value not otherwise attainable with the portable version of the language.

Thanks,
JohnK

  1. Agreed, but the flexibility the current NV implementation offers is nothing to cry about. It’s a flexibility, a bonus, an add-on, an extra… it doesn’t need an extension at all.
  2. I especially agree with the second point. If too many extensions are released in a short time, it will get ugly. I’m assuming you mean vendor-specific extensions. Also, I find it ridiculous to have extensions that simply enable us to write code slightly differently.

As for what Tom said, I was thinking the same thing. How do you specify the version? It would be possible to have a GL function that takes a command-line-style string.

Originally posted by Cab:
But the other things like casting or the integer to float promotion are things that I prefer not having them in.

I, for one, think that without integer to float promotion, the inability to compile pow(3.0f, 16) or 25.0f * 4 is just lame, especially since more complex things like float * vec3 are allowed.

So either you extend all the operations to be defined when you mix floats and ints (a nice combinatorial explosion) or you allow promotion. I would go for the promotion.
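Spelled out in shader code (my reading of the current 1.00 rules; the rejected lines are commented out):

void main()
{
    float a = pow(3.0, 16.0);     // fine: both arguments are floats
//  float b = pow(3.0, 16);       // rejected: the int literal is never promoted
//  float c = 25.0 * 4;           // rejected for the same reason
    vec3  d = 25.0 * vec3(4.0);   // fine: float * vec3 is defined by the spec
}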

Regarding the portability issues, in my book NVIDIA’s current implementation of GLSL is outright wrong. I don’t care if it’s more functional or better; if you digress from the standard it’s no longer the standard, it’s a different beast.

I’m ok with any extensions they want to add and they can switch them on in any way they want: glHints, #pragmas or whatever, but invalid programs should not compile by default.

Originally posted by V-man:

  1. Agreed, but the flexibility the current NV implementation offers is nothing to cry about. It’s a flexibility, a bonus, an add-on, an extra… it doesn’t need an extension at all.

But it also makes the code unportable and undermines the whole point of GLSL.
Look at it this way: I write some GLSL code on my 9800XT, it compiles and works and looks pretty. I then send you the program (assuming you have a GFFX), you run it, and tada, it all works and we both go ‘ooooo, isn’t it pretty’.
Then you write a shader using the current drivers with their ‘added bonus of flexibility’; it runs and looks pretty on your machine. You then send it to me, I try to run it on my 9800XT, and it goes ‘arrgh, it’s wrong!’ at me. At which point you either have to (a) fix it or (b) write a new version for all the ATI users out there, both of which cost you time and effort, instead of just following the spec and writing proper code like you should have done.
This is why it’s a bad, bad idea to allow this stuff through by default.

evanGLizr : 2.2*2 = 4.4 or 4.0?

float * float always ends up with a float.
float * int must have its result type specified somehow.

So instead the spec says that a float must be written as 1.0… Java has the same specification, and I don’t believe anyone has made a relaxed Java compiler.

I, for one, think that without integer to float promotion, the inability to compile pow(3.0f, 16) or 25.0f * 4 is just lame, especially since more complex things like float * vec3 are allowed.

A vec3 is a 3-component vector of floats! Of course this should work. If your code were float * ivec3, it would not compile. I think the main issue here is that an integer looks like “1” while a float looks like “1.”, “1.0”, etc. An integer is not a float, and a float is not an integer. Why intentionally add ambiguity to a new language?
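A quick sketch of the distinction (assuming the spec’s no-implicit-conversion rule):

void main()
{
    float f = 2.0;
    vec3  v = f * vec3(1.0, 2.0, 3.0);  // floats times floats: fine
//  vec3  w = f * ivec3(1, 2, 3);       // ints are not floats: would not compile
}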

I back up evanGLizr’s statement. Especially if you’re used to a language like Delphi, which is smart and automatically uses the datatype that fits an operation, it is in my eyes totally annoying that you have to write an explicit 1.0 * 4.4 to get a float. It would be much better for the workflow if you could just write 1 * 4.4 and the compiler determined that this will only fit into a float and not an int.
So how smart can a compiler be if something like MyFloat := 1 * 4.4 results in an error? If the compiler knows what datatype the result of an operation has, then why not let it extend both operands?

And as far as any vendor extending GLSL via their own extensions: I don’t like it. Especially if you’re a hobbyist and you write shaders on HW that supports those extensions, you’ll get problems on other HW. And since NV has their Cg, I don’t really understand why they feel the urgent need to “fix” (in my opinion, make worse) GLSL. If they don’t like it, why don’t they just implement GLSL as it is in their drivers and do the additional stuff in their own (“oh so much better…”) shading language?

That’s what’s so great about a standard: you write it on one graphics card and it’ll work on all others (that support the standard). But if one vendor throws in new features, that standard will (mostly over time) get more and more loose and fall apart. In my eyes NVidia is going the wrong way. It’s kind of like extending a DIN A4 paper by some millimetres so that you can write a few more letters on it… sure, it’s a “feature”, but if you get used to it you’ll get problems when you have to return to the standard.

Just to sum it up : If the change introduced into glsl by a new extension isn’t a huge leap, then I’m strictly against that extension.

As an author of the OpenGL Shading Language Specification
Ahh, so you’re the one responsible for some of the nonsense in the language :wink:

An integer is not a float, and a float is not an integer. Why intentionally add ambiguity to a new language?
You mean, besides the fact that virtually all programming languages allow for promotion of numbers? The only reason there is a difference between an int and a float is that floats sometimes have problems with integral precision.

The numeric value “1” is functionally identical to the numeric value “1.0”. With the exception of scientific precision, these refer to the same number. As such, when we use them in code, we would like code to understand that there is no functional difference between a literal number times any numeric type.

In short, numbers should not have a type until they need to. Once they do, the type should be chosen in the most reasonable fashion possible, i.e. the one that allows the code to compile to the programmer’s intent. If I write “1 * float”, we all know what I’m talking about. Every C, C++, C#, Pascal, Java, BASIC (of all forms), etc. compiler knows what I’m talking about. Why can’t glslang? Is it just too obvious for you what the correct behavior ought to be?

Not allowing automatic promotion of literals is just nonsense. A case can be made for actual integer values (i.e., requiring explicit cast/constructor operations), but literals do not have, nor need, types. It is only in the context of their use that their type needs to be determined.

This doesn’t make nVidia right, btw. Slipping this functionality in is still wrong. They just have a good point about the language being poorly designed.

Originally posted by kingjosh:

A vec3 is a 3-component vector of floats! Of course this should work. If your code were float * ivec3, it would not compile. I think the main issue here is that an integer looks like “1” while a float looks like “1.”, “1.0”, etc. An integer is not a float, and a float is not an integer. Why intentionally add ambiguity to a new language?

Ambiguity? What ambiguity?
float * int ~= float + float + … + float
Now tell me, what ambiguity is there in adding a float an integer number of times?

The same goes for pow(float, int) :
pow(float, int) ~= float * float * … * float

Where do you see the ambiguity?

Originally posted by Korval:

As an author of the OpenGL Shading Language Specification
Ahh, so you’re the one responsible for some of the nonsense in the language :wink:

An integer is not a float, and a float is not an integer. Why intentionally add ambiguity to a new language?
You mean, besides the fact that virtually all programming languages…

Just to clarify, JohnK did not write the text in the second quote. Someone else did.

Here’s some history:

We did discuss auto-promotion for GLSL, and decided not to push for it in this release. It is a difficult topic to get specified right, and it was decided to postpone it so that GLSL could be released in a timely manner. JohnK has already stated in a different thread that he would support some auto-promotion. The simple fact is that it has not been thought through enough yet for us to be comfortable it can be added in a way we are not going to regret later.

Barthold

After a good night’s sleep I come to this conclusion…

Extending the spec should be OK (and handled according to the result of the poll, I guess), but changing the language’s SYNTAX shouldn’t be OK.

That means, right now, that no casting and no implicit type conversion should take place, but the refraction function, and even an extra function named frac(), actually should be OK (even if I don’t see the point of it :slight_smile: )

Just wanted to clarify my point now that I’ve got a grip on it myself.

So what if someone wants to extend the language with a 16-bit floating point type? Or any new type for that matter, double precision floats perhaps? Is that a change of syntax?

Actually, as a side note, a lower precision floating point data type would be nice to have since it would reduce memory footprints in certain situations. Performing computation in lower precision isn’t very interesting (except for fixed function stuff like blending and filtering, but that doesn’t affect glslang), but as a storage format I think it’s useful.

By the way, I agree that implicit promotion of integer literals to floating point should be done. Do you 3DLabs guys have any concrete examples of when you think this would be dangerous?

Adding types is a syntax change, since new types might add new behavior for operators; but double and half are already reserved keywords in the original spec.

And there I also wonder what they were thinking, since you can consider them as hints unless the HW can actually use them. What problem would it be for ATI to just ‘read’ hvec4 as a normal vec4 in their compiler?
As long as the hint behavior is defined in the spec, and you don’t depend on the number of bits of precision you actually get (or you can query the current implementation’s format with glGet* for the different types), it should all be fine.

For the implicit conversion you always get a problem with division; at least, the existing languages have different approaches to deciding whether it’s an integer div or a float div.

C-like languages and Java use the rule that if either of the operands is a float, the divide will be a float divide, while Delphi has its own keyword (div) for integer division. But I guess there can be other problems as well. The key issue here is: should you change this in just one implementation, or should you try to affect the spec, so GLSL behaves the same on all implementations?
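A concrete case of the division problem in shader code (assuming C-like integer division for int operands, which is how I read the spec):

void main()
{
    int   a = 5 / 2;       // integer division: a == 2
    float b = 5.0 / 2.0;   // float division: b == 2.5
//  float c = 5 / 2;       // rejected today; with promotion, is this 2.0 or 2.5?
}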

IMO auto promotion is not really necessary. When I write C++ code I always do proper casting and number postfixes because I hate it when my compiler gives me warnings about possible loss of precision :wink: , so I explicitly state what I want. This way, I and my compiler have the same understanding of what’s going on there. So I could live without it.

Copy & paste from a current project:

for(int i=0; i < m_bitmap.Width; ++i)
  for(int j=0; j < m_bitmap.Height; ++j)
  {
    float p = (float)i/(float)m_bitmap.Width;  // explicit int-to-float casts
    BYTE c = (BYTE)(p*255.0f);                 // explicit float-to-byte cast
  }

But otherwise why not allow it and make those who think it is necessary happy?

As barthold said, it was not ready in the time frame. So at least we have some GLSL to play with at the moment…

<Off Topic>
I would prefer to have types like vec2 and ivec3 renamed to float2 and int3, because this corresponds better to my personal understanding of nice syntax :wink:

Ambiguity? What ambiguity?
One entry found for ambiguous.

Main Entry: am·big·u·ous
Pronunciation: am-ˈbi-gyə-wəs
Function: adjective
Etymology: Latin ambiguus, from ambigere to be undecided, from ambi- + agere to drive – more at AGENT
2 : capable of being understood in two or more possible senses or ways

The danger and ambiguity in doing autopromotion is most clear when doing signature matching to overloaded functions. A general autopromotion capability in the language must address these ambiguities, preferably with a simple specification that’s easy for everyone to implement correctly and doesn’t create difficult-to-find defects due to the wrong function being silently called.

This suggests a more limited approach. It seems harmless enough to state that autopromotion occurs with literals in simple expressions. But, care must be taken as to where to draw the line.
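To sketch the kind of ambiguity I mean (function names invented for illustration):

// two user-defined overloads:
void blend(float weight, int mode) { }
void blend(int mode, float weight) { }

void main()
{
    blend(1, 2.0);   // unambiguous: exact match to blend(int, float)
//  blend(1, 2);     // with general autopromotion, both overloads match after
//                   // one int-to-float promotion: which one is silently called?
}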

This also relates to why types like half are not trivially put in the language as a hint: a complete, consistent set of promotion rules for half was never laid out, so it remained reserved.

JohnK