What is GL 3.0's hold up?



Korval
12-16-2007, 02:42 AM
Note: This thread is not for complaining about not having GL 3.0 yet. It is about speculating as to positive reasons for GL 3's postponement. If your reasoning for the delay is "the ARB can't get anything done," or something similarly non-constructive, we don't need your input in this thread. Thank you.

We all know that GL 3.0 has been delayed. The ARB was, around the time of SIGGRAPH, willing to set a fairly firm date for being done by the end of September. They seemed fairly confident in this.

So, why the delay?

Well, the last update didn't shed much light on the subject. Most of those decisions were fairly simple and didn't involve much thought or debate. Considering the cursory nature of the changes, it doesn't seem like these were the main points of discussion. Particularly if they're meeting 5 times a week as they claim.

So, why the delay?

No idea. But a 3-4 month delay would be long enough to quickly hash out a fairly hefty feature. Like, say, a major overhaul of glslang. Particularly, a lower-level one. After all, if you're going to break API compatibility, changing the shading language is certainly on the table.

And most important of all, glslang is, at present, the #1 cause of OpenGL implementation headaches. All of the other annoyances are annoyances, but glslang implementations have always been spotty. From nVidia's refusal to stop allowing non-compliant glslang code, to ATi's various failed attempts at implementing the language. If ease of implementation is a fundamental goal of GL 3.0, it wouldn't be hard to argue that a simpler, easier-to-compile glslang would be a step in the right direction.

Jan
12-16-2007, 06:16 AM
I really hope your speculations turn out true, because GLSL is indeed a major problem and a complete redesign would be desirable, to make implementations more reliable (that doesn't mean the language should be changed entirely, I only mean every aspect of it should be reviewed and changed if necessary).

Other than GLSL I can't think of anything else; I mean, GLSL is THE ONLY thing that was planned to persist (only slightly modified).

I don't think the ARB is doing anything wrong, I only think they were much too optimistic in the beginning. Designing an API that's supposed to be used for a decade or longer is something where firm release dates are just impossible. It's done when it's done.

Jan.

Trenki
12-16-2007, 06:27 AM
Yes, GLSL programming is hell. The syntax is all good, although I found out that array initializers are only available since version 1.2, which is bad since not all cards support GLSL 1.2.
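
Just to illustrate what I mean (a minimal made-up snippet, the shader string is the only point here), something like this only compiles on drivers that accept #version 120:

/* Minimal illustration: an array initializer needs GLSL 1.20; a driver
   limited to GLSL 1.10 will reject this shader at compile time. */
static const char *fragment_src =
    "#version 120\n"
    "const float weights[3] = float[3](0.25, 0.5, 0.25);\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = vec4(weights[0], weights[1], weights[2], 1.0);\n"
    "}\n";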

In DirectX you can at least specify the shader model you want your stuff to be compiled against, and then you can be sure it will run on the specified hardware.

With GLSL you have the differences between ATI and Nvidia, and developing is much harder. Using the GLSLValidate tool from 3DLabs helps a lot in making your shaders standards compliant (I believe ATI uses the same parser), but again, this tool does not support GLSL 1.2.

It would be nice if ATI and Nvidia would develop a standalone tool for shader compilation and linking, to make sure the shaders would actually run. I would like to be able to specify the GPU type (e.g. ATI X800 XT or Nvidia GeForce FX 5200) and then compile AND link the shaders and get the Info Log you would get when compiling on this card. This would make development much easier.
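
Just so it's clear what I mean by the Info Log: today the only way to get at it is to create a context on the actual card and read it back through the standard GL 2.0 calls, roughly like this (a rough sketch; assumes <stdio.h>, the GL 2.0 headers and a current context, error checking omitted, fragment_src is the shader string from above):

/* Rough sketch: compile/link a shader and read back the driver's info log. */
static void dump_shader_logs(void)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &fragment_src, NULL);
    glCompileShader(sh);

    GLint ok = GL_FALSE;
    char  log[4096];
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    glGetShaderInfoLog(sh, sizeof(log), NULL, log);
    printf("compile %s:\n%s\n", ok ? "ok" : "FAILED", log);

    /* Linking has its own status and info log, queried the same way. */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, sh);
    glLinkProgram(prog);
    glGetProgramInfoLog(prog, sizeof(log), NULL, log);
    printf("link log:\n%s\n", log);
}

A standalone per-GPU tool would essentially just have to wrap this and print the log.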

ATI and Nvidia probably don't have too much incentive to develop any such tool, so an alternative would be to have a distributed system. Say there is a server application running somewhere on the internet (e.g. http://www.opengl.org) which knows which GPUs the connected client applications have. A client application could send a compilation and link request to the server, which picks another client with the desired GPU and hands the request off to this client, which does the work and returns the info log.
I think you get the idea.
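
Roughly the kind of request I'm imagining (a totally hypothetical wire format, just to sketch the idea, not a real protocol):

/* Hypothetical request/response for such a distributed compile service. */
typedef struct {
    char gpu_name[64];      /* e.g. "ATI X800 XT" */
    char glsl_version[8];   /* e.g. "1.10" */
    int  num_sources;       /* vertex + fragment shader strings follow */
} CompileRequest;

typedef struct {
    int  compile_ok;
    int  link_ok;
    char info_log[4096];    /* the log produced on the volunteer's card */
} CompileReply;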

What do you guys think about this?

[ www.trenki.net (http://www.trenki.net) | vector_math (3d math library) (http://www.trenki.net/content/view/16/36/) | software renderer (http://www.trenki.net/content/view/18/38/) ]

sqrt[-1]
12-16-2007, 06:42 AM
ATI and Nvidia probably don't have too much incentive to develop any such tool, so an alternative would be to have a distributed system. Say there is a server application running somewhere on the internet (e.g. http://www.opengl.org) which knows which GPUs the connected client applications have. A client application could send a compilation and link request to the server, which picks another client with the desired GPU and hands the request off to this client, which does the work and returns the info log.
I think you get the idea.

What do you guys think about this?


I recall reading a similar suggestion back when GLSL first came out. Someone then said that if we had to do this, GLSL had failed.

I also suspect that the delay has to do with GLSL. I honestly hope an intermediate language can be reached OR there is a common front-end interface you have to use (AKA what Apple uses).

More speculating:
Perhaps they might even be considering using an intermediate language like Direct3D's and are hammering out details with MS?

Or perhaps Nvidia is pushing for Cg again? (like the last ditch attempt before GLSL got ratified in OGL2.0)

-NiCo-
12-16-2007, 07:48 AM
Personally, I wouldn't mind if Cg became the shading language standard. It already has a lot more functionality than the GLSL language.... it would also make the transition for games developed in DirectX to OpenGL a lot easier because the Cg and HLSL language specs are practically the same.

N.

Timothy Farrar
12-16-2007, 09:07 AM
Personally, I wouldn't mind if Cg became the shading language standard. It already has a lot more functionality than the GLSL language.... it would also make the transition for games developed in DirectX to OpenGL a lot easier because the Cg and HLSL language specs are practically the same.


I'll second that idea. GL adopting Cg as the shader language can only be a good idea for that exact reason. NVidia has already provided a perfectly portable language.

Face the facts, like it or not, games and DirectX are what control the industry (look at the primary market forces).

IMHO, only NVidia is providing any good (up to date) OpenGL support at this time anyway. Apple traditionally lags well behind (but is getting closer with its newest efforts). Sorry to say this (I'm a Linux guy), but open source GL drivers are a joke, and only just catching up to having shader support. Vendors outsourcing driver support to open source projects is also a joke (for example, look at the driver support on Linux for Intel GMA). Until this trend changes, AMD/ATi's open source efforts are bound to provide only minimal support for anything compared to what is supported by the DirectX drivers. Also, from what I've been told (again, I'm a Linux guy), ATi's vendor driver GL support on Windows is also marginal (not even supporting vertex texture fetch on unified shader hardware?).

I don't mean to sound like a broken record here, but the GL3.0 spec is meaningless without good driver support, which, from anyone but NVidia, is probably at best a few years off.

So if GL3.0 needs to have a better logical mapping to DirectX to make porting drivers easier (i.e. Cg instead of GLSL), then fine with me. I'd rather have something that works than something ideal which never gets implemented (or lags 4-5 years behind).

With any luck GL3.0 is delayed so that every vendor gets on board to ensure good driver support.

pudman
12-16-2007, 10:35 AM
Maybe someone read the huge thread on precompiled binary shader objects and realized they needed to address that issue.

Maybe they want to release the spec when an alpha/beta driver is available from nvidia/ati. While they wait for this to happen they can continue to make sure the spec is "perfect". The logic here being "why release a spec that no one can use?" Then the reason they're not telling anyone they're doing this is that they don't want to hear demands for releasing the spec right now, as it might still change by the time an actual driver comes out.

Thor Henning Amdahl
12-16-2007, 12:04 PM
I hope they will skip Longs Peak and just come out with Mt. Evans as OpenGL 3.0. It will take some time to port old applications to 3.0, and in the meantime all the old hardware will be obsolete.

Older hardware can use the existing API (It will be supported for a long time).

Timothy Farrar
12-16-2007, 01:16 PM
Maybe they want to release the spec when an alpha/beta driver is available from nvidia/ati. While they wait for this to happen they can continue to make sure the spec is "perfect".

Awesome idea, hope they are doing it this way.

Korval
12-16-2007, 03:04 PM
Or perhaps Nvidia is pushing for Cg again? (like the last ditch attempt before GLSL got ratified in OGL2.0)

I doubt nVidia would be holding up GL 3.0 for something like that. While the most recent DX10-level cards from nVidia and ATi can now claim substantial performance improvements without truly stupid levels of cost, it is still in nVidia and ATi's best interests to get GL 3.0 out ASAP. Since Cg wouldn't actually solve any real problems with glslang, I don't see them being engaged in a lengthy debate on using it as the shading language.


GL adopting Cg as the shader language can only be a good idea for that exact reason. NVidia has already provided a perfectly portable language.

Cg, as a language, has all of the complexities of glslang. So adopting it doesn't make implementations any easier to write. Indeed, by simply changing one complex language into a different complex language, you make it more difficult to even use prior glslang version compilers without substantial parser modifications.

If they're majorly changing glslang, then it should be for the purpose of making it easier to implement, not for "portability".


Maybe someone read the huge thread on precompiled binary shader objects and realized they needed to address that issue.

The thing is that, as pointed out on that thread, it's not exactly something that needs 4 months of 5-times-a-week discussion.

The sheer quantity of time is the thing that suggests that the changes are non-trivial in terms of spec development time.


I hope they will skip Longs Peak and just come out with Mt. Evans as OpenGL 3.0.

No way. Supporting old hardware is vital for GL 3.0. If 3.0 can't support at least NV30/R300 hardware, then it will have almost all of the problems of Direct3D 10.

PkK
12-16-2007, 03:46 PM
Vendors outsourcing driver support to open source projects is also a joke (for example, look at the driver support on Linux for Intel GMA).

I have been using a GMA 900 with the free drivers for some time. The X3100 driver even has glslang support. I wouldn't call the free drivers for intel graphics chips a joke. They're certainly better than Intel's OpenGL drivers on Windows.

-NiCo-
12-16-2007, 04:39 PM
Off topic:
@PkK: That's strange, I don't recall posting that. ;)

Somewhat on topic:
Sure, there is the problem of backward compatibility with GLSL applications when adopting the Cg shading language. But since HLSL and Cg are so similar, I believe this would also narrow the gap between Nvidia and ATI support for OpenGL shaders. They both have good working HLSL parsers, so it shouldn't be that hard for ATI to develop a Cg parser, right?

N.

Jan
12-17-2007, 03:51 AM
I don't think they parse the HLSL shaders themselves. That's done by the D3D runtime (i.e. Microsoft).

Also, i don't like Cg. GLSL is better (but not good enough).

Jan.

PkK
12-17-2007, 03:54 AM
@PkK: That's strange, I don't recall posting that.

Sorry. I removed the wrong quote header when quoting your post.

Since the hardware vendors in the ARB have some insight into how future graphics hardware will work, it could be that the API will have to be changed so that it can be implemented on some future hardware they're working on. Such changes would have to be debated extensively, since there are probably different competing approaches.

Philipp

Brolingstanz
12-17-2007, 05:29 AM
I think they're just dotting their i's and crossing their t's ;-)

Timothy Farrar
12-17-2007, 08:38 AM
I have been using a GMA 900 with the free drivers for some time. The X3100 driver even has glslang support. I wouldn't call the free drivers for intel graphics chips a joke. They're certainly better than Intel's OpenGL drivers on Windows.

Good to hear that they finally got shaders working (after X.org finally got GL 2.0 support only later this year). Still, if the drivers don't support the features ("DX10" level) and performance which Intel is advertising for the chipset, then it is a joke. Back when I purchased a motherboard with an X3?00 chip, the joke was on me, because the "programmable shader" support advertised by the intellinuxgraphics.org site simply did not exist in the driver (at that time) in the form of fully functional vertex and fragment shader support (and also, at the time, no GLSL support). When one purchases a toaster, one expects to be able to make bread...

Lindley
12-17-2007, 12:00 PM
Also, i don't like Cg. GLSL is better (but not good enough).


As someone who hasn't taken the time to learn GLSL but finds Cg mostly pretty usable, I'm curious what advantages you see in GLSL.

tamlin
12-17-2007, 12:17 PM
While we are speculating, I can indeed add another hope for binary-compatible "blobs" of shaders. As someone who has designed binary formats, parsers and recompilers for both data and instruction sets in the past, I hope I understand some of the complexity involved.

If they got to the point of "Hey, an open ISA is really the only way we can do this" (shipping programs without source code), it could easily account for several months of design discussions (if it was me, I'd hire an office and put all the involved designers in it - no matter what company they came from - and not let them out until they all could sign on the "I'm happy with this!" line :-) ).

Though, I must say I'd then also be really clear and up-front about it to the waiting developers. That we have effectively heard nothing about the reason for this delay is more discouraging. It could suggest internal battles. It could also suggest something worse.

pudman
12-17-2007, 01:02 PM
When one purchases a toaster, one expects to be able to make bread...

I hope the delay is not caused by making a toaster that makes bread. I prefer my toasters to toast bread, not make it. :p

knackered
12-17-2007, 01:08 PM
Yes I second that - I prefer toasters to toast bread, not make it. Let's not change the role of toasters please.

HenriH
12-17-2007, 02:56 PM
And most important of all, glslang is, at present, the #1 cause of OpenGL implementation headaches. All of the other annoyances are annoyances, but glslang implementations have always been spotty. From nVidia's refusal to stop allowing non-compliant glslang code, to ATi's various failed attempts at implementing the language.


As I am a new developer using glslang, I don't quite understand this. Why are there implementation headaches in GLSL? Isn't there supposed to be a 3DLabs-provided reference parser implementation, so why can't ATi, nVidia and the others just use that?

Cheers.

V-man
12-17-2007, 03:34 PM
As I am a new developer using glslang, I don't quite understand this. Why are there implementation headaches in GLSL? Isn't there supposed to be a 3DLabs-provided reference parser implementation, so why can't ATi, nVidia and the others just use that?


ATI uses it, but I imagine they have updated it since.
nVidia uses their Cg codebase instead of the 3DLabs one.

Parsing is only the first stage.
There is optimization to do in terms of temp register usage, code simplification, const register mapping. It gets more complex when you have to deal with 3 or 4 generations of GPUs.

Next you have the driver's logic to perfect.

"oh, the user called glUniform 15 times in sequence, shall we send this to the GPU now?"
"oh wait, he called glLightf, is there a shader bound, should we update that constant register?"

and the list goes on and on.
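
To make that last point concrete, here is a purely hypothetical sketch of the kind of bookkeeping I mean for the glUniform case (none of these structures or function names come from a real driver; assumes <string.h> for memcpy):

/* Purely hypothetical driver-side bookkeeping: glUniform calls only record
   the write, and nothing reaches the GPU until a draw call forces a flush.
   upload_constant_register() is made up for the example. */
typedef struct {
    int   location;
    float value[4];
} PendingUniform;

static PendingUniform pending[256];
static int num_pending = 0;

static void driver_Uniform4fv(int location, const float *v)
{
    if (num_pending < 256) {                  /* bounds check kept trivial */
        pending[num_pending].location = location;
        memcpy(pending[num_pending].value, v, 4 * sizeof(float));
        num_pending++;
    }
}

static void driver_DrawArrays(void)
{
    for (int i = 0; i < num_pending; ++i)     /* flush all queued uniforms */
        upload_constant_register(pending[i].location, pending[i].value);
    num_pending = 0;
    /* ...then decide whether glLightf and friends dirtied any constant
       register a currently bound shader actually reads, and so on. */
}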

HenriH
12-17-2007, 04:26 PM
I stand corrected. Thanks.

Brolingstanz
12-18-2007, 08:32 AM
Maybe I missed something, but one potential problem area with GLSL, once you remove the built-ins, is replacing the functionality of gl_*In[] in the geometry shader stage.

I wonder what the replacement is going to look like, or how one might go about achieving the same result with as much grace and aplomb.
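
Just to make the question concrete, my pure guess is that the position would simply become an ordinary user-declared input array fed by the vertex shader, something like this (completely made-up syntax, assuming triangles as input; only the string matters here):

/* Pure guesswork at a geometry shader without gl_PositionIn[]: the position
   arrives as a user-declared input array instead of a built-in.
   Hypothetical syntax, not from any spec. */
static const char *gs_guess =
    "varying in vec4 v_position[];   /* written by the vertex shader */\n"
    "void main()\n"
    "{\n"
    "    for (int i = 0; i < 3; ++i) {\n"
    "        gl_Position = v_position[i];\n"
    "        EmitVertex();\n"
    "    }\n"
    "    EndPrimitive();\n"
    "}\n";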

cass
12-18-2007, 10:29 AM
I am not tracking this very closely (so I could be wrong), but I don't believe GLSL issues are gating GL3 at present.

Personally, I think the question of the "ideal" model for programmable shading is still unanswered. Like Korval, I think a simpler machine abstraction (or abstractions! - multiple abstractions like SM2, SM3, etc, exist in the driver whether it tries to hide them from the developer or not) is very attractive for driver implementation. At the same time, as GPUs support more flexible and general-purpose execution environments, the desire to use more powerful source-level languages increases. Languages like (but not limited to) C++ and C#.

Ultimately, I find it hard to imagine how we won't wind up with a compiler/linker/loader environment on the GPU much like we have with conventional CPU programming. I would rather leave the arguments over high-level language definition to the Stroustrups of the world. I feel pretty sure they'll do a better job than me. :) And I don't think it's helpful for every API to have its own non-portable high-level language when portability needn't be a problem.

Just to be clear, this is my personal view.

ZbuffeR
12-18-2007, 10:48 AM
I am not tracking this very closely (so I could be wrong), but I don't believe GLSL issues are gating GL3 at present.

So, what do you believe to be the current issue(s) behind the delay?
Even an unofficial, personal view would suffice :)

cass
12-18-2007, 11:22 AM
Sorry - I really don't know. I could find out, but then I couldn't tell you. :)

Maybe Barthold will post an update.

PkK
12-18-2007, 11:26 AM
Since the ARB wants S3TC in GL3, they might be trying to negotiate an S3TC patent licence covering all OpenGL implementations (like Microsoft did with DirectX).
I hope they do (or remove S3TC from GL3).

Philipp

PkK
12-18-2007, 11:36 AM
Personally, I think the question of the "ideal" model for programmable shading is still unanswered. Like Korval, I think a simpler machine abstraction (or abstractions! - multiple abstractions like SM2, SM3, etc, exist in the driver whether it tries to hide them from the developer or not) is very attractive for driver implementation. At the same time, as GPUs support more flexible and general-purpose execution environments, the desire to use more powerful source-level languages increases. Languages like (but not limited to) C++ and C#.

Ultimately, I find it hard to imagine how we won't wind up with a compiler/linker/loader environment on the GPU much like we have with conventional CPU programming. I would rather leave the arguments over high-level language definition to the Stroustrups of the world. I feel pretty sure they'll do a better job than me. :) And I don't think it's helpful for every API to have its own non-portable high-level language when portability needn't be a problem.


C is today's portable assembler. Why not let the Stroustrups layer their high-level languages (potentially including stuff like object-orientation or templates) on top of GLSL?

I think this is more future-proof (in the sense of working well on future GPU architectures) than some lower-level language like SMwhatever that might not map so well to future hardware.

Philipp

knackered
12-18-2007, 11:48 AM
I'd support that approach.
In fact, I've already written my own C++ pre-compiler for GLSL. It's very useful for code re-use.