What is GL 3.0's hold up?

Note: This thread is not for complaining about not having GL 3.0 yet. It is about speculating as to positive reasons for GL 3’s postponement. If your reasoning for the delay is “the ARB can’t get anything done,” or something similarly non-constructive, we don’t need your input in this thread. Thank you.

We all know that GL 3.0 has been delayed. The ARB was, around the time of SIGGRAPH, willing to set a fairly firm date for being done by the end of September. They seemed fairly confident in this.

So, why the delay?

Well, the last update didn’t shed much light on the subject. Most of those decisions were fairly simple and didn’t involve much thought or debate. Considering the cursory nature of the changes, it doesn’t seem like these were the main points of discussion. Particularly if they’re meeting 5 times a week as they claim.

So, why the delay?

No idea. But a 3-4 month delay would be long enough to quickly hash out a fairly hefty feature. Like, say, a major overhaul of glslang. Particularly, a lower-level one. After all, if you’re going to break API compatibility, changing the shading language is certainly on the table.

And most important of all, glslang is, at present, the #1 cause of OpenGL implementation headaches. All of the other annoyances are just that, annoyances, but glslang implementations have always been spotty: from nVidia’s refusal to stop accepting non-compliant glslang code to ATi’s various failed attempts at implementing the language. If ease of implementation is a fundamental goal of GL 3.0, it wouldn’t be hard to argue that a simpler, easier-to-compile glslang would be a step in the right direction.

I really hope your speculations turn out to be true, because GLSL is indeed a major problem and a complete redesign would be desirable, to make implementations more reliable (that doesn’t mean the language should be changed entirely, I only mean every aspect of it should be reviewed and changed if necessary).

Other than GLSL I can’t think of anything else; I mean, GLSL is THE ONLY thing that was planned to persist (only slightly modified).

I don’t think the ARB is doing anything wrong, I only think they were much too optimistic in the beginning. Designing an API that’s supposed to be used for a decade or longer is something where firm release dates are just impossible. It’s done when it’s done.

Jan.

Yes, GLSL programming is hell. The syntax is all good, although I found out that array initializers are only available since version 1.2, which is bad since not all cards support GLSL 1.2.
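To illustrate (a minimal sketch, not real project code): the array constructor below is GLSL 1.2 syntax, so a driver that only accepts 1.1 rejects the shader at compile time.

```c
/* GLSL 1.20 fragment shader held as a C string in the usual OpenGL way.
 * The vec3[2](...) array initializer is only legal from #version 120 on;
 * a GLSL 1.10-only driver will fail glCompileShader() on this source.    */
static const char *frag_120_src =
    "#version 120\n"
    "void main()\n"
    "{\n"
    "    vec3 palette[2] = vec3[2](vec3(1.0, 0.0, 0.0),\n"
    "                              vec3(0.0, 0.0, 1.0));\n"
    "    gl_FragColor = vec4(palette[1], 1.0);\n"
    "}\n";
```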

In DirectX you can at least specify the shader model you want your stuff to be compiled against, and then you can be sure it will run on the specified hardware.

With GLSL you have the differences between ATI and Nvidia, and developing is much harder. Using the GLSLValidate tool from 3DLabs helps a lot in making your shaders standards-compliant (I believe ATI uses the same parser), but again, this tool does not support GLSL 1.2.

It would be nice if ATI and Nvidia would develop a standalone tool for shader compilation and linking, to make sure the shaders would actually run. I would like to be able to specify the GPU type (e.g. ATI X800 XT or Nvidia GeForce FX 5200) and then compile AND link the shaders and get the Info Log you would get when compiling on this card. This would make development much easier.
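The core of such a tool would not be much more than this rough sketch (assuming a GL 2.0 context and a loader like GLEW are already set up; everything except the standard GL calls is made up for illustration):

```c
#include <stdio.h>
#include <GL/glew.h>   /* assumed: GLEW (or another loader) exposes the GL 2.0 entry points */

/* Dump the compile log for one shader object -- the same diagnostics
 * the driver on the target card would report.                          */
static void print_shader_log(GLuint shader)
{
    char buf[4096];
    glGetShaderInfoLog(shader, sizeof(buf), NULL, buf);
    printf("compile log:\n%s\n", buf);
}

/* Compile a vertex and a fragment shader, link them, and print the logs. */
GLuint build_program(const char *vs_src, const char *fs_src)
{
    GLuint vs   = glCreateShader(GL_VERTEX_SHADER);
    GLuint fs   = glCreateShader(GL_FRAGMENT_SHADER);
    GLuint prog = glCreateProgram();
    GLint  linked = GL_FALSE;

    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);
    print_shader_log(vs);

    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);
    print_shader_log(fs);

    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glGetProgramiv(prog, GL_LINK_STATUS, &linked);
    if (!linked) {
        char buf[4096];
        glGetProgramInfoLog(prog, sizeof(buf), NULL, buf);
        printf("link log:\n%s\n", buf);
    }
    return linked ? prog : 0;
}
```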

ATI and Nvidia probably don’t have too much incentive to develop any such tool, so an alternative would be to have a distributed system. Say there is a server application running somewhere on the internet (e.g. http://www.opengl.org) which knows which GPU each connected client application has. A client application could send a compile-and-link request to the server, which picks another client with the desired GPU and hands the request off to it; that client does the work and returns the info log.
I think you get the idea.
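Purely as a hedged sketch, the request/reply could be as simple as this (every name here is invented for illustration):

```c
/* Hypothetical wire format for the relay idea: a client posts its GLSL source
 * plus the GPU it wants diagnostics from; the server forwards the request to a
 * volunteer client with that GPU, which compiles/links and returns the logs.  */
struct shader_check_request {
    char gpu_name[64];         /* e.g. "ATI X800 XT" or "GeForce FX 5200" */
    char glsl_version[8];      /* "110", "120", ...                       */
    char vertex_src[16384];    /* GLSL vertex shader source               */
    char fragment_src[16384];  /* GLSL fragment shader source             */
};

struct shader_check_reply {
    int  compiled;             /* compile status reported by the volunteer */
    int  linked;               /* link status                              */
    char info_log[8192];       /* concatenated compile + link info logs    */
};
```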

What do you guys think about this?

[ www.trenki.net | vector_math (3d math library) | software renderer ]

I recall reading a similar suggestion back when GLSL first came out. Someone then said that if we had to do this, then GLSL had failed.

I also suspect that the delay has to do with GLSL. I honestly hope an intermediate language can be agreed on, OR that there is a common front-end interface you have to use (AKA what Apple uses).

More speculating:
Perhaps they might even be considering using an intermediate representation like Direct3D’s and are hammering out details with MS?
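Just to illustrate what a lower-level target looks like (this is the existing ARB_fragment_program assembly, shown purely for flavour, not anything the ARB has announced for GL 3.0):

```c
/* A trivial ARB_fragment_program string (modulate the interpolated colour by
 * a local parameter) -- an assembly-style target is far simpler to parse and
 * validate than a C-like high-level language.                                */
static const char *arbfp_src =
    "!!ARBfp1.0\n"
    "TEMP col;\n"
    "MUL col, fragment.color, program.local[0];\n"
    "MOV result.color, col;\n"
    "END\n";
```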

Or perhaps Nvidia is pushing for Cg again? (like the last-ditch attempt before GLSL got ratified in OGL2.0)

Personally, I wouldn’t mind if Cg became the shading language standard. It already has a lot more functionality than GLSL… it would also make the transition for games developed in DirectX to OpenGL a lot easier, because the Cg and HLSL language specs are practically the same.

N.

I’ll second that idea. GL adopting Cg as the shader language can only be a good idea for that exact reason. NVidia has already provided a perfectly portable language.

Face the facts, like it or not, games and DirectX are what control the industry (look at the primary market forces).

IMHO, only NVidia is providing any good (up-to-date) OpenGL support at this time anyway. Apple traditionally lags well behind (but is getting closer with its newest efforts). Sorry to say this (I’m a Linux guy), but open-source GL drivers are a joke, and only just catching up to having shader support. Vendors outsourcing driver support to open-source projects is also a joke (for example, look at the driver support on Linux for Intel GMA). Until this trend changes, AMD/ATi’s open-source efforts are bound to provide only minimal support for anything compared to what is supported by the DirectX drivers. Also, from what I’ve been told (again, I’m a Linux guy), ATi’s vendor-driver GL support on Windows is also marginal (not even supporting vertex texture fetch on unified shader hardware?).

I don’t mean to sound like a broken record here, but the GL3.0 spec is meaningless without good driver support, which, from anyone but NVidia, is probably a few years off at best.

So if GL3.0 needs to have a better logical mapping to DirectX to make porting drivers easier (i.e. Cg instead of GLSL), then fine with me. I’d rather have something that works than something ideal that never gets implemented (or lags 4-5 years behind).

With any luck GL3.0 is delayed so that every vendor gets on board to ensure good driver support.

Maybe someone read the huge thread on precompiled binary shader objects and realized they needed to address that issue.
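To make that concrete, here is a hedged sketch of the kind of interface that thread was asking for; the hyp* entry points are invented placeholders, not real GL functions or anything the ARB has announced:

```c
#include <stdio.h>
#include <stdlib.h>

typedef unsigned int GLuint;
typedef unsigned int GLenum;
typedef int          GLsizei;

/* Invented placeholders: fetch a linked program as an opaque driver blob, and
 * feed such a blob back in on a later run (returns nonzero on success).  The
 * write side -- calling hypGetProgramBlob after a successful source build and
 * saving the result to disk -- is omitted for brevity.                        */
extern void hypGetProgramBlob(GLuint prog, GLsizei bufSize, GLsizei *length,
                              GLenum *format, void *blob);
extern int  hypProgramBlob(GLuint prog, GLenum format,
                           const void *blob, GLsizei length);

extern int compile_from_source(GLuint prog);   /* the usual GLSL path */

/* Try the on-disk cache first; if the driver rejects the blob (new driver,
 * new card, ...), fall back to compiling the GLSL source as today.           */
int load_cached_or_compile(GLuint prog, const char *cache_path)
{
    FILE *f = fopen(cache_path, "rb");
    if (f) {
        GLenum format;
        long   size;

        fread(&format, sizeof(format), 1, f);
        fseek(f, 0, SEEK_END);
        size = ftell(f) - (long)sizeof(format);
        fseek(f, (long)sizeof(format), SEEK_SET);

        void *blob = malloc((size_t)size);
        fread(blob, 1, (size_t)size, f);
        fclose(f);

        int ok = hypProgramBlob(prog, format, blob, (GLsizei)size);
        free(blob);
        if (ok)
            return 1;              /* blob accepted: no source, no compile */
    }
    return compile_from_source(prog);
}
```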

Maybe they want to release the spec when an alpha/beta driver is available from nvidia/ati. While they wait for this to happen they can continue to make sure the spec is “perfect”. The logic here being “why release a spec that no one can use?” Then the reason they’re not telling anyone they’re doing this is that they don’t want to hear demands for releasing the spec right now, as it might still change by the time an actual driver comes out.

I hope they will skip Longs Peak and just come out with Mt. Evans as OpenGL 3.0. It will take some time to port old applications to 3.0, and in the meantime all the old hardware will be obsolete.

Older hardware can use the existing API (it will be supported for a long time).

Awesome idea, hope they are doing it this way.

Or perhaps Nvidia is pushing for Cg again? (like the last-ditch attempt before GLSL got ratified in OGL2.0)

I doubt nVidia would be holding up GL 3.0 for something like that. While the most recent DX10-level cards from nVidia and ATi can now claim substantial performance improvements without truly stupid levels of cost, it is still in nVidia and ATi’s best interests to get GL 3.0 out ASAP. Since Cg wouldn’t actually solve any real problems with glslang, I don’t see them being engaged in a lengthy debate on using it as the shading language.

GL adopting Cg as the shader language can only be a good idea for that exact reason. NVidia has already provided a perfectly portable language.

Cg, as a language, has all of the complexities of glslang, so adopting it doesn’t make implementations any easier to write. Indeed, by simply swapping one complex language for a different complex language, you make it harder to even reuse existing glslang compilers without substantial parser modifications.

If they’re majorly changing glslang, then it should be for the purpose of making it easier to implement, not for “portability”.

Maybe someone read the huge thread on precompiled binary shader objects and realized they needed to address that issue.

The thing is that, as pointed out on that thread, it’s not exactly something that needs 4 months of 5-times-a-week discussion.

The sheer quantity of time is what suggests that the changes are non-trivial in terms of spec development.

I hope they will skip Longs Peak and just come out with Mt. Evans as OpenGL 3.0.

No way. Supporting old hardware is vital for GL 3.0. If 3.0 can’t support at least NV30/R300 hardware, then it will have almost all of the problems of Direct3D 10.

I have been using a GMA 900 with the free drivers for some time. The X3100 driver even has glslang support. I wouldn’t call the free drivers for Intel graphics chips a joke. They’re certainly better than Intel’s OpenGL drivers on Windows.

Off topic:
@PkK: That’s strange, I don’t recall posting that. :wink:

Somewhat on topic:
Sure, there is the problem of backward compatibility with GLSL applications when adopting the Cg shading language. But since HLSL and Cg are so similar, I believe this would also narrow the gap between Nvidia and ATI support for OpenGL shaders. They both have good, working HLSL parsers, so it shouldn’t be that hard for ATI to develop a Cg parser, right?

N.

I don’t think they parse the HLSL shaders themselves. That’s done by the D3D runtime (i.e. Microsoft).

Also, I don’t like Cg. GLSL is better (but not good enough).

Jan.

Sorry. I removed the wrong quote header when quoting your post.

Since the hardware vendors in the ARB have some insight into how future graphics hardware will work, it could be that the API will have to be changed so that it can be implemented on some future hardware they’re working on. Such changes would have to be debated extensively, since there are probably different competing approaches.

Philipp

I think they’re just dotting their i’s and crossing their t’s :wink:

Good to hear that they finally got shaders working (after X.org finally got GL2.0 support only later this year). Still, if the drivers don’t support the features (“DX10” level) and performance that Intel is advertising for the chipset, then it is a joke. Back when I purchased a motherboard with an X3?00 chip, the joke was on me, because the “programmable shader” support advertised by the intellinuxgraphics.org site simply did not exist in the driver at that time in the form of fully functional vertex and fragment shader support (and there was also no GLSL support at the time). When one purchases a toaster, one expects to be able to make bread…

As someone who hasn’t taken the time to learn GLSL but finds Cg mostly pretty usable, I’m curious what advantages you see in GLSL.

While we are speculating, I can indeed add another hope for binary-compatible “blobs” of shaders. As someone who has designed binary formats, parsers, and recompilers for both data and instruction sets in the past, I hope I understand some of the complexity of this.

If they got to the point of “Hey, an open ISA is really the only way we can do this” (shipping programs without source code), it could easily account for several months of design discussions (if it were me, I’d hire an office and put all the involved designers in it - no matter what company they came from - and not let them out until they could all sign on the “I’m happy with this!” line :slight_smile: ).

Though, I must say, I’d then also be really clear and up-front about it with the waiting developers. That we have effectively heard nothing about the reason for this delay is all the more discouraging. It could suggest internal battles. It could also suggest something worse.

I hope the delay is not caused by making a toaster that makes bread. I prefer my toasters to toast bread, not make it. :stuck_out_tongue:

Yes, I second that - I prefer toasters to toast bread, not make it. Let’s not change the role of toasters, please.