The ARB announced OpenGL 3.0 and GLSL 1.30 today

Khronos_webmaster
08-11-2008, 10:04 AM
Here is the link to the spec: http://www.khronos.org/opengl/

The Khronos(TM) Group announced today it has released the OpenGL(R) 3.0 specification with strong industry support to bring significant new functionality to the open, cross-platform standard for 3D graphics acceleration. OpenGL 3.0 includes GLSL(TM) 1.30, a new version of the OpenGL shading language, and provides comprehensive access to the functionality of the latest generations of programmable graphics hardware. The OpenGL working group has also defined a set of OpenGL 3.0 extensions that expose potential new functionality for the next version of OpenGL that is targeted for release in less than 12 months, and a set of extensions for OpenGL 2.1 to enable much of the new OpenGL functionality on older hardware. Additionally, OpenGL 3.0 introduces an evolutionary model to assist in streamlining the specification and to enable rapid development of the standard to address diverse markets. Finally, the OpenGL working group has announced that it is working closely with the emerging OpenCL standard to create a revolutionary pairing of compute and graphics programming capabilities. The new OpenGL 3.0 specifications are freely available at http://www.khronos.org/opengl .

The OpenGL 3.0 specification enables developers to leverage state-of-the-art graphics hardware, including many of the graphics accelerators shipped in the last two years both on Windows XP and Windows Vista as well as Mac OS and Linux. According to Dr. Jon Peddie of Jon Peddie Research, a leading graphics market analyst based in California, the installed base of graphics hardware that will support OpenGL 3.0 exceeds 60 million units. AMD, Intel and NVIDIA have made major contributions to the design of OpenGL 3.0 and today all three companies announced their intent to provide full implementations within their product families. Additionally, the OpenGL working group includes the active participation of leading developers such as Blizzard Entertainment and TransGaming that have played a vital role in ensuring that the specification meets the genuine needs of the software community. "We are very pleased to see the release of OpenGL 3.0, which includes numerous features and extensions that will help us and other ISVs bring amazing gaming content to OpenGL-based platforms," commented Gavriel State, founder & CTO of TransGaming, Inc.

OpenGL 3.0 introduces dozens of new features including:
-- Vertex Array Objects to encapsulate vertex array state for easier programming and increased throughput;
-- non-blocking access to Vertex Buffer Objects with the ability to update and flush a sub-range for enhanced performance;
-- full framebuffer object functionality including multi-sample buffers, blitting to and from framebuffer objects, rendering to one and two-channel data, and flexible mixing of buffer sizes and formats when rendering to a framebuffer object;
-- 32-bit floating-point textures and render buffers for increased precision and dynamic range in visual and computational operations;
-- conditional rendering based on occlusion queries for increased performance;
-- compact half-float vertex and pixel data to save memory and bandwidth;
-- transform feedback to capture geometry data after vertex transformations into a buffer object to drive additional compute and rendering passes;
-- four new texture compression schemes for one and two channel textures providing a factor of 2-to-1 storage savings over uncompressed data;
-- rendering and blending into sRGB framebuffers to enable faithful color reproduction for OpenGL applications without adjusting the monitor's gamma correction;
-- texture arrays to provide efficient indexed access into a set of textures;
-- 32-bit floating-point depth buffer support.
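
As a rough illustration only (not part of the announcement; it assumes a 3.0 context with the new entry points loaded through your usual extension loader, and all names here are invented), two of the items above - vertex array objects and mapped buffer sub-ranges - might be used like this:

#include <string.h>
#include <GL/gl.h>   /* plus an extension-loader header for the 3.0 entry points */

static GLuint upload_positions(const float *positions, GLsizeiptr bytes)
{
    GLuint vao, vbo;

    glGenVertexArrays(1, &vao);              /* VAO captures vertex array state */
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_STREAM_DRAW);

    /* non-blocking access to a sub-range of the VBO, flushed explicitly */
    void *dst = glMapBufferRange(GL_ARRAY_BUFFER, 0, bytes,
                                 GL_MAP_WRITE_BIT | GL_MAP_FLUSH_EXPLICIT_BIT);
    memcpy(dst, positions, (size_t)bytes);
    glFlushMappedBufferRange(GL_ARRAY_BUFFER, 0, bytes);
    glUnmapBuffer(GL_ARRAY_BUFFER);

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glEnableVertexAttribArray(0);
    return vao;
}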

The new version of the OpenGL Shading Language, GLSL 1.30, provides front-to-back native integer operations including full integer-based texturing, integer inputs and outputs for vertex and fragment shaders and a full set of integer bitwise operators. It also improves compatibility with OpenGL ES, adds new interpolation modes, includes new forms of explicit control over texturing operations, provides additional built-in functions for manipulating floating-point numbers and introduces switch statements for enhanced flow control within shader programs.
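
For illustration only (a sketch rather than spec text, with made-up uniform and variable names), a GLSL 1.30 fragment shader exercising integer texturing, the new in/out qualifiers and a switch statement could look like this, passed to glShaderSource as a C string:

static const char *frag_src_130 =
    "#version 130\n"
    "uniform usampler2D material_ids;   // unsigned-integer texture\n"
    "in vec2 uv;                        // 'in' replaces 'varying'\n"
    "out vec4 frag_color;               // user-declared fragment output\n"
    "void main() {\n"
    "    int id = int(texture(material_ids, uv).r);\n"
    "    switch (id) {                  // switch is new in 1.30\n"
    "    case 0:  frag_color = vec4(1.0, 0.0, 0.0, 1.0); break;\n"
    "    case 1:  frag_color = vec4(0.0, 1.0, 0.0, 1.0); break;\n"
    "    default: frag_color = vec4(0.0, 0.0, 0.0, 1.0);\n"
    "    }\n"
    "}\n";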

The OpenGL working group has also released a set of extensions to OpenGL 3.0 that can be immediately used by developers and, after industry feedback, will potentially be included in the next generation of OpenGL targeted for release in less than 12 months. These extensions include geometry shaders, further instancing support, and texture buffer objects.

Khronos today also released a number of extensions to OpenGL 2.1 which enable some of the new features in OpenGL 3.0 to be used on older generations of hardware. These extensions include enhanced VBOs, full framebuffer object functionality, half-float vertices, compressed textures, vertex array objects and sRGB framebuffers.

Additionally, OpenGL 3.0 defines an evolutionary process for OpenGL that will accelerate market-driven updates to the specification. The new OpenGL API supports the future creation of profiles to enable products to support specific market needs while not burdening every implementation with unnecessary costs. To avoid fragmentation, the core OpenGL specification will contain all defined functionality in an architecturally coherent whole, with profiles tightly specifying segment-relevant subsets. OpenGL 3.0 also introduces a deprecation model to enable the API to be streamlined while providing full visibility to the application developer community, enabling the API to be optimized for current and future 3D graphics architectures.

Finally, the OpenGL working group is working closely with the newly announced OpenCL working group at Khronos to define full interoperability between the two open standards. OpenCL is an emerging royalty-free standard focused on programming the emerging intersection of GPU and multi-core CPU compute through a C-based language for heterogeneous data and task parallel computing. The two APIs together will provide a powerful open standards-based visual computing platform with OpenCL's general purpose compute capabilities intimately combined with the full power of OpenGL.

"OpenGL 3.0 is a significant evolutionary step that integrates new functionality to ensure that OpenGL is a truly state-of-the-art graphics API while supporting a broad swathe of existing hardware," said Barthold Lichtenbelt, chair of the OpenGL working group at Khronos. "Just as importantly, OpenGL 3.0 sets the stage for a revolution to come - we now have the roadmap machinery and momentum in place to rapidly and reliably develop OpenGL - and are working closely with OpenCL to ensure that OpenGL plays a pivotal role in the ongoing revolution in programmable visual computing."

More details on OpenGL 3.0 will be discussed at the OpenGL "Birds of a Feather" meeting at SIGGRAPH in Los Angeles at 6PM on Wednesday, August 13th at the Wilshire Grand Hotel. More details are available on the Khronos SIGGRAPH page: http://www.khronos.org/siggraph2008/

elFarto
08-11-2008, 10:25 AM
The OpenGL 3.0 specification enables developers to leverage state-of-the-art graphics hardware...
...with a state-of-the-ark API.

Regards
elFarto

Chris Lux
08-11-2008, 10:26 AM
don't [censored] care anymore... don't [censored] care!

Eckos
08-11-2008, 10:32 AM
We are very pleased to see the release of OpenGL 3.0, which includes numerous features and extensions that will help us and other ISVs bring amazing gaming content to OpenGL-based platforms," commented Gavriel State, founder & CTO of TransGaming, Inc.


Doesn't he mean ashamed?

Toni
08-11-2008, 10:34 AM
I can hear the laughing in Seattle from here (and I'm in Europe) :(

Zengar
08-11-2008, 10:40 AM
Me neither. The ARB showed that it

a) doesn't care about the community
b) cannot do anything
c) doesn't care about the API
d) has no forward thinking

They only keep making GL more and more bloated and ugly, despite the great ideas that were proposed. As a result: drivers become more complicated (look at the vertex array object, it is non-trivial logic for a driver developer) = no quality improvement from ATI or Intel; the API is even more irritating; GL 3.0 is only usable on DX10-class hardware = no benefit at all, for anybody. The deprecation system is nice, but they should have just removed that stuff altogether.

Hell, I won't be really disappointed if they just take ES 2.0 and make it a desktop API; it has some right design choices. But this GL 3.0 spec is a mess of GL 1.x and ES ideas. Why the hell a deprecation model? Just kill that stuff already, no one cares! And CAD people should just use GL 1.5 if they are unable to write a normal renderer. Still, I would like to hear what they have to say at SIGGRAPH.

Khronos_webmaster
08-11-2008, 10:50 AM
Hi all,

A friendly reminder to please watch our language when posting.

Thanks.

Brolingstanz
08-11-2008, 10:50 AM
I see santyhamer's woot and raise it a yipee!

Lookey here:
- wglCreateContextAttribsARB with optional debug context
- MapBufferRange/FlushMappedBufferRange
- VAO
- TransformFeedback
- MS FBO/Blit with mixed formats
- A slew of required texture formats (e.g. D24S8)
- Cleaner GLSL attribute specification (in/out)
- Deprecation of lots of GLSL builtin state
- No more mixed FF/programmable pipes
- Deprecation model goodness

This strikes me as a pretty good compromise, somewhere in between Longs Peak and Mount Evans.

Now if they deprecate/remove the last of the cruftier vestiges of GL2 in the next version and bring in all of SM4 proper to the core...

AlgorithmX2
08-11-2008, 10:52 AM
I, like many others who have already posted (and those who will), am severely disappointed, if not angry or even infuriated.

We waited numerous months with no word of anything as we hoped that they would deliver on their word and give us a clean API.

All we got was a list of 'features not to use anymore'

Truly, a disappointment.

knackered
08-11-2008, 10:52 AM
I swear to god, this would be the ideal time for nvidia to release their own rendering API, based on the promised object model (call it nvgl). I would start using it immediately. They did it with Cg, now do it with the actual API with Cg as its shader language.
In reality, the best evolutionary step for the ARB to have taken would have been to introduce a new API with the object model, implemented on top of the existing API, and encourage the slower members of the GL community to use it in preference, while still giving them the ability to use the existing API directly.
I just see what they've done with GL3 as a waste of everybody's time - IHVs and CAD developers alike.

Eckos
08-11-2008, 11:00 AM
I swear to god, this would be the ideal time for nvidia to release their own rendering API, based on the promised object model (call it nvgl). I would start using it immediately. They did it with Cg, now do it with the actual API with Cg as its shader language.

Yeah, that would be awesome. If they did, I hope they release a C++ version of it, because OpenGL needs a real C++ wrapper :(

Why didn't they just move the stuff they promised us in OpenGL 3.0 into a separate dll/lib while still keeping the other junk for those stupid CAD developers?

Dan Bartlett
08-11-2008, 11:02 AM
Is bugzilla going to have OpenGL 3.0 as an option to submit errors for soon?
In Table N.1, the specification lists a couple of name changes:

MAX_CLIP_PLANES to MAX_CLIP_DISTANCES
CLIP_PLANEi to CLIP_DISTANCEi

However, these haven't been changed in the document, and surely these are separate things anyway, and need updated documentation?

Also, Appendix O: ARB Extensions doesn't include the new extensions listed in registry.

Fitz
08-11-2008, 11:02 AM
And so OpenGL dies a slow death after all.

0r@ngE
08-11-2008, 11:28 AM
Yes, OpenGL dies, and this is the ARB's mistake!
Does that mean I need D3D for the future?
Doh! That's why Carmack looks at D3D...

spooky_paul
08-11-2008, 11:28 AM
Aye.

So, after all, OpenGL will become the API for the uber (lazy)cool professional CAD developers, while DirectX becomes the game developer's (only) choice.

In the end the ARB struck a big blow against non-Windows development (as said before, the Mac/Linux dude suffers the most).

The good thing is that I started porting my rendering system to D3D some time ago, so I haven't wasted the time spent developing it.

Honestly, I wish it had been wasted and OpenGL 3.0 had delivered...

Toni
08-11-2008, 11:29 AM
Hmm... is it a mistake, or does the spec say that the whole ALPHA_TEST thingie is deprecated? :/ ... What's the alternative?

Zengar
08-11-2008, 11:31 AM
Hmm... is it a mistake, or does the spec say that the whole ALPHA_TEST thingie is deprecated? :/ ... What's the alternative?

The alternative is killing fragments in the fragment shader.
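
A minimal sketch of what that looks like (names invented; roughly the shader-side equivalent of glAlphaFunc(GL_GREATER, 0.5)):

static const char *alpha_tested_frag =
    "#version 130\n"
    "uniform sampler2D tex;\n"
    "in vec2 uv;\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    vec4 c = texture(tex, uv);\n"
    "    if (c.a <= 0.5)\n"
    "        discard;   // replaces the fixed-function alpha test\n"
    "    frag_color = c;\n"
    "}\n";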

Jan
08-11-2008, 11:31 AM
"discard" in the fragment-shader. Or discard OpenGL altogether.

Fitz
08-11-2008, 11:32 AM
DX10/11 are looking mighty attractive right about now; as soon as I get a Vista machine it's goodbye OGL. The OGL API needed a complete rewrite and it did not happen, and no amount of extensions is going to fix that.

CrazyButcher
08-11-2008, 11:38 AM
I second the suggestion of getting a GL spec without all the deprecated stuff. It surely looks really bad and überbloated the way it is now - completely against the "cleanup" mentality.

And even though the new object model isn't there, which does feel like a punch for sure after all the previous discussion around it, you could at least try to present the "future" better. At the moment it really looks like the community was completely ignored. Even with the new object model missing, things could have been presented a bit nicer and cleaner, instead of stepping on everyone's shoes, almost seemingly on purpose. Yes, the legacy folks would be scared if only the lean & mean version existed, but you could have always made two PDFs in that year...

Which raises the question: what took so long to decide about "deprecating" stuff and adding a few new extensions, which could have been 2.2 or whatever? It's not like deprecating first, then removing is bad; it surely was the only way with the legacy that GL has. But that list could have been given out a year ago, with GL ES already similar... Really, what exactly took so long?

At the moment it just looks so PR-marketing-like, just adding "more", handing out a monster document of a giant API, when after all "developers" are the consumers of this product, and not some kid who just installed quad-SLI cards and has Vista anyway ;)

So please, please get the cleaned-up spec out soon.

bobvodka
08-11-2008, 11:45 AM
You do all realise 'deprecated' doesn't mean 'removed'; it just means 'expect this to vanish at some point', a fact which has worked so well in the past and hasn't at all led to bloated APIs... like, say, with Java.

While something is in, you HAVE to support it, because while it exists, in any form, people WILL use it.

Eckos
08-11-2008, 11:45 AM
So how long until we *DO* get the stuff we were promised? I had a feeling we'd be let down :(

Kazade
08-11-2008, 11:47 AM
I've been looking at the spec, and as long as you use a full GL3 context it looks like you do get a very streamlined API; there is a hell of a lot of fixed-function stuff that you can't use, in favour of shaders.

OK, so there is no object based API, which I am gutted about, but is the spec really that bad? At least now they can deprecate and eventually remove stuff.

Every way I look at it, this is a massive improvement over 2. What I'm annoyed about is the way the ARB hasn't told us ANYTHING for a year and then didn't deliver the stuff they did tell us about. But I'm annoyed with the process, not the final spec.

Kazade
08-11-2008, 11:50 AM
You do all realise 'deprecated' doesn't mean 'removed'

From the spec: "it is possible to create an OpenGL 3.0 context which does not support deprecated features." So if you want it to, it can remove the deprecated features.
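
A sketch of how such a context is requested through WGL_ARB_create_context (the attribute names come from that extension spec; the entry point itself has to be fetched with wglGetProcAddress first, and error handling is omitted):

#include <windows.h>
/* WGL_CONTEXT_* tokens and the wglCreateContextAttribsARB pointer type come
   from the WGL_ARB_create_context extension header. */

static HGLRC create_forward_compatible_context(HDC hdc)
{
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        /* deprecated features are not available in this context */
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0
    };
    return wglCreateContextAttribsARB(hdc, NULL, attribs);
}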

Leadwerks
08-11-2008, 11:51 AM
Are the SM 4.0 features (NVIDIA GeForce 8+) standard in OpenGL 3, or do I still have to write a million different render paths?

Dan Bartlett
08-11-2008, 12:03 PM
Any chance of a parallel specification document that has all the deprecated features removed, so we can see what is left over?

Jan
08-11-2008, 12:05 PM
Even if you can create a context that does not support the deprecated features, that doesn't make drivers less complicated and therefore less error-prone. That was the very idea of the whole rewrite (which we didn't get, of course, so the idea of less error-prone drivers was ditched too, I guess).

martinsm
08-11-2008, 12:09 PM
Are the SM 4.0 features (NVIDIA GeForce 8+) standard in OpenGL 3, or do I still have to write a million different render paths?
No, they are not.
It seems that they are "reserved" as the usual GL_ARB_xx extensions: http://opengl.org/registry/
see item 47, GL_ARB_geometry_shader4, and down.

Leadwerks
08-11-2008, 12:10 PM
[censored] that is stupid.

Time to call Intel and see what they are planning with Larrabee.

I knew the news would be bad. You don't have someone cut off communication like they did and then get good news afterwards.

Korval
08-11-2008, 12:20 PM
Time to call Intel and see what they are planning with Larrabee.

You keep saying that, but Larrabee will still be using the standard APIs for graphics work.

Leadwerks
08-11-2008, 12:32 PM
They are implementing them but I think the idea is you can write your own as well. Otherwise there would be absolutely no point.

HenriH
08-11-2008, 12:48 PM
Why didn't they just move the stuff they promised us in OpenGL 3.0 into a separate dll/lib while still keeping the other junk for those stupid CAD developers?

You seem to be misunderstanding: Khronos/the ARB does not make implementations, only specifications. Putting OpenGL into separate DLLs is an implementation detail and beyond the scope of the ARB.

knackered
08-11-2008, 12:50 PM
Hi all, A friendly reminder to please watch our language when posting.
Why don't you just run us over with a tank if our protestations are so unpalatable?
The ARB are looking more and more like a communist dictatorship today.

HenriH
08-11-2008, 12:54 PM
As someone who has been watching the progress of OpenGL for many years, even though I haven't done any serious 3D graphics programming for some time now, OpenGL 3.0 is a bit disappointing for me too.

As has been said in these threads before, OpenGL is in need of a complete clean-up of the API. I was looking forward to the new object system and to fixing a lot of ancient legacy, like the plan seemed to be in the beginning, but alas, we did not get that. The hope is that upcoming versions will eventually fix these problems, but OpenGL is really lagging behind Direct3D now.

HenriH
08-11-2008, 12:56 PM
Hi all, A friendly reminder to please watch our language when posting.
Why don't you just run us over with a tank if our protestations are so unpalatable?
The ARB are looking more and more like a communist dictatorship today.

Replies like these are childish and unnecessary. Please try to master your emotions.

Leadwerks
08-11-2008, 01:00 PM
I guess the crux of all of this is that Khronos has no interest in providing an API for game graphics. I mean, they had John Carmack onboard at one point, and then I remember he moved his research to DirectX, at what must have been the time this new (mis)direction came to be. If they were interested in games they would have done whatever they could have to keep him interested. John has always been a fan of MS alternatives, and it's not like lack of interest on his part kept him away; he would use OpenGL 3 if it wasn't detrimental to his company.

So they are saying "sure, you can use OpenGL for games, if you want" but their real interests lie elsewhere. If that is what they want to do, that's fine, but what you must understand is all the real-time developers here do not mean anything to them. They have no aspirations of high-end realtime graphics, so we should not expect it.

Chris Lux
08-11-2008, 01:02 PM
I can remember that we were told the ARB members were meeting 5 times per week (face to face or on the phone).

So what were they talking about?

HenriH
08-11-2008, 01:05 PM
I guess the crux of all of this is that Khronos has no interest in providing an API for game graphics. I mean, they had John Carmack onboard at one point, and then I remember he moved his research to DirectX, at what must have been the time this new (mis)direction came to be. If they were interested in games they would have done whatever they could have to keep him interested. John has always been a fan of MS alternatives, and it's not like lack of interest on his part kept him away; he would use OpenGL 3 if it wasn't detrimental to his company.

So they are saying "sure, you can use OpenGL for games, if you want" but their real interests lie elsewhere. If that is what they want to do, that's fine, but what you must understand is all the real-time developers here do not mean anything to them. They have no aspirations of high-end realtime graphics, so we should not expect it.


The Khronos OpenGL ARB is a community of member parties with different interests. Sometimes those interests collide. This time, the outcome of those colliding interests was not to radically push the API forward, unlike the original plan. From a game developer's point of view, a pity.

cass
08-11-2008, 01:07 PM
You keep saying that, but Larrabee will still be using the standard APIs for graphics work.

My understanding is that Larrabee will have a software OpenGL implementation on top of their native interface, but you could program to the native interface or use a 3rd party library that is just as 'to the metal' as the OpenGL implementation.

Chris Lux
08-11-2008, 01:08 PM
Even from a researcher's point of view, a pity... such a big pity to have to look out for the only valid alternative.

RenderBuffer
08-11-2008, 01:11 PM
The Siggraph class "OpenGL: What's Coming Down the Graphics Pipeline" on Wednesday should be interesting. Anyone going?

Jan
08-11-2008, 01:18 PM
All this talk about Larrabee, CUDA/CTM and being able to write/extend APIs as you like is such nonsense!

The fact is Larrabee will be out sometime 2009/2010. Another fact is that writing your own software-rasterizer is a lot of work. And even if others write that rasterizer, they will charge you.

All that does not solve the problem, that RIGHT NOW YOU HAVE NOTHING !

Debating about what you MIGHT be doing 2 years from now doesn't solve the problem that RIGHT NOW the "alternative", OpenGL 3, sucks. So either you live with that, or you switch to D3D, but there are no other options.

If I am going to wait for Larrabee, I could also just wait for GL4 (which will be GL3 + some extensions promoted to core).

Jan.

Eddy Luten
08-11-2008, 01:33 PM
Jan, the good thing about it is that Larrabee is basically an expansion card stuffed with x86s, so you can already start programming a core without having to wait for an API. I have high expectations for this hardware but I am not holding my breath as I did with GL3.

Lindley
08-11-2008, 01:42 PM
...I was really looking forward to an object-based API...

HenriH
08-11-2008, 01:48 PM
The fact is Larrabee will be out sometime 2009/2010. Another fact is that writing your own software-rasterizer is a lot of work. And even if others write that rasterizer, they will charge you.

I look forward to an open-sourced software rasterizer for Larrabee.

dletozeun
08-11-2008, 01:51 PM
<off topic> Talking about clean-up, the OpenGL site should also be cleaned up. It has always been a mess IMO. </off topic>

knackered
08-11-2008, 02:13 PM
Replies like these are childish and unnecessary. Please try to master your emotions.

Oh well, the Finns have never exactly been renowned for their sense of humour.

Rob Barris
08-11-2008, 03:26 PM
I see santyhamer's woot and raise it a yipee!
Now if they deprecate/remove the last of the cruftier vestiges of GL2 in the next version and bring in all of SM4 proper to the core...


On the topic of "SM4 proper", do you have some sense of what you think is missing?

glDan
08-11-2008, 03:41 PM
You really have to wonder what went on behind closed doors, while they were discussing the matter at hand.

There really isn't any other choice but OpenGL for people using non-Windows machines.
The Mac, Linux, and PS3(?)/Wii(?) people are still left using the same old API that they have been using before, and they still have to jump through hoops to get all functions working on all hardware.

Or in other words, this whole 'announcement' was nothing more than a big sign that read: "Nothing to see, continue on with what you did in the past. Hope it works, see ya next year!"

It truly is a shame that they couldn't be up front with the community about what was going to happen, or should I say what wasn't going to happen. :(

skynet
08-11-2008, 03:47 PM
Has everybody seen this yet?
http://www.opengl.org/registry/specs/EXT/direct_state_access.txt

It looks like this is (partly) that object model we tried to get. But I really have to hold back my tears when reading a new function name like this:

NamedRenderbufferStorageMultisampleCoverageEXT(...)

Why did they pull this off? Why do it that way? Why try to further extend a monstrosity (where you have to think of all sorts of side effects you might introduce into the already existing extension environment) instead of just creating a new, clean, elegant and modern API?

NOBODY would have complained if there were just a new opengl3x.dll that lived happily beside the legacy opengl2x.dll. And while 2.x would still be maintained but not further extended, everyone would have had time to adopt GL3.

FBO is 3 years old and still not working 100% on either NVIDIA or ATI cards. How long will it take until EXT_direct_state_access is working? This whole stunt is beyond comparison.

Korval
08-11-2008, 03:48 PM
do you have some sense of what you think is missing?

Are you serious? What is missing compared to SM4? You need to ask someone what that is, when you can just list the features side by side and cross off the ones they both have?

With people like this on the ARB, it's no wonder GL "3.0" turned out this way.

Brolingstanz
08-11-2008, 03:49 PM
In particular I had geometry shaders in mind when I wrote that, Rob.

Rob Barris
08-11-2008, 03:57 PM
In reading the direct state access extension, it seems to me that the majority of calls simply act as if there was a private binding point that does not affect drawing state - they look up the object and they set the state, without affecting drawing state or binding points.

As with any paper spec, seeing an implementation running can provide a good existence proof of its efficacy. If one were to become available soon, that would be an important signal that it's doable and low risk.

(that phrasing "The EXT_direct_state_access driver is +2.5% larger." could lead one to believe that it already exists in some form).
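
A sketch of the behaviour described above (the texture name is assumed to exist already, error handling omitted): setting a parameter by naming the object directly, versus the classic save/bind/set/restore dance the selector model requires.

static void set_min_filter(GLuint tex)
{
    /* classic selector style: save the binding, bind, set, restore */
    GLint prev;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &prev);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, (GLuint)prev);

    /* EXT_direct_state_access style: address the object by name, no binding */
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}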

knackered
08-11-2008, 04:42 PM
Has everybody seen this yet?
http://www.opengl.org/registry/specs/EXT/direct_state_access.txt
funniest thing i've seen all day.
every function that relies on a client selector gets a new entry point.
and to think, they didn't want to change the api.

MZ
08-11-2008, 04:56 PM
Hello Slashdot!

*waves hand*

Rob Barris
08-11-2008, 04:57 PM
The approach in the DSA extension doesn't invalidate any existing code.

The approach in Longs Peak invalidated all of it.

Groovounet
08-11-2008, 05:05 PM
I first had a look at GLSL 1.30 and thought: "Great, the SM4 features are there too!" Then I arrived at the texture functions and thought, "What the heck is going on?"

There are no words to tell you how disappointed I am by this "OpenGL 2.2" specification. No, actually, as an OpenGL 2.2 specification it would have been great; as an OpenGL 3.0 specification, it's just a shame. You made us wait two years with so many great concepts for OpenGL 3, only to end up with this...? Who did this? Was it for marketing reasons? I can't see a clue on this; it doesn't make any sense.

Even if OpenGL 3 had not been at the level of D3D10 in terms of features, I'm sure we would have waited for it and started with what we got.

In some ways, I expect NVIDIA to create its own API, but I just hope they are not behind this mess... I really hope that's not the case, but some will pay for this for sure.

Anyway, I'm fed up with OpenGL now; it's time to go to something new, I guess.

The greatest things, and we are speaking about them here, have always been done by people... maybe someone needs to hear them.

Congratulations!

Khronos_webmaster
08-11-2008, 05:13 PM
We have resolved our bottleneck and should be good to go with the rest of Slashdot.

Korval
08-11-2008, 05:16 PM
The approach in the DSA extension doesn't invalidate any existing code.

The approach in Longs Peak invalidated all of it.

Yes, but Longs Peak would have been implemented by everyone. This extension will not be implemented by ATi or Intel until it becomes core.

skynet
08-11-2008, 05:30 PM
The approach in the DSA extension doesn't invalidate any existing code.

The approach in Longs Peak invalidated all of it.

And this is exactly the wrong way to think about it. If I had a 10-year-old application, of course I'd want it to continue working. That is what the old opengl2.x DLL would have been for. Install the new GL driver, and it would still work. NOTHING is invalidated.

Now, if I decided to switch to GL3, I would NEVER expect to just link against opengl3.lib without any work. Instead, I'd know that I'd probably have to rewrite the whole thing, because that old application is not _architected_ to meet the needs of a modern gfx card; you need a clean redesign. I'd have to throw out display lists and switch to a VBO-based design. I'd have to throw out fixed-function stuff and write shaders instead. I must not use that SELECTION/FEEDBACK stuff anymore... etc. Even those mighty CAD companies WILL have to rewrite their stuff some day. And I _bet_ they'll use DX that day.
It would have been more honest to draw a line and introduce an all-new API instead of what we got now.

Michael Gold
08-11-2008, 06:28 PM
The approach in the DSA extension doesn't invalidate any existing code.

The approach in Longs Peak invalidated all of it.


To be fair, LP was "opt-in" so no existing code would break.

foo bar
08-11-2008, 06:58 PM
[censored] [censored] [censored] [censored] [censored].

Korval
08-11-2008, 07:10 PM
[censored] [censored] [censored] [censored] [censored].

Don't hold it back, Foo bar, tell us how you really feel ;)

scratt
08-11-2008, 07:56 PM
So how is this going to affect OpenCL?

Perhaps I am missing something but isn't CL going to now have to be interfaced to a pretty old API, instead of a bright new shiny one - which could then be integrated at inception, rather than patched to an old system?

Or is the buzz over OpenCL, and it stealing GPU cycles for other stuff, now also causing Khronos to dilute its attention to the point it breaks both?

Or is CL and 3.0 going to be the next big milestone?

barthold
08-11-2008, 08:12 PM
What happened to Longs Peak?

In January 2008 the ARB decided to change directions. At that point it had become clear that doing Longs Peak, although a great effort, wasn't going to happen. We ran into details that we couldn't resolve cleanly in a timely manner. For example, state objects. The idea there is that all state is immutable. But when we were deciding where to put some of the sample ops state, we ran into issues. If the alpha test is immutable, is the alpha ref value also? If we do so, what does this mean to a developer? How many (100s?) of objects does a developer need to manage? Should we split sample ops state into more than one object? Those kind of issues were taking a lot of time to decide.

Furthermore, the "opt in" method in Longs Peak to move an existing application forward has its pros and cons. The model of creating another context to write Longs Peak code in is very clean. It'll work great for anyone who doesn't have a large code base that they want to move forward incrementally. I suspect that that is most of the developers that are active in this forum. However, there are a class of developers for which this would have been a, potentially very large, burden. This clearly is a controversial topic, and has its share of proponents and opponents.

While we were discussing this, the clock didn't stop ticking. The OpenGL API *has to* provide access to the latest graphics hardware features. OpenGL wasn't doing that anymore in a timely manner. OpenGL was behind in features. All graphics hardware vendors have been shipping hardware with many more features available than OpenGL was exposing. Yes, vendor specific extensions were and are available to fill the gap, but that is not the same as having a core API including those new features. An API that does not expose hardware capabilities is a dead API.

Thus, prioritization was needed, and we made several decisions.

1) We set a goal of exposing hardware functionality of the latest generations of hardware by this Siggraph. Hence, the OpenGL 3.0 and GLSL 1.30 API you guys all seem to love ;)

2) We decided on a formal mechanism to remove functionality from the API. We fully realize that the existing API has been around for a long time, has cruft and is inconsistent with its treatment of objects (how many object models are in the OpenGL 3.0 spec? You count). In its shortest form, removing functionality is a two-step process. First, functionality will be marked "deprecated" in the specification. A long list of functionality is already marked deprecated in the OpenGL 3.0 spec. Second, a future revision of the core spec will actually remove the deprecated functionality. After that, the ARB has options. It can decide to do a third step, and fold some of the removed functionality into a profile. Profiles are optional to implement (more below) and its functionality might still be very important to a sub-set of the OpenGL market. Note that we also decided that new functionality does not have to, and will likely not work with, deprecated functionality. That will make the spec easier to write, read and understand, and drivers easier to implement.

3) We decided to provide a way to create a forward-compatible context. That is an OpenGL 3.0 context with all deprecated features removed. Giving you, as a developer, a preview of what a next version of OpenGL might look like. Drivers can take advantage of this, and might be able to optimize certain code paths in the forward-compatible context only. This is described in the WGL_ARB_create_context extension spec.

4) We decided to have a formal way of defining profiles. During the Longs Peak design phase, we ran into disagreement over what features to remove from the API. Longs Peak removed quite a lot of features as you might remember. Not coincidentally, most of those features are marked deprecated in OpenGL 3.0. The disagreements happened because of different market needs. For some markets a feature is essential, and removing it will cause issues, whereas for another market it is not. We discovered we couldn't do one API to serve all. A profile encapsulates functionality needed to meet the needs of a particular market. Conformant OpenGL products may implement one or more profiles. A profile is by definition a subset of the whole core specification. The core OpenGL specification will contain all functionality, including what is in a profile, in a coherently designed whole. Profiles simply enable products for certain markets to not ship functionality that is not relevant to those markets in a well defined way. Only the ARB may define profiles, individual vendors may not (this in contrast to extensions).

5) We will keep working on object model issues. Yes, this work has been put on the back burner to get OpenGL 3.0 done, but we have picked that work up again. One of the early results of this is that we will work on folding object model improvements into the core in a more incremental manner.

6) We decided to provide functionality, where possible, as extensions to OpenGL 2.1. Any OpenGL 3.0 feature that does not require OpenGL 3.0 hardware is also available in extension form to OpenGL 2.1. The idea here is that new functionality on older hardware enables software vendors to provide upgrades to their existing users.

7) We decided that OpenGL is not going to evolve into a general GPU compute API. In the last two years or so compute using a GPU and a CPU has taken off, in fact is exploding. Khronos has recognized this and is on a fast track to define and release OpenCL, the open standard for compute programming. OpenGL and OpenCL will be able to share data, like buffer objects, in an efficient manner.

There are many good ideas in Longs Peak. They are not lost. We would be stupid to ignore it. We spent almost two years on it, and a lot of good stuff was designed. There is a desire to work on object model issues in the ARB, and we recently started doing that again. Did you know that you have no guarantee that if you change properties of a texture or render buffer attached to a framebuffer object that the framebuffer object will actually notice? It has to notice it, otherwise your next rendering command will not work. Each vendor's implementation deals with this case a bit differently. If you throw in multiple contexts in the mix, this becomes an even more interesting issue. The ARB wants to do object model improvements right the first time. We can't afford to do it wrong. At the same time, the ARB will work on exposing new hardware functionality in a timely manner.

I want to ask you to take a deep breath, let this all sink in a bit, and then open up the OpenGL 3.0 and GLSL 1.30 specifications we just posted that have all new stuff clearly marked. Hopefully you'll agree with me that there's quite a lot of new stuff to be excited about.

http://www.opengl.org/registry/doc/glspec30.20080811.withchanges.pdf
http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.30.08.withchanges.pdf

This is certainly not the end of the OpenGL API. OpenGL will evolve and will become better with every new revision. I welcome constructive feedback.

Regards,
Barthold Lichtenbelt
OpenGL ARB Working Group chair

zed
08-11-2008, 08:16 PM
I've quickly put together a little ditty:
http://www.zedzeek.com/MUSIC/ogl.mp3

I quite like the 2nd part (1:30 onwards) - does it remind anyone of anything? (Don't wanna be accused of plagiarism.) I may use it in a song.

Ilian Dinev
08-11-2008, 08:22 PM
I prefer to look at things from the bright side: hey, now OpenGL2.1 doesn't look so bad :).
/ducks

I can't comprehend why the CAD community would need OpenGL3 in this crippled state, when the deprecated API will be butchered soon. I've had my share of porting/upgrading to a newer/another API, and drastic changes were more welcome than subtle, unreliable ones. 2.1 will continue to be maintained, so stay there. Or is it marketing reasoning?

The spec seems to make GL3 look worse than it is. An OpenGL3-only spec is due. Plus info on which features really are available on SM2/3 cards - like the varying interpolation type (wasn't it introduced in SM4?).

Semantics in the shader code, and being able to mix+match are important things that should have been there. Compiler API, too.

Oh well, 2.1 stays, cgc.exe is there with Radeon support, ATi fix things quickly, nVidia don't break too much code with every new driver, we have 5 years of proven working code, users aren't crazy about new GPUs - so it's not that bad that ATi don't provide geometry shaders, Intel's current stuff is unusable for games (thus isn't a target), larawasp will be late and expensive and as common as a physics card, raytracing takes weeks to prepare the kd-trees of scenes and will be even less common than Larrabee. And we don't need to hope/wait for improvements in GL anymore!
Those that jump ship for DX10.... good luck with the market, guys. Hope the userbase is ready when you ship a title.

Jon Leech (oddhack)
08-11-2008, 08:28 PM
Is bugzilla going to have OpenGL 3.0 as an option to submit errors for soon?

Yes, it does now.


In Table N.1, the specification lists a couple of name changes:

MAX_CLIP_PLANES to MAX_CLIP_DISTANCES
CLIP_PLANEi to CLIP_DISTANCEi

However, these haven't been changed in the document, and surely these are separate things anyway, and need updated documentation?


They are reusing the same piece of state for a different purpose when running GLSL 1.30 shaders, thus the aliased names. Once the fixed-function pipeline is fully deprecated the old names will go away, until then both names exist.
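
(A sketch of that aliasing in practice, with invented uniform/attribute names: a 1.30 vertex shader writes the new gl_ClipDistance output, and the application gates it with the old enable bit, whose token value is shared with CLIP_PLANE0.)

static const char *vert_src_130 =
    "#version 130\n"
    "uniform mat4 mvp;\n"
    "uniform vec4 clip_plane;   // user-defined plane, object space\n"
    "in vec4 position;\n"
    "void main() {\n"
    "    gl_Position = mvp * position;\n"
    "    gl_ClipDistance[0] = dot(clip_plane, position);\n"
    "}\n";

static void enable_user_clip_plane(void)
{
    glEnable(GL_CLIP_DISTANCE0);   /* same enum value as GL_CLIP_PLANE0 */
}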


Also, Appendix O: ARB Extensions doesn't include the new extensions listed in registry.

Yes, it's somewhat out of date. Not on the top of my priority queue TBH, but I'll get to it.

sqrt[-1]
08-11-2008, 08:31 PM
I have to say I am disappointed. I was all psyched up to do a GLIntercept2.0 - but it seems that will be the sole domain of gDebugger now. I also had lots of ideas for other tools I wanted to write.

I think I will take a break - learn some more of the competing API - and possibly come back when the dust settles.

As other people have said - I fail to see why CAD developers need a new API - could they not have kept it separate like OpenGL-ES 1.1 vs 2.0?

glDan
08-11-2008, 08:32 PM
I've quickly put together a little ditty:
http://www.zedzeek.com/MUSIC/ogl.mp3

I quite like the 2nd part (1:30 onwards) - does it remind anyone of anything? (Don't wanna be accused of plagiarism.) I may use it in a song.
LOL zed.
Do you do requests? ;)


Barthold, thanks for the response.
It still is depressing though.
Most all of us just expected a lot more. :(

Mark Kilgard
08-11-2008, 08:36 PM
Re: http://www.opengl.org/registry/specs/EXT/direct_state_access.txt

funniest thing i've seen all day.
OpenGL extension specifications aren't generally contenders for the designation "funniest thing I've seen all day."

I'm clearly going to have to re-read it again; I must have missed the funny part.


every function that relies on a client selector gets a new entry point.
That's certainly the gist of the extension.

Saying "client selector" isn't quite right however. Just one selector is client state (glClientActiveTexture); all the others are technically considered OpenGL server state.

And while most of the functions introduced are new, when a suitable indexed version of a function existed, such as using glEnableIndexedEXT for glEnable(GL_TEXTURE_2D), the existing indexed function was used instead of introducing a new one.

I hope this helps.

- Mark

sqrt[-1]
08-11-2008, 08:36 PM
I should also add that one of the things I think I will miss is this community - I don't know of any other technical forum that has the character and relevance of this one.

pudman
08-11-2008, 08:51 PM
Thus, prioritization was needed, and we made several decisions.

I didn't see a number associated with your decision to keep all decisions from the public. We would still complain but it definitely would have staved off this bomb.

I even see a tiny bit of hope: it only took you one day to "prepare" the purple-colored spec, which already gives more insight into at least the real changes from 2.1. But we're still missing the back story on a lot of the decision-making processes. "Profiles" seem nice in a fun world of incompatibility, but relying on the ARB for them? Funny!

Ilian Dinev
08-11-2008, 08:54 PM
I'd have given GL3 a chance if these 3 features (which do exist in hardware) weren't butchered:
- alpha-test. OK, it can be put in the shader. Fork the shader in two. SM4 and SM5 cards will need to generate a second shader for this?
- GL_CLAMP on texture-wrapping. Again, it can be put in the shader, and may again be caused by SM4 and SM5 cards' limits.
- wide lines. Uh-oh. So long, green grass, explosion/fast-particle gfx, perfectly antialiased edges, and future creativity.

Wide lines can't be created optimally, especially when there's no geometry shader. And AFAIK, CAD apps do need them.

pudman
08-11-2008, 09:01 PM
OpenGL extension specifications aren't generally contenders for the designation "funniest thing I've seen all day."

I found issue #27 quite amusing: What feedback has Id Software given on this extension?

Carmack: "This should have happened a long time ago."
Cass: "It's a lot of entry points. Can this be put into numerical terms?"

That summed up the extension for me. Korval nailed it: It doesn't really matter until it's core because ATI/Intel won't support it. But hey, go get 'em nvidia.

dorbie
08-11-2008, 09:02 PM
This situation is a bit of a tragedy. If anyone here thinks that the big players want to hang on to the legacy cruft in the spec, then they're sorely mistaken. Read the deprecation model section in the spec. They're itching to get rid of this crap.

Look at how clean OpenGL ES 2.0 and the associated glslang is.

I suspect it was excessive caution over how developers might react that stopped this from happening, and it looks like it backfired big time.

Apply the deprecation to this and it's pared to the bone & very like OpenGL ES 2.

As for the driver rewrite, drivers have already been rewritten many times over, the biggest part of your driver is geared towards cached index array rendering and shader compilers, and the compilers are on their 5th generation or so.

You don't need a rewrite to throw the cruft out. For almost all implementations the fixed function state driven engine sits there as a code stitcher that you can throw away, hardware forced this a LONG time ago. The legacy data wrangling is software.

Oh, and Larrabee? I think we have a Godwin's law for graphics now, relating to the time it'll take some bullshitter to bring up Larrabee in any discussion on graphics as the solution to the problem.

The perception here is the reality. I hope someone stands up at the BOF and says we're going to have 3.1 in a month, that it'll excise every deprecated feature in this spec, and that it will make objects core - naming your own handles is deprecated anyway.

Clean 3.1 drivers could probably be pushed out inside a month from a 3.0 driver; they should just do it ASAP and make today look like a bad memory, if it's not too late.

Rob Barris
08-11-2008, 09:10 PM
The very point of the forward compatible context mechanism in 3.0 is to allow the developer to operate their app in a mode where deprecated functions are disabled (for use as a porting tool).

So at a point in time where GL 3.0 drivers are available, if you are thinking of updating your app for the anticipated post-3.0 release where deprecated functions are actually removed - you can start that work using 3.0 and see it run.

Some implementations may actually realize performance benefits in that mode since elimination of legacy functions can also lead to elimination of state tracking for those functions.

dorbie
08-11-2008, 09:18 PM
Yes, but from the spec you have to discern the intent before you know what the hell is going on. Which has led to this political disaster. You need to put a stake through the heart of that 2.x cruft by releasing a 3.1 spec ASAP that makes it clear this stuff is gone. Not *might* be gone in some future release, but end of life, pining for the fjords. Start porting or die. GONE in the next revision.

On aesthetic and educational grounds alone it's justified.

PkK
08-11-2008, 09:20 PM
3) We decided to provide a way to create a forward-compatible context. That is an OpenGL 3.0 context with all deprecated features removed. Giving you, as a developer, a preview of what a next version of OpenGL might look like. Drivers can take advantage of this, and might be able to optimize certain code paths in the forward-compatible context only. This is described in the WGL_ARB_create_context extension spec.


Will there be a GL3-without-the-deprecated-stuff spec?
Will there be a GLX_ARB_create_context? If yes, when?

Philipp

Korval
08-11-2008, 09:22 PM
We set a goal of exposing hardware functionality of the latest generations of hardware by this Siggraph. Hence, the OpenGL 3.0 and GLSL 1.30 API you guys all seem to love

Well that failed pretty miserably, didn't it.

D3D 10 had one major defining feature: geometry shaders. Oh, it had other stuff to be sure. But that was the "big deal" of D3D 10; it was the main thing that differentiated it from D3D 9.

So, where are they in OpenGL "3.0"? Oh, an extension? How is that different from yesterday?!

If GL "3.0" doesn't even have the most widely mentioned (if not entirely useful) feature of D3D 10, how can you possibly say that you met the goal of "exposing hardware functionality of the latest generations of hardware?" Or do you expect partial credit for half-finished work?

GL "3.0" fails on two levels. It fails to do what Longs Peak was supposed to. And it fails to do what a regular core revision was supposed to (add support for new hardware).


Second, a future revision of the core spec will actually remove the deprecated functionality.

Longs Peak was the second effort to remove functionality from OpenGL, to modernize the API. It failed. I would make an entire list of the failures that the ARB has perpetrated on OpenGL, but I don't think this web page is long enough.

I would love to hear one reason, just one good reason, why we should trust anything the ARB has to say about removing functionality from OpenGL. Or anything for that matter.

There is no trust relationship between OpenGL users and the ARB anymore. The only thing we can trust the ARB to succeed at is spectacular failure.


An API that does not expose hardware capabilities is a dead API.

An API that doesn't provide a proper hardware abstraction isn't exactly lively either.


In January 2008 the ARB decided to change directions.

So, when was it decided that you would tell us nothing about it? I'm guessing that it was at the same meeting, but it'd be nice to have confirmation on that.


Those kind of issues were taking a lot of time to decide.

Oh, I imagine they were. I considered the options myself, just from the scraps of info that the ARB released. They're tough.

However, there's one very important thing that you guys failed to get: we don't care that much. You had the big things covered. If the little details were taking a lot of time, then you simply get both sides to deliver a position paper on the subject, take a vote, and have both sides agree to accept the results.

People weren't going to abandon GL 3.0 if it caused some minor inconvenience, like a proliferation of state objects. And because the API was object-based, if you made some poor decisions, you could always fix them by introducing a new object (of the same "type") that fixed the problem.

Basically, you're saying that the ARB would rather fail spectacularly than largely succeed and possibly have some small failures.

Take the C++0x Working Group as a proper example of how these things get done. There was a proposal about "uniform initialization". It was proceeding well, getting input from contributors, and so forth. Then, someone realized that it effectively made "explicit" meaningless. There was a big argument, because part of the intent of having uniform initialization is that it works the same all the time, and explicit gets in the way of that. One side wrote a big paper supporting his side. The other side wrote a paper supporting their side. A vote was taken. One side won, the other lost. Everyone accepted the outcome and moved on towards a C++0x specification.

If they were the ARB, by the way you describe the "process", then the two sides would have kept arguing for months until there was no time to get the proposal in any form into the spec, and thus it would be cut. That's not acceptable.


However, there is a class of developers for whom this would have been a potentially very large burden. This clearly is a controversial topic, and has its share of proponents and opponents.

Yes, but the overriding concerns outweigh any of those arguments: OpenGL is too complicated a spec for someone to reasonably implement and maintain without excessive effort. That is, more effort than it is really worth. That was the #1 reason that Longs Peak was started to begin with. To make an OpenGL that would be much more implementable and maintainable.

Since that was design goal #1, it automatically trumped all other concerns. In a disciplined design setting, suggestions that would go against the primary design goal of the whole thing would be thrown out a priori, because such concerns would have already been listened to back when it was decided to embark on the process.


That will make the spec easier to write, read and understand, and drivers easier to implement.

Wow, hey, I remember the ARB talking about something that would make the spec simpler and make drivers easier to implement. What happened to that? Oh yeah, you decided not to. Twice! :mad:

Is the whole "lack of trust" thing starting to sink in? Fool me once, same on you; fool me twice, shame on me.


Hopefully you'll agree with me that there's quite a lot of new stuff to be excited about.

See, it's funny. We had access to all of that yesterday, through extensions and whatnot. Others of us had D3D10 that we could have used to access those same features.

OpenGL 3.0 wasn't supposed to be about functionality; it was about form. We were supposed to have an API that could run on R300/NV30 hardware, that was simple enough to implement to actually trust that Intel would do so, and so on. That was what got people excited about GL 3.0, and it's gone now.

Is GL "3.0" going to help OpenGL users get better drivers? No; in fact, with all the added complexity of profiles and so forth, drivers will be worse than ever. Will it remove needless overhead? No; you still have to use object names (which requires a mapping table and other overhead to use), and you still have to bind them to change them (experimental extensions not withstanding).

In short, nothing that was ever promised with OpenGL 3.0 came to pass.


We can't afford to do it wrong.

If you can afford to do nothing, then you can afford to do it wrong.


This is certainly not the end of the OpenGL API. OpenGL will evolve and will become better with every new revision.

OpenGL as an API is dead, not because of lack of functionality, but because nobody who uses it does so because they want to use it. The only rational reason to use OpenGL over D3D now is because you have to. If you're developing a Linux or MacOS product. Or if you have need of some esoteric feature that other APIs don't support. Or if you've got a poorly written codebase with GL commands everywhere that you can't afford to rewrite. Anyone else who uses OpenGL over D3D is making a statement against Microsoft (and thus not preferring GL), completely ignorant of the alternatives, or simply a fool.

Nobody uses OpenGL because they like it, outside of a very small number of hobbyists. And that's something that the ARB should have been paying more attention to.


I welcome constructive feedback.

A quote comes to mind: "You know, in certain older civilized cultures, when men failed as entirely as you have, they would throw themselves on their swords." That's about as constructive as I can get.

Eventually, it's time to stop the CPR. It's time to accept the inevitable. Time to type "GG" and accept your shame. Stop trying to breathe life into that which is clearly dead. OpenGL was once a proud beast, full of life and vigor. But now it's rabid and has to be put down. Better for it to happen quickly and by those who loved it than to watch it waste away.

"It's dead, Jim."

dotLogic
08-11-2008, 09:24 PM
I read it all. Frustration is understandable, due to promises made over the past 2 years. However, this is not a HUGE tragedy. In the end it just means more work for programmers.

This was clearly handled badly in terms of communication, especially the lockdown. Being an open organization, all meetings should have transcripts; that way we would know what happens behind closed doors. At least that would allow us to understand the rationale behind certain decisions and who opposed certain features.

I take from rbarris's comments that OGL 3.0 is basically the maintenance of the status quo. OGL 3.0 is a competitor to D3D, but the feature lag for real-time developers using either API is pretty much the same. If you want the advanced features you are dependent on drivers and extensions and have to jump through a couple of extra hoops to get the features you get for free in a D3D context.

Basically what people wanted was:

a) A simpler API that could have a fresh start with new drivers done from scratch.

I think the vendors looked at this and thought - hmm, writing new drivers does not come cheap, we still have to support all the legacy code anyway, and we already have to do D3D10+, which covers 85% of the real-time market anyway. Let's just do what we do every time we need to add new features to OGL: create a new context that relies on extensions - this way, those that want to move forward can use it and those that don't, don't have to implement the extension. This basically translates into a new path to be worked around by app programmers = more work for us, and less work for driver makers. So this makes some twisted sense.

You can bet that 3D application companies like Autodesk, Luxology and others that have advanced OpenGL applications like Maya, Mudbox and Modo are not very happy with the way this turned out. At the same time certain groups within Autodesk (the CAD/VIZ division) are content with it, since they don't have to radically change their base code.

Since the main driver makers are part of the group, they share part of the blame.

b) A cleaner, leaner API that does not have 5 ways of doing the same thing.

Well, that already exists up to a point and it's called OpenGL ES 2.x. You can use it on the desktop already, but since it is only a wrapper over OGL 2.x code anyway it may not make much sense. However, if you want a leaner, cleaner way of writing your OGL application that only uses shaders, you can use this API syntax instead. It does not make sense performance-wise, but it's a possibility. In the end OGL3.0 initially promised fewer hurdles to be jumped, and this spec ends up adding a couple more.

Basically what this means is that instead of shrinking your code, you now have an extra context to support, adding a couple of thousand lines to your code if you want to support an OGL3 context in addition to any other context you may have in your engine.

In conclusion, people are disappointed not so much due to the technical details. One way or the other there is an extension or workaround that enables feature X to be done on OGL3.0, by adding complexity to the mix and reducing your target market to people that have cards and drivers that actually can run this kind of context. We wanted an OGL on a diet (I think even the Khronos people did) and it turns out that all that was done these past two years is talk about losing weight while at the same time eating cake :)

Not enough fat was cut - I blame the driver makers for that - and not enough was added in a proper way; the DSA extension should be central to a new object programming model. There was not enough courage from the board members to break away from the legacy code like it was done going from OGLES1.x to OGLES2.x. Instead a new ghetto was created for OGL3.x applications. This specification increases fragmentation of code, and in all aspects is a mess to handle. It will get you from point A to point B just like D3D; you just need to code a few hundred extra plumbing lines, and pray that the hardware and driver can handle your code correctly.

In the end it would be nice to know why things turned out like they did, since the initial vision was much more ambitious. I think it is one of those cases where...

"You want answers?!!"
"I want the truth!!"
"You can't HANDLE the truth..!!"

Edit: After reading Barthold's version of the truth I really almost can't handle it. Why didn't you come up with this in Jan08? Or even earlier, at the time the object model problems began; there are a lot of smart people on these forums *I exclude myself* who could have worked on the problem, and possibly presented alternatives.

I think the conclusion here is that Khronos needs to communicate more, and be a lot more open with the communities with which it should write open standards, and not simply present open standards that do not please the majority of the community.

I hate to admit it but Korval is right: nobody likes to use OGL as it is. It's a fat API with a programming model from the early 90s. But if you want cross-platform then you have to use it. And there are other reasons why one has to support it, but it's not exactly a joy to use or maintain. OGL needs to turn into the original OGL3.x vision, and it needs to do it FAST, or face extinction in less than a decade.

Valion
08-11-2008, 09:25 PM
This is just kind of insane. Talking about MS vs. the ARB is a joke, since the ARB just committed suicide. The most insulting thing possible is an ARB member asking for future input after outright killing OpenGL. Then again, if you're going to run an organization with a small number of cutting-edge members and a huge number of members who are invested in nothing ever improving, what can you expect?

"Hay, want to keep supporting a 7 year old piece of trash?? ?? What? No? I don't understaaaaaaand!"

Seriously, it's impossible to see this as anything other than a joke. OGL v2 was the first complete failure that caused most people to flee. But this is something different...whole platforms are being abandoned.

I don't know why there's even any discussion about NVidia, or ATI, or Intel, or any other company related to graphics. The ARB's core audience are now entirely GL 1.x-related CAD companies.

Several platforms use 'GL-like' interfaces, rather than anything actually connected to GL, and we now know why.

Any project with a 'Khronos' member involved should be abandoned immediately. There is no future, obviously. It seems really difficult for Khronos members to understand it, but they've ended OpenGL.

OpenGL is dead. Khronos killed it.

dotLogic
08-11-2008, 10:23 PM
Any project with a 'Khronos' member involved should be abandoned immediately. There is no future, obviously. It seems really difficult for Khronos members to understand it, but they've ended OpenGL.

OpenGL is dead. Khronos killed it.

Khronos did not kill OGL with Longs Peak, but it certainly did not bring it out of the coma.

Why would a game programmer pick OGL3.x instead of D3D10+ in platforms where both are available?

I don't have a good answer for that.

The khronos group has delivered good efforts, Collada and OGLES2.x are examples of that, so i would not flee just yet. I have big hopes for OpenCL but i certainly hope that group opens up and communicates with everyone that is interested.

Smokey
08-11-2008, 10:30 PM
I can understand (and I must admit, at first - I even shared) the frustration and infuriation everyone is feeling. But I seriously suggest you look at things carefully, and realistically.

OpenGL is not dead, and while OpenGL 3.0 isn't perfect - it's the best the ARB could come up with considering complete and utter internal mismanagement, miscommunication, and over-all poor execution.

OpenGL 3.0 fails to deliver many things (object model, purely resident buffers, geometry shaders, etc) - but ultimately it is a small improvement for OpenGL developers.

From a driver perspective, the standard OpenGL 3.0 profile is still going to be as bloated/messy as OpenGL 2.1 - but if you read the spec carefully, you'll notice practically ALL of the fixed function pipeline is deprecated, and then some - which should be removed in OpenGL 3.1 - and while this isn't happening 'right now' as we all hoped it would, it will be happening. Until then, we have the forward compatible context, which with a bit of luck - will have their own driver with all the optimizations we would've hoped for from not having to deal with the legacy cruft.
(I'm yet to see GLX info... were the open source people not kept in the loop?)

No, it's not everything we hoped for - and yes, the ARB have lost our trust - but OpenGL is not dead, it's just not there yet.

I have to agree with Dorbie though, OpenGL 3.1 needs critical priority - and needs to be shipped SOONER rather than later - I'm afraid the bad press, and miscommunication may have damaged OpenGL more than we realize.

In short, calm down - and read the spec again - especially appendix E.

Mavoubate
08-11-2008, 10:30 PM
I am not sure if the ARB went in the right direction with OpenGL 3.0. OpenGL users are usually very conservative. They usually like to use old functionalities (even if they should not). I am an OpenGL driver developer and believe me, OGL applications are most of the time doing very terrible things. They do those terrible things because the OGL API is quite old and it allows doing those terrible things. So in a way, I'm happy that the API gets cleaned up, but I doubt it was the right decision.

What is now the advantage of using OGL over D3D? OGL XP multi-monitor support used to be better than D3D support but this advantage is gone with Vista. The biggest advantage of OGL has always been that application writers knew that their applications won't need a total rewrite every year. They'll let OGL driver developers do all the difficult job of supporting all the old features and all the old data paths. That's a very big advantage for most application writers. Of course, it makes driver developers' lives more complicated, but we can deal with that (it looks like we'll have to support OGL 2.0 and 3.0 for a while anyway).

Now, application writers will discover that this advantage just disappeared, since a lot of the old functionalities they used are now deprecated. It's not clear how long those features will be supported. So now they're probably thinking: Well, I will have to re-write the rendering pipe of my application anyway, so why wouldn't I switch to D3D?

That's why I think that OGL trying to copy D3D is a bad idea. Both APIs used to have their own advantages. Now, other than being cross-platform, I don't see what OGL does better than D3D.

Rob Barris
08-11-2008, 10:43 PM
Some OpenGL paths that are possible for apps to take:

a) stay on GL 2.x - apps keep working for as long as GL 2.x drivers are available.

b) move to 3.0 with few or no source changes - apps keep working as long as 3.0 drivers are available. Use of new functionality is simplified by way of integration into core.

c) go beyond 3.0 as needed - eliminate usage of deprecated features in order to comply with future releases.

A key point that is getting missed is that vendors can choose which revisions to support (and for how long to support them). On 3.0 and later, the app is going to make an explicit statement about which flavor API it is prepared to operate on.

At each juncture I would expect vendors to make an assessment of which apps would be impacted if support for a given revision of OpenGL was dropped from the drivers, and decide accordingly.

OpenGL applications still don't need a total rewrite every year, because drivers need no longer present a "one size fits all" API. A new driver could ship with a "3.1" or "3.2" path available, and still be able to run 3.0 apps if the vendor chooses to provide that support. By the same token, as apps migrate from 2.x to 3.x, it provides a signal as to how long 2.x support must persist.

Korval
08-11-2008, 11:04 PM
On 3.0 and later, the app is going to make an explicit statement about which flavor API it is prepared to operate on.

Except that there's a big problem with that.

Longs Peak was initially designed for R300/NV30 quality hardware as its baseline hardware. That is, the minimum a Longs Peak implementation would support is that level of hardware.

GL "3.0" lacks a lot of D3D 10 features, but it has just enough of them that GL "3.0" proper can only be implemented on D3D 10 hardware. So, if you don't need stuff like integer textures (I'd have preferred uniform buffers) and the like, if your rendering path could have run on DX9 hardware, you can't advance. You can't use post-GL 3.0 API features on hardware that doesn't need "3.0" hardware features.

LP was a clean break, where form and functionality were properly separated. "3.0" is not.


which should be removed in OpenGL 3.1


with a bit of luck - will have their own driver with all the optimizations

One good reason. That's all I ask. One good reason why any reasonable or sane person would trust the ARB after the epic failure of what they've done here.

dor00
08-11-2008, 11:04 PM
c) go beyond 3.0 as needed - eliminate usage of deprecated features in order to comply with future releases.


Future releases? Aka... after another 2 years?

Leadwerks
08-11-2008, 11:08 PM
That's good, because I don't think there was enough work before this for the driver teams. They have done such a flawless job of supporting OpenGL 2.1, and I think it is time to give them a real challenge. Supporting three versions of OpenGL will surely keep them entertained.

When did you say OpenGL 3.1 was due out? Another year? I suppose there will be the obligatory year of silence following that, so I look forward to 2010, when we get to see all the new extensions added to OpenGL 3.1!

I have already downloaded the DX10 SDK and begun learning it. I will not gamble the future of my company on OpenGL.

Rob Barris
08-11-2008, 11:21 PM
Korval, thanks for the reminder - for many sections of GL 3.0 functionality that can be implemented on 2.x - the improved FBO and VBO capabilities for example - new ARB extensions have been specified against 2.x. You could consider adoption of those extensions to represent a variation on "a" above. Those extensions are in the registry now.

It's correct that the OpenGL 3.0 core specification targets the most recent generations of GPUs: roughly speaking the NV G80 and AMD R600 generations and beyond. This hardware floor provided a motivation to provide an extension path for 2.x as well, specifically for those classes of application that want to continue supporting older hardware.

dorbie
08-11-2008, 11:27 PM
In terms of simplification this is the worst of all worlds for an implementor, and it extends the pain indefinitely. It tries to be developer centric but in an unhealthy way that just never cleans up the landscape.

Choosing between a clean new API and an old one based on context creation is nice and simple. The right way to do this is to say "See OpenGL 2.x spec". To offer a matrix of options that overlaps is fugly and makes the support burden for implementors even higher. But the most worrying thing here is you just rattled off 3.1 & 3.2 revs without any suggestion that this really needs to get sanitized as part of a clean break. Leaving deprecation up to vendors in a knife fight turns deprecation into a game of chicken. It's so obviously flawed I guess you have to be in the middle of it to miss the boat anchor fetish.

I can see the motivation here but having an excuse is not the same as having a sound justification, the future cannot be about serving the needs of the slowest moving elements of the developer community and pissing off the rest. Dinosaurs who can't port their apps do not deserve the right to hold back the future. They can be serviced with OpenGL 2, accommodating that model in OpenGL 3 as a bridge to the future, well, what does it really accomplish? Help them feel better about riding the short bus?

If you wanted to provide this you could have given them semi-supported developer drivers for their port (not for general release, EVER), it didn't need to go in the spec and be rolled out on everyone's machine.

Rob Barris
08-11-2008, 11:31 PM
Yes, but from the spec you have to discern the intent before you know what the hell is going on. Which has led to this political disaster. You need to put a stake through the heart of that 2.x cruft by releasing a 3.1 spec ASAP that makes it clear this stuff is gone. Not *might* be gone in some future release, but end of life, pining for the fjords. Start porting or die. GONE in the next revision.

From aesthetic and educational purposes alone it's justified.

A very high priority goal was not to remove any function without clearly communicating to the developer base that said function was on its way out, and this goal led to the development of the deprecation model.

So now we have a spec that clearly marks what's deprecated, and we also have a facility for developers that want to start work on updating their app for the next revision, to do so under a GL 3.0 driver - by requesting a forward compatible context at runtime. i.e. you can use a GL 3.0 driver to simulate running under a 3.1 driver where the deprecations have actually taken effect. Depending on implementation, some drivers may run faster in this mode due to reduced state tracking overhead.

Korval
08-11-2008, 11:36 PM
Those extensions are in the registry now.

Forgetting for the moment the fact that you completely missed the point, consider this.

ATi's policy on OpenGL at this point is to ignore any and all extensions that they don't already support. They had no intention of supporting the D3D 10 extensions; they will only support core features.

So reliance on extensions is like relying on the ARB: never a good idea.

However, you missed the main point, which was that there are (potentially if not in fact) advantages to using the "deprecated functions don't exist" version of OpenGL "3.0". Possible performance optimizations and so forth. However, I can't use that unless I'm already using a GL "3.0" context, which requires more hardware than I would like to support.

dor00
08-11-2008, 11:44 PM
This is just kind of insane. Talking about MS vs. the ARB is a joke, since the ARB just committed suicide. The most insulting thing possible is an ARB member asking for future input after outright killing OpenGL. Then again, if you're going to run an organization with a small number of cutting-edge members and a huge number of members who are invested in nothing ever improving, what can you expect?

"Hay, want to keep supporting a 7 year old piece of trash?? ?? What? No? I don't understaaaaaaand!"

Seriously, it's impossible to see this as anything other than a joke. OGL v2 was the first complete failure that caused most people to flee. But this is something different...whole platforms are being abandoned.

I don't know why there's even any discussion about NVidia, or ATI, or Intel, or any other company related to graphics. The ARB's core audience are now entirely GL 1.x-related CAD companies.

Several platforms use 'GL-like' interfaces, rather than anything actually connected to GL, and we now know why.

Any project with a 'Khronos' member involved should be abandoned immediately. There is no future, obviously. It seems really difficult for Khronos members to understand it, but they've ended OpenGL.

OpenGL is dead. Khronos killed it.

Now I get it. Microsoft don't need to sh*t their pants about Linux gaming for the next few years. That sounds like a plan.

Lets compare now(minimal as much as my brain can do):

DirectX 10.1 SDK:
- works with latest hardware
- lots of documentation
- lots of examples
- lots of tools (even texture format/model/shaders)
- updated regularly

OpenGL 3.0:
- 2 new PDF documents (which remain to be implemented in the vendors' next drivers, haha.. can't wait for the new bugs and poor implementations of those)
- new website with Khronos/ARB picture.

Which one is more attractive?

Btw, what is gonna be posted on the http://www.opengl3.org web site? More docs? Khronos/ARB pictures from SIGGRAPH? Hehe... we need more pics!!! No wait, we need more extensions, more extensions.. must be a sh*tload of fun to write those papers without needing to worry about examples/implementation/docs and so on.. give us more extensions haha...

Without doubt, Khronos/ARB are just losing control.. yeah, exactly, they have no "master" to push them; losing direction is very easy..

RIP OpenGL 11-13/08/2008

dorbie
08-11-2008, 11:57 PM
As I said in a subsequent post, deprecation seems to be a game of chicken now and that's a bad thing. Even ignoring that the driver support matrix is complicated by this. It might even do a good job of guaranteeing the longevity of cruft while complicating driver development and test, at least for now.

Timothy Farrar
08-12-2008, 12:03 AM
With the seeming bias towards negative comments here, I think it would be wise to actually take a look at what GL3 now has as core in the spec and place all this in perspective. Nearly all very useful DX10-level functionality is there, which brings GL3 to a near up-to-date API for current hardware functionality.

Pending driver support, which I'm sure will soon follow from all the major vendors, this will enable a developer to actually use current hardware features on all platforms, as well as hopefully get cross-platform DX10-level support on Windows XP (assuming ATI/Intel release XP GL3 drivers). If this is the case, GL3 will actually enable a considerable market advantage over a DX10-only title, because Vista still has little market share.

IMO, GL3 is an excellent step forward!

dorbie
08-12-2008, 12:12 AM
It's not all bad, with the deprecated stuff out it's very clean and ES like as I said in my first post, but that stuff's still in there. As for the missing stuff there's no point in pouring salt in the wounds.

Korval
08-12-2008, 12:13 AM
Nearly all very useful DX10 level functionality is there

That's a lie and you know it. Geometry shaders and uniform buffers are both missing. I haven't done a side-by-side feature analysis, but both of those are important D3D 10 bits of functionality.

dorbie
08-12-2008, 12:22 AM
To be fair you need to visit his web site, he has a nice summary that's not as gloom and doom as your assessment.

http://www.farrarfocus.com/atom/

I hope they clean up this "deprecated" roadmap.

Simon Arbon
08-12-2008, 12:45 AM
So now we have a spec that clearly marks what's deprecated, and we also have a facility for developers that want to start work on updating their app for the next revision, to do so under a GL 3.0 driver - by requesting a forward compatible context at runtime.
This whole "Depreciation" nonsense and having two different versions of OpenGL 3.0 at the same time seems to be causing a lot of confusion here, so lets just look at it from a different angle.
The new drivers will export 3 different types of context:
1/ A 2.1 Context with extensions to add new functionality.
2/ A 3.0_Full Context that is basically the same thing as the above but with the new extensions promoted to core.
3/ A 3.0_Forward_Compatible Context which is the "Real" version 3.0 with the legacy stuff removed.

A lot of confusion (and yelling) would have been prevented if 3.0_Full had just been called version 2.2 (Which is what it is, 2.1 with some promoted extensions).
The "Real" 3.0 could then have stood on its own as the new future API with all the cruft removed.

If the hardware vendors create a 3.0_Forward_Compatible DLL that only implements the performance API then this is all that needs to be loaded when an application requests this context.
A second DLL would then implement the 2.1/2.2 legacy API on top of the performance API, only being loaded if the application requests either of these contexts.

In the future these can then be implemented as separate "profiles" in future versions of OpenGL,
i.e. the OpenGL 3.1_Legacy profile for compatibility with obsolete programs, and the OpenGL 3.1_Performance profile for everybody else.

Groovounet
08-12-2008, 01:01 AM
Barthold, thanks for the response too

Maybe it's the idea of stopping communication with the community that makes us so angry... I guess the ARB wasn't expecting a better reaction from the community?

Korval
08-12-2008, 01:05 AM
3/ A 3.0_Forward_Compatible Context which is the "Real" version 3.0 with the legacy stuff removed.

No, sorry. The real 3.0 (aka Longs Peak) was more than just removing cruft. It was a different API that removed and changed the fundamentals of how things work. Tossing out the old API was just step 1 for LP. The new object model, the way object creation was atomic and inherently threaded, the way object creation was success-or-fail (taking away the guessing game that is OpenGL), etc. All that stuff is gone and will never see the light of day.


To be fair you need to visit his web site, he has a nice summary that's not as gloom and doom as your assessment.

As I pointed out in my rebuttal to Barthold, features aren't what GL 3.0 was supposed to be about. It's like if someone promised you a car, but got you a boat instead. Yeah, a boat is kinda nice and all, but I can't drive on land with it.

dorbie
08-12-2008, 01:08 AM
The new drivers will export 3 different types of context:
1/ A 2.1 Context with extensions to add new functionality.
2/ A 3.0_Full Context that is basically the same thing as the above but with the new extensions promoted to core.
3/ A 3.0_Forward_Compatible Context which is the "Real" version 3.0 with the legacy stuff removed.

A lot of confusion (and yelling) would have been prevented if 3.0_Full had just been called version 2.2 (Which is what it is, 2.1 with some promoted extensions).
The "Real" 3.0 could then have stood on its own as the new future API with all the cruft removed.


Nobody is yelling. This was already outlined by Rob Barris, it is not confusing. The problem is it is not a simplifying plan, quite the opposite.

3.0 with legacy is not 2.2, in those terms it's a superset (although not exactly) and a key priority should have been to avoid this. 3.0 forward compatible will coexist. Going forward, when 3.1 arrives we're told that whether the deprecated functionality is actually unsupported or slow is entirely up to the vendors. There is no 2.x and 3.x path, which would have been a damned sight cleaner than the proposed plan. There will be the superset and the forward compatible version, with the superset having some TBD support for legacy (in competition with other vendors).

You don't move away from the burden of legacy support by doing something that significantly complicates the driver situation offering a matrix. You'd be forgiven for seeing it as the worst of both worlds.

It's clear this is an attempt to be very developer centric, but well, it is what it is.

Simon Arbon
08-12-2008, 01:16 AM
DEPRECATED FEATURES OF OPENGL 3.0:
Unified extension string - EXTENSIONS target to GetString (section 6.1.11).
Oh great, I just got my application startup to under half a second and you decide to make extension string processing SLOWER.

I know a lot of people seem to have trouble reading a PCHAR, but then the spec itself doesn't help:

Applications making copies of these static strings should never use a fixed-length buffer, because the strings may grow unpredictably between releases, resulting in buffer overflow when copying.
This is particularly true of the EXTENSIONS string, which has become extremely long in some GL implementations.

WRONG, people! There is absolutely no reason at all to make an identical copy of a string you already have; it should simply be scanned from beginning to end while checking the extension names. This footnote is supposed to be a warning but instead is telling people it's OK to do the wrong thing.

There have been many posts to the suggestions forum suggesting better ways to detect extensions such as providing those with official extension numbers as a set/bitmask, or providing an array of pointers to the start of each one.
But the *GetStringi( enum name, uint index ); option should only be there to help people with limited programming skills, those who know what they are doing should still have access to the whole string.

Actually there is one way that Extensions should be improved, the application should be able to say which version of OpenGL it was written for and ask for only those extensions that are not core in that version.

ector
08-12-2008, 01:23 AM
Yes, but from the spec you have to discern the intent before you know what the hell is going on. Which has led to this political disaster. You need to put a stake through the heart of that 2.x cruft by releasing a 3.1 spec ASAP that makes it clear this stuff is gone. Not *might* be gone in some future release, but end of life, pining for the fjords. Start porting or die. GONE in the next revision.

From aesthetic and educational purposes alone it's justified.

A very high priority goal was not to remove any function without clearly communicating to the developer base that said function was on its way out, and this goal led to the development of the deprecation model.

So now we have a spec that clearly marks what's deprecated, and we also have a facility for developers that want to start work on updating their app for the next revision, to do so under a GL 3.0 driver - by requesting a forward compatible context at runtime. i.e. you can use a GL 3.0 driver to simulate running under a 3.1 driver where the deprecations have actually taken effect. Depending on implementation, some drivers may run faster in this mode due to reduced state tracking overhead.


That's all well and good, but I really think one thing should be done: The spec spends a lot of words talking about functionality that is then deprecated in an appendix. I think EACH AND EVERY deprecated function should have a big red DEPRECATED stamp next to its description. And you should also release a second PDF that simply does not contain the deprecated functionality.

Chris Lux
08-12-2008, 01:25 AM
The very point of the forward compatible context mechanism in 3.0 is to allow the developer to operate their app in a mode where deprecated functions are disabled (for use as a porting tool).

So at a point in time where GL 3.0 drivers are available, if you are thinking of updating your app for the anticipated post-3.0 release where deprecated functions are actually removed - you can start that work using 3.0 and see it run.

Some implementations may actually realize performance benefits in that mode since elimination of legacy functions can also lead to elimination of state tracking for those functions.


I don't miss the point completely. BUT one point of OpenGL 3.0 was to make the development of drivers much easier. Now, with several profiles to worry about, how in hell is this easier on the driver developers? You _have_ to support the legacy profiles, and you do want the performance enhancements possible with the reduced profile. They have to interact, and they surely do not want to maintain x driver profile code branches.

So tell us, how can this improve overall driver quality?

dor00
08-12-2008, 01:36 AM
Sounds like a new total chaos era..

Simon Arbon
08-12-2008, 01:39 AM
No, sorry. The real 3.0 (aka Longs Peak) was more than just removing cruft. It was a different API that removed and changed the fundamentals of how things work. Tossing out the old API was just step 1 for LP. The new object model, the way object creation was atomic and inherently threaded, the way object creation was success-or-fail (taking away the guessing game that is OpenGL), etc. All that stuff is gone and will never see the light of day.
I agree that a lot of what we were promised is missing and I am very disappointed, but I am committed to a Windows/Linux/Mac application so I have no choice but to make do.
My previous job was as a quality control engineer, and if anyone had suggested this "deprecate and evolve a bit at a time" plan to me I would have had them sacked; it's only asking for trouble.


There is no 2.x and 3.x path which would have been a damned sight cleaner that the proposed plan.
You don't move away from the burden of legacy support by doing something that significantly complicates the driver situation offering a matrix. You'd be forgiven for seeing it as the worst of both worlds.
But who in their right mind is going to use 3.0_Full anyway, you would have to be crazy.
The 2.x support needs to stay so old applications still work, but if most people start using 3.0_Forward_Compatible immediately then the vendors can concentrate their efforts on getting this working well and nobody will care if 3.0_Full is full of bugs as no-one will be using it.

dorbie
08-12-2008, 01:51 AM
There is no 2.x and 3.x path which would have been a damned sight cleaner that the proposed plan.
You don't move away from the burden of legacy support by doing something that significantly complicates the driver situation offering a matrix. You'd be forgiven for seeing it as the worst of both worlds.
But who in their right mind is going to use 3.0_Full anyway, you would have to be crazy.
The 2.x support needs to stay so old applications still work, but if most people start using 3.0_Forward_Compatible immediately then the vendors can concentrate their efforts on getting this working well and nobody will care if 3.0_Full is full of bugs as no-one will be using it.


You've just said this spec is 3 times the size it needs to be and legacy support with all its burdens is now essential just to support crazy people.

You go on to outline a nightmare that can be avoided just have 2.x and 3.x forward, no superset. It's easier, less buggy and gets everyone where they either want to be or deserve to be faster.

Zengar
08-12-2008, 02:15 AM
The problem is, we all expected and were looking forward not to a new version of OpenGL, but to a new API. Instead, the ARB sadly decided to be "politically correct" and make the transition slowly. Now, we all know that this kind of stuff does not work.

There are several people who pointed out that writing a new driver for the LP model would be too much work. I have to disagree. The most complicated part - GLSL - is already there in current drivers, and the rest could be done quickly by writing new interface functions for the driver core, as LP was supposed to do the same thing as GL 2.1 but simpler. Deprecation model? Well, Dorbie and others are right... we need a "new features only" spec ASAP!

Of course, the GL3 spec is not so bad, if we look at it as a 2.2 revision. Lots of interesting things are in the core now. I don't really miss the geometry shaders; I think the decision to keep them as an extension was right. The more ES-similar shading language is also a very good idea.

But this spec does not address one of the most important problems of GL: bad driver support. It does nothing to ease the life of the driver developers, as LP should have done. Result: Nvidia will have new drivers soon, ATI won't, Intel... well, who cares about them anyway :p While GL3 supports new features, there is nothing in it that would make it attractive for the developer or the IHV. Therefore, the stagnation just continues. GL is more and more becoming a dead standard, a bloated mass of spec with no particular sense. It still includes EVIL features like selection mode, despite driver vendors having stopped caring about it a long time ago, only because of some CAD guys with a total inability to write good code (who don't need the new features anyway and could have stayed with 2.x). You can't please everyone: either you do things right (and then you MUST break backward compatibility), or you stay with the old API model (= bad drivers, guesswork, inconsistencies etc.).

Finally, I would like to repeat my point: it was never about the new features, guys, it was the new API that was needed and waited for! This is the reason why we are so upset...

dor00
08-12-2008, 02:18 AM
http://www.opengl.org/registry/specs/EXT/direct_state_access.txt

interesting how AMD is missing from contributors

Mars_999
08-12-2008, 02:33 AM
Yeah, ATI has been missing from GL a lot lately, and with this new GL3.0 spec I want to know when ATI is going to get me a driver so my code will work; as of now Nvidia hardware is all I can run my code on.

CrazyButcher
08-12-2008, 02:41 AM
Is there any chance someone takes up the job of making a "deprecated-removed" API + that "direct access" thing as a spec?

Because those two, well presented, would have been what people sort of hoped to get, and it's not like nothing happened. It's just presented poorly. Aside from the fact that it was expected to be "core", but well, compromises will always happen. Let's try to present the good things better.

A spec with the to-be-deprecated stuff marked in "red" or greyed out would be a clear win, and would feel less bloated - better for developers to see what to use and what not.

Mars_999
08-12-2008, 02:46 AM
Screw it I am in this for the long haul, I am sticking with GL and if it dies, maybe Larrabee will be the next best thing and I can jump ship to that...

How long will my GL2.1+ code with Extensions like texture arrays work with the new GL3.0 drivers?

WHEN WILL ATI have their drivers for GL3.0 out? I need INFO!!!

Simon Arbon
08-12-2008, 02:47 AM
4)... A profile encapsulates functionality needed to meet the needs of a particular market. Conformant OpenGL products may implement one or more profiles. A profile is by definition a subset of the whole core specification. The core OpenGL specification will contain all functionality, including what is in a profile, in a coherently designed whole. Profiles simply enable products for certain markets to not ship functionality that is not relevant to those markets in a well defined way.

This bit is really worrying me as it has not been explained properly.
Does this mean that if I write a CAD program that uses features normally used in games, in 2 years I will suddenly get a hundred service calls from people with Quadros that have just downloaded a new driver that no longer supports my profile?

What I would really like is for someone to simply confirm that everything in 3.0_Forward_Compatible is going to be included in EVERY profile, i.e. it is the core feature set!


by releasing a 3.1 spec ASAP that makes it clear this stuff is gone
should also release a second PDF that simply does not contain the deprecated functionality
we need a "new features only spec" ASAP! I certainly agree with this, it is horribly confusing trying to read a spec thats actually several overlapping specs.
This may be useful for driver writers but the developers need a separate document that describes 3.0_Forward_Compatible and is not cluttered-up with all the old stuff.


You've just said this spec is 3 times the size it needs to be and legacy support with all its burdens is now essential just to support crazy people

Yep, that's what I said.
Although they probably call themselves "Legacy application developers".

You go on to outline a nightmare that can be avoided just have 2.x and 3.x forward, no superset. It's easier, less buggy and gets everyone where they either want to be or deserve to be faster.

I am hoping that this is what they intend to do with the profiles when they introduce them: 2.x could be called the "Luddite" profile and 3.x could be the "performance" profile.

Nighthawk
08-12-2008, 03:12 AM
It would have been more honest to name this spec 2.2 and leave the name 3.0 for a future cleaned-up version without the deprecated features - that was what most people were expecting.

I don't care too much about the syntax(objects and extension), but the important features must perform fast and bug free - which is difficult to achieve when the driver is littered with legacy code.

On a sidenote, when is ATI going to expose the geometry shader in OpenGL?

zeoverlord
08-12-2008, 03:41 AM
If a lot of old legacy features were deprecated but not removed until 3.1, then why call it 3.0? Why not call it 2.9 or something like that, and then have a clean"er" break with 3.0, which would also then introduce all the SM4 stuff.


Darn: beaten to it.

Simon Arbon
08-12-2008, 03:43 AM
With the ARB basically saying that they are going to go back into hibernation for another year before releasing 3.1,
it is going to be up to ATI and NVIDIA to bring OpenGL up to date.
They are the only hope we have left.

So how about it, mighty hardware vendors, when can we have extensions for the fabled ATI tessellator or Scattered writes?

Here's a quick list to include in your new 3.0 drivers:
-Geometry shader
-Tessellator
-Blend shader
-Separate pre and post Z-discard shaders
-Post-rasterisation instancing (for multipass deferred shaders)
-Updatable resolution LOD textures
-ByteCode GLSL
-Precompiled binary save/restore
-Multi-threaded background buffer loading
-A way to query if wanted functionality is hardware supported or emulated in software

pjmlp
08-12-2008, 04:21 AM
I am also still wondering if I should not switch to DX now that OpenGL 3 is another step in the direction of Glide.

I really miss all the DX support on the OpenGL side. Just look at the support given by the hardware makers. I think that the only reason ATI and NVidia still care about OpenGL is due to the CAD companies.

Just go to their developer sites and look for the amount of tools/documentation that they are providing for OpenGL and for DX.

Even NVidia, which supposedly gives the best OpenGL support, still does not support GLSL in FXComposer!

Nah, for the time being I'll keep using OpenGL for my hobby projects but I am already planning to learn DX.

dor00
08-12-2008, 04:32 AM
That's totally true.

Even if we have 3.0, there is no support at all, not to mention support tools like DX has. An "SDK" is completely missing.

Red_dragon
08-12-2008, 04:35 AM
@People who are looking for a "new feature only spec" there is a spec that highlights the new parts [in magenta though :(( ] and strikes out the deprecated parts.

Now to the subject :
Well, this certainly proves that Khronos did not have the capacity to be entrusted with upgrading OpenGL in the first place.

What was supposed to be the main focus of OpenGL 3.0 was the new form, so every developer with a product for a specific purpose and hardware would be able to port code to it and maintain the code much more easily.

But OpenGL 3.0 is utter fail in having an identity and a purpose to serve.
Is it an all-encompassing standard ? Fail, it needs DX10 class hardware to support it fully.
Performance that rivals DX ? Fail, it does not really represent the internal hardware, and tuning your program to have good performance needs extensive knowledge of the underlying hardware; it probably needs a book of its own to get good performance.
Is it an intermediate standard for something better ? Double Fail:
1 - From Jan 2008, the new direction should have been announced, asking developers for feedback and help [along with an apology for what you bragged about at OGL BOF 2007 and what you were to deliver], not by doing it like this and leaving OpenGL developers with a very strong feeling of resentment.
2 - Every new OpenGL standard that is to be half-useful, SHOULD break code compatibility, vendors that are too lazy to update their OGL code should rely on an older version, be it OGL 2.1 + Extensions, be it "Non deprecated OGL 3.0"

Does it really bring something with the "Direct Access" extensions ? Most probably a failure: people would have to bother to write Direct Access code for it - fewer developers, less feedback - as well as AMD's and Intel's reluctance to make competitive OpenGL drivers [reading the first lines, it only mentions nVidia bothering about implementation and contributions as a major GPU vendor]

Does it succeed in being taken seriously ? FAIL, and that probably could have been the only thing that would warrant serious support and enthusiasm from the developer community.

To put it bluntly, the only people who are really happy are the ones that have tons of existing legacy code. What I want answered by the ARB people here is this: they feared to branch the code, and they failed to confront the CAD companies, so why should I expect anything different in the upcoming year? How are you going to muster the courage for that? How am I going to believe that you actually CAN remove deprecated features??

knackered
08-12-2008, 04:57 AM
OpenGL extension specifications aren't generally contenders for the designation "funniest thing I've seen all day."
I'm clearly going to have to re-read it again; I must have missed the funny part.
Then read it again, and put it in the context of someone who assumed we were moving to the object model, rather than someone who knows it was a compromise reached after weeks of meetings with CAD companies.
Believe me, it's funny.

Eckos
08-12-2008, 04:59 AM
Does the ARB know that the garbage they pulled will literally kill the OpenGL community? Like seriously, on every forum I've been to that does graphics, like GameDev, here, and others, a lot of people have started learning Direct3D now. It sucks that the ARB doesn't give the community what they asked for and wanted. OpenGL can't compete with Direct3D anymore, no nothing. They basically just shot down any chance of rivaling Direct3D to bare nothing.

Even if they release the stuff, how many more years are we going to be stuck with stupid 1.x/2.x crap?

bobvodka
08-12-2008, 05:23 AM
Just as a point I have it on good authority that this change wasn't down to CAD companies. As to what happened inside the ARB to bring about the current situation, well that I don't know, but I can state that it wasn't the CAD guys who caused it.

Trenki
08-12-2008, 05:33 AM
I'm really disappointed by the work of the ARB. I was hoping for a clean start with a new API. But hey, OpenGL can still be extended with extensions. We should develop a new extension GL_EXT_teh_real_ogl3 as a community effort :)

knackered
08-12-2008, 05:33 AM
well I can honestly believe that. I've had the pleasure of working with the source code of a few big name CAD/engineering apps over the years, and they all had the rendering code quite well abstracted. This is to be expected, because they go back a long way before OpenGL, like SGI's original GL etc.
The new object model would have simply meant a few new source files put into the makefile, and a smattering of #ifdef's.
But the ARB needed a scapegoat - and to some extent so do we.
Having said that, I did read somewhere that AutoCAD still uses selection mode, which suggests GL is heavily embedded in their source.

CrazyButcher
08-12-2008, 05:40 AM
@People who are looking for a "new feature only spec" there is a spec that highlights the new parts [in magenta though :(( ] and strikes out the deprecated parts.

if you refer to
http://www.opengl.org/registry/doc/glspec30.20080811.withchanges.pdf
either I am blind, or nothing is struck out for me - just the new stuff is in magenta.

Red_dragon
08-12-2008, 05:44 AM
The GLSL specs have the strikeouts, but the GL specs don't.

dor00
08-12-2008, 05:52 AM
I am completely disappointed.

Two years was plenty of time to write a completely new API, with docs and examples and everything, considering the number of ARB people involved.

Honestly, is that what you show at SIGGRAPH?? It can't be true...

Yeah, OpenGL looks more dead than alive now. There are loads of reasons to consider that. And yeah, I compare it with DX.

Sqewie
08-12-2008, 05:53 AM
OpenGL 3.0?
You mean 2.2; that's more like it. Believe me, if they keep heading this way, extending the extended extensions, they will be killed by legacy.

Overmind
08-12-2008, 06:00 AM
Has anyone noticed that only fixed function *vertex* processing is deprecated? What about fragment processing??

Kazade
08-12-2008, 06:10 AM
Just a quick question. Seeing as ALL the matrix stuff is deprecated (glMatrixMode, glTranslate, etc.) does that mean we need to keep track of the matrices (modelview, texture etc.) ourselves and pass it to the vertex shaders?

Chris Lux
08-12-2008, 06:13 AM
Has anyone noticed that only fixed function *vertex* processing is deprecated? What about fragment processing??
I would call it a half-a**ed attempt. The more I read the new spec, the less I can understand it.

What in hell prevented them from doing a fully new API as a GL3 profile? Even if it would take versions up to 3.5 to get it round and clean. Why not try it this way?

The profiles are nice; they can be used in the future to distinguish hardware tech levels (like D3D is doing ;)). But in this form they are a waste.

I think this is just a marketing game to say GL3 is the equivalent of D3D10, which it clearly is NOT (geometry shaders, for example).

As nice as the new direct state access extension is, it will take ages to get it into shape, and then ages to get the then-deprecated bind-to-change code out of the code bases (with, again, resistance from some software vendors to change their bases).

All in all, just a waste of the time and good effort that went into the Longs Peak API.

Chris Lux
08-12-2008, 06:15 AM
Just a quick question. Seeing as ALL the matrix stuff is deprecated (glMatrixMode, glTranslate, etc.) does that mean we need to keep track of the matrices (modelview, texture etc.) ourselves and pass it to the vertex shaders?
yes

skynet
08-12-2008, 06:19 AM
Yes, and it's going to be a mess since the promised "program environment uniforms" are also missing. You have to upload each matrix into _all_ the shaders that use it, every time it changes. The same is true for glPushAttrib/glPopAttrib: they ripped them out without providing a proper replacement mechanism (such as state objects).

k_szczech
08-12-2008, 06:27 AM
OpenGL 3.0?
You mean 2.2, that's more like it.
OpenGL 2.2?
You mean 1.8. We've been through "major version" once already.

OpenGL 3.0 was exactly what I'm looking for. It was like a dream.

Today I woke up and the World looks grey again.

dukey
08-12-2008, 07:58 AM
I've been working with opengl for a few years now.
I think the 2.1 API is basically solid. I would much prefer to see an OpenGL ES approach to OpenGL 3.0, i.e. strip out the parts of the API which suck. Immediate mode for one, since it is nearly impossible to optimise for; plus I've seen so many CAD programs try to push 500k vertices per frame and wonder why performance is so poor. Display lists are another; these must be a nightmare to maintain at the driver level. Other bizarre stuff such as glRectf?? That really has no place in the API.

I'd also like to see frame buffer objects standardised rather than just be extensions. Plus, the ability to do multisampling with offscreen render targets. The current extensions to do this seem to be somewhat of a mess.

It would also be a novelty to have updated header files. The header files for Windows are now 12 years old? I mean, wtf.

Rob Barris
08-12-2008, 08:15 AM
Dukey, FBO is a core feature in GL 3.0 (including multisampling and blit).

ScottManDeath
08-12-2008, 08:22 AM
And what about multisample textures and signed fixed point texture formats? Cubemap arrays? Per color attachment blend modes?

Rob Barris
08-12-2008, 08:35 AM
Has anyone noticed that only fixed function *vertex* processing is deprecated? What about fragment processing??

Both FF vertex and pixel processing are deprecated in OpenGL 3.0, it's in appendix E (a few paragraphs down from the reference to FF vert processing).

NeARAZ
08-12-2008, 08:39 AM
Whoa, GL3 finally sees the light of day. Whoa-whoa, the response to that was... well... "interesting".

So just to recap, what I was expecting from GL3 (link (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Main=45784&Number=239105#Post239105), link (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Main=45784&Number=241619#Post241619), link (http://aras-p.info/blog/2007/11/08/what-opengl-actually-needs/)):

1. Major cleanup of the runtime, resulting in simpler drivers, hence more stability and possibly even performance. Fail.

2. Make GLSL actually useable. Precompiled binary shaders, one offline compiler that does basic optimizations. Fail.

Without those two, OpenGL/GLSL is still quite unusable in the real world. Yeah, promoting some extensions to core is a nice gimmick (driver support is still an open question), thinking about "some quite possible deprecations that maybe perhaps someday we'll most likely do" is sure nice, yeah. And spectacular handling of public relations of the whole thing.

"Keep up the good work", what else can I say. Me back to fighting driver bugs.

Brolingstanz
08-12-2008, 09:36 AM
Simon, that strikes me as a way to mitigate the profusion of errors going forward.

The glGetStringi API and the new tokens for retrieving the major and minor version numbers together will, I think, make it harder to do the wrong thing, and play into the deprecation hand quite nicely.

knackered
08-12-2008, 09:39 AM
is nobody going to tell us why the community weren't informed of this change in direction in january? or february? or march? ...etc.

dor00
08-12-2008, 09:53 AM
is nobody going to tell us why the community weren't informed of this change in direction in january? or february? or march? ...etc.

Maybe because, in the ARB members' eyes, the rest of us are just a bunch of idiots who don't need to interfere with the big guys' business?

The "big brothers" do what's best for us.. we don't need to disturb them with our stupid ideas.

Timothy Farrar
08-12-2008, 09:58 AM
So how about it, mighty hardware vendors, when can we have extensions for the fabled ATI tessellator or Scattered writes?

Here's a quick list to include in your new 3.0 drivers:
-Geometry shader
-Tessellator
-Blend shader
-Separate pre and post Z-discard shaders
-Post-rasterisation instancing (for multipass deferred shaders)
-Updatable resolution LOD textures
-ByteCode GLSL
-Precompiled binary save/restore
-Multi-threaded background buffer loading
-A way to query if wanted functionality is hardware supported or emulated in software


"Scattered writes" - OpenCL will be covering this I'm sure. Cross platform GPGPU is not a fully solved problem yet, even DX11 prototypes only are seeing a 2x performance improvement (out of say the 4x or 8x which is possible in theory) with this type of "compute shader" functionality (which is what scattered writes are for). So don't expect this to be figured out yet, and thus isn't core in GL or anywhere else.

"Geometry shader" - Is supported in extensions by Apple and NVidia. However wasn't the fast path on the hardware (in fact very slow). Other methods, such as Hystopyramids, showed much faster performance for similar functionality. Also NVidia's hardware only recently (200 series) made the updates to support better geometry shader performance. So really not as useful as you would think, and understandably out of core spec to make it easier for vendors to build GL3 drivers.

"Tesselator" - Lack of current cross platform hardware support, no reason to think about this now.

"Blend shader" - Lack of hardware support period. Not even in DX11 I believe.

"Seperate pre and post Z-discard shaders" - You can do this yourself with a branch in a shader. Or with stencil if you want a more hardware friendly path (due to 2x2 pixel quad packing into vectors for hardware).

"Post-rasterisation instancing (for multipass deferred shaders)" - What? I think you need to describe what you are looking for here, any why you cannot do this type of thing with current GL3 functionality.

"Updatable resolution LOD textures" - What do you mean here?

"ByteCode GLSL, Precompiled binary save/restore" - If you really get into the internals of both the order and (especially) newer hardware you will see that a common byte code for all vendors is a bad idea because hardware is way too different. All vendors would still have to re-compile and re-optimize into the native binary opcodes. So all you would be saving is parsing strings into tokens which really isn't much of a savings. Due to all the different hardware, shaders in the form of pre-compiled binaries really only makes sense in the form of caching on a local machine after compile, and perhaps might be something to request as a new feature.

"Multi-threaded background buffer loading" - You can map as many buffers as you want and fill them from whatever threads you want, so this is currently easy to do.

"A way to query if wanted functionality is hardware supported or emulated in software" - The GL3 standard provides a listing of required functionality (especially in texture formats) to the route forward for knowing what is supported seems rather clear. Now it is onto the vendors to create correct drivers.

Also, cubemap arrays (different post) are not hardware supported on (many of the) NVidia cards(?), so I wouldn't expect that to be core, but rather an extension if ATI sees it as worthwhile.

Brolingstanz
08-12-2008, 10:28 AM
Judging from the DSA spec, NV has at least a couple of contenders in the works (NV_explicit_multisample (?), NV_texture_cube_map_array).

ector
08-12-2008, 10:45 AM
"ByteCode GLSL, Precompiled binary save/restore" - If you really get into the internals of both the order and (especially) newer hardware you will see that a common byte code for all vendors is a bad idea because hardware is way too different. All vendors would still have to re-compile and re-optimize into the native binary opcodes. So all you would be saving is parsing strings into tokens which really isn't much of a savings. Due to all the different hardware, shaders in the form of pre-compiled binaries really only makes sense in the form of caching on a local machine after compile, and perhaps might be something to request as a new feature.

You're forgetting the #1 advantage of the token approach: updated drivers will not break your shader compiles. Currently, nVidia can fix a bug in its parsing that renders a previously parsable program illegal, or a program can be illegal on ATI but legal on nVidia while using just basic features with slightly out-of-spec syntax. This problem, which is a major one, SIMPLY DOES NOT EXIST on D3D, and is the major reason why D3D has you compile the shaders into tokens first.

Korval
08-12-2008, 10:49 AM
perhaps might be something to request as a new feature

Request as a new feature?

We have been requesting this for years!!!


The GL3 standard provides a listing of required functionality (especially in texture formats), so the route forward for knowing what is supported seems rather clear. Now it is on the vendors to create correct drivers.

Longs Peak was going to give us real (implicit) checks. If you created a vertex array object with a certain format, you were assured that it would work in hardware. If the VAO failed to create, it wouldn't work in hardware.

GL "3.0" doesn't give us that. I want it back.

dorbie
08-12-2008, 11:26 AM
I've been working with OpenGL for a few years now.
I think the 2.1 API is basically solid. I would much prefer to see an OpenGL ES approach to OpenGL 3.0, i.e. strip out the parts of the API which suck. Immediate mode for one, since it is nearly impossible to optimise for; I've seen so many CAD programs try to push 500k vertices per frame with it and wonder why performance is so poor. Display lists are another; these must be a nightmare to maintain at the driver level. And other bizarre stuff such as glRectf really has no place in the API.

I'd also like to see frame buffer objects standardised rather than just being extensions, plus the ability to do multisampling with offscreen render targets. The current extensions to do this seem to be somewhat of a mess.

It would also be a novelty to have updated header files. The header files for Windows are now 12 years old? I mean, wtf.

Redact the "deprecated" stuff from the 3.0 spec and you have everything you asked for.

Lord crc
08-12-2008, 12:08 PM
Trying to find something good about all of this, at least I have no more reasons *not* to buy that 4870x2 now...

I haven't had time to read it all yet, but it seems that what was delivered as OpenGL 3.0 now, would have been nice had it been released as OpenGL 2.0.

Mars_999
08-12-2008, 12:19 PM
Can someone explain to me a few items here,

1. How do I get into this driver mode? Do you need to call a new version of wglCreateContext()? And for me I use SDL so I am going to assume that SDL will need to be updated to allow this?

3/ A 3.0_Forward_Compatible Context which is the "Real" version 3.0 with the legacy stuff removed.

2. What are you all talking about with glMatrixMode(), and what about using glTranslatef(), glRotatef(), etc.? Do we need to keep track of matrices ourselves now? Like DX?

LogicalError
08-12-2008, 12:24 PM
I haven't had time to read it all yet, but it seems that what was delivered as OpenGL 3.0 now, would have been nice had it been released as OpenGL 2.0.

Considering that OpenGL 2.0 was actually more like OpenGL 1.7, shouldn't OpenGL 3.0 be more something like OpenGL 1.8?

dor00
08-12-2008, 12:42 PM
I am wasting my time watching these OpenGL 3 topics.
I will stop here.
RIP OpenGL; thanks to Khronos/ARB who made it possible.

Timothy Farrar
08-12-2008, 12:45 PM
Can someone explain to me a few items here,

2. What are you all talking about with glMatrixMode(), and what about using glTranslatef(), glRotatef(), etc.? Do we need to keep track of matrices ourselves now? Like DX?

Looks like most (if not all) fixed function vertex stuff is going away in the future, including glMatrixMode(), etc. Best to read the spec, page 404 (section E.1, PROFILES AND DEPRECATED FEATURES OF OPENGL 3.0). So if you want to do matrix math you do it application-side, outside GL, then either pass the matrix in as a uniform, fetch it via vertex texture fetch, or have it sent in via a vertex buffer (i.e. if you had one matrix per vertex).

As for profiles, I think you may find this useful,
http://www.opengl.org/registry/specs/ARB/wgl_create_context.txt

HGLRC wglCreateContextAttribsARB(HDC hDC, HGLRC hshareContext, const int *attribList);

"If the WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB is set in WGL_CONTEXT_FLAGS_ARB, then a <forward-compatible> context will be created. Forward-compatible contexts are defined only for OpenGL versions 3.0 and later."

linenoise
08-12-2008, 12:49 PM
I would like to preface this post with "I know nothing."

It seems like the ARB is not doing a good job. How difficult would it be to fork like the xorg folks did when the oversight organization lost its mind?

Korval
08-12-2008, 01:05 PM
How difficult would it be to fork like the xorg folks did when the oversight organization lost its mind?

Um, very?

ATi, nVidia, and Intel are holding all the cards here. You can't just end-run around them like with Cairo.

They're the ones with the in-depth knowledge of how to write graphics drivers. They're the ones with the ability to provide certified drivers, which are the only kinds of drivers that Vista64 will accept (for good reason, mind you). Likely, Windows7 will not accept uncertified drivers of any kind, again for good reason.

We're beholden to these three companies for their interface to their hardware. Yes, even for Larrabee; I doubt a few hobbyists are going to come close to Intel's drivers in performance.

So no, we can't do an end-run around them.

PkK
08-12-2008, 01:13 PM
OpenGL 3.0?
You mean 2.2, that's more like it.
OpenGL 2.2?
You mean 1.8. We've been through "major version" once already.


Well, GL 2.0 delivered some major improvements, most importantly shaders. IMO using the number 2.0 was okay. However 3.0 isn't. It doesn't even give geometry shaders. 2.2 would have been the right number for this. A long time ago would have been the right release date.

Philipp

CrazyButcher
08-12-2008, 01:31 PM
I wonder why everyone is so obsessed with geometry shaders when it has been stated multiple times that the hardware just isn't good enough yet. Also think of the game consoles being fixed at a pseudo-SM3 level for a few years to come, and that will be the main relevance for games.
For that reason I'd rather see GL 3.0 possible on SM3 hardware than cut down to the smaller market of SM4+ only. However, I have no idea whether texture arrays and all that would be possible on SM3.

Korval
08-12-2008, 01:36 PM
I wonder why everyone is so obsessed with geometry shaders when it has been stated multiple times that the hardware just isn't good enough yet

Personally, I don't care. Transform feedback was the only part of geometry shaders that I ever cared about. It's the lack of uniform buffers (which is something that implementations could lie about and "implement" in non-conformant hardware) that honks me off.


However, I have no idea whether texture arrays and all that would be possible on SM3.

They aren't. The ARB picked the most meaningless D3D 10 features (seriously, I have no idea why anyone cares about integer textures) to give to GL "3.0", while missing the most important (uniform buffers).

MZ
08-12-2008, 01:51 PM
I have just dusted off an old document, titled "SIGGRAPH 2007 bof".

Page 9:

OpenGL 3 is a reality
- You all will get a t-shirt during the party!
* OpenGL 3 is a great increase in efficiency of an already great API
* OpenGL 3 provides a solid, consistent, well thought out basis for the future
* OpenGL 3 is a true industry effort with broad support
* The spec is almost ready
- Michael Gold and Jon Leech are the spec editors
* The ARB will finalize open issues end of August


I think this deserves a minute of silence.

.

.

.

.


Ok. How about splitting GL to OpenCADL and OpenIBHL?

(IBH = Id+Blizzard+Hobbyist)

Korval
08-12-2008, 02:03 PM
I think this deserves a minute of silence.

I think it deserves a laugh. It's incredibly funny to look back at how optimistic the ARB was about Longs Peak a year ago, and how pessimistic they are about it now.

LogicalError
08-12-2008, 02:06 PM
I think this deserves a minute of silence.

Too bad the ARB thought it deserved a CONE OF SILENCE.

TroutButter
08-12-2008, 02:07 PM
No object model? No geometry shaders + transform feedback? No constant (uniform) buffers?

WTF were they doing? Spanking to pr0n all day?

knackered
08-12-2008, 02:49 PM
I see Wikipedia's been updated already... it's the second result from typing "OpenGL 3" into Google, and it wouldn't look good to any prospective user of OpenGL. Their next search would undoubtedly be "Direct3d". SGI must be turning in their grave.
http://en.wikipedia.org/wiki/OpenGL#OpenGL_3.0

Timothy Farrar
08-12-2008, 02:51 PM
It's the lack of uniform buffers (which is something that implementations could lie about and "implement" in non-conformant hardware) that honks me off.

Why use uniform buffers when you have vertex texture fetch? Divergent uniforms (indexed uniform access) still cause problems on lots of hardware.

Chris Lux
08-12-2008, 02:52 PM
WTF were they doing? Spanking to pr0n all day?
five times a week... not to forget the great effort they put in ;).

Chris Lux
08-12-2008, 02:54 PM
Why use uniform buffers when you have vertex texture fetch? Divergent uniforms (indexed uniform access) still cause problems on lots of hardware.
because uniforms reside in a completely different (read: completely different performance) on-chip memory.

k_szczech
08-12-2008, 02:56 PM
WTF were they doing? Spanking to pr0n all day?
I have another theory:
LINK (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=243466#Post243466)

You can also check out my concept of an OpenGL logo in the previous post :p

Timothy Farrar
08-12-2008, 03:16 PM
because uniforms reside in a completely different (read: completely different performance) on-chip memory.

Yes, perhaps (it might not be the case on all hardware now), but regardless, fetching uniforms still uses main memory bandwidth, and divergent uniforms use much more of it. Of course I'm not fully sure about ATI hardware.

I agree that uniforms make sense in non-divergent cases.

Korval
08-12-2008, 03:39 PM
Why use uniform buffers when you have vertex texture fetch?

Because vertex texture fetch:

1: Is slow.
2: Requires putting data in textures, which is very slow and doesn't have cool mapping features.
3: Is slow.
4: Is not transparent. That is, implementations cannot "pretend" to support it by just using server-side memory and doing uploads as normal (that's what Longs Peak intended when they made it a core feature).
5: Is slow.

To me, the reason why we should have uniform buffers is:

1: To allow for a separation of compiled programs from the uniforms bound to those programs. Longs Peak was going to have this, but it was not done for "3.0".

2: To allow for "constant" values, values that are unchanged between different programs. Camera matrices, projection matrices, etc.

knackered
08-12-2008, 04:03 PM
Of course we had all this under the hood, so to speak, with the now-deprecated built-in uniforms, so we are in fact now worse off regarding this problem.
Why didn't they just give us the ability to specify our own named cross-shader uniforms using the same driver path as the built-ins? If the uniform upload happened each time a draw call is made for a particular shader, then so be it - but it would be an acceleration opportunity in the future.
Just plain short-sightedness no matter which way you look at it.

pudman
08-12-2008, 04:13 PM
@Timothy Farrar

Your blog post listing the new core features of OpenGL 3.0 is a nice collection of what most developers actually care about. I expect a similar list from the BOF tomorrow.

The list would be more useful if it compared the new "features" of 3.0 to DX10 (and 10.1 and 11).

The BIG problem everyone has is that the 3.0 feature set is not much different from 2.1 + extensions. There's just a slightly greater chance that ATI will put out a driver supporting those features.

Does this new 3.0 really change the way you're going to develop GL code? Does 3.0 resolve any "fast-path" issues?

If you are a developer on Windows what would be the deciding factors for choosing OpenGL over D3D?

I think the ARB should have kept evolving the existing OpenGL down the known path while really pushing its replacement in parallel. If they had done this then maybe we would have had this "3.0" last year (labeled 2.2), and a new 2.3 today along with the *real* 3.0 as well.

I've been on a project that had great visions of recreating itself faster, better, more X, more Y. It failed not just because it underestimated the time required to do a whole rewrite but because they didn't evolve the current version in parallel. They put the current version into "patch mode" while concentrating on the new version. (It didn't help that all the experienced team members had left the company by this point.)

My point is that this didn't have to be a failure. This "3.0" should have been out LONG ago, there shouldn't have been The Great Silence and the ARB should own up to these failures.

You honestly don't see what was lost with this "upgrade"?

Timothy Farrar
08-12-2008, 04:15 PM
Vertex texture fetch is definitely not slow on supported ATI hardware, as well as Geforce 8 series and beyond. On legacy hardware, yes, but not any more. Also correct me if I am wrong here, but uniform buffers couldn't be back-ported to the legacy hardware anyway (lack of hardware support).

Not that I am disagreeing with the usefulness of uniform buffers with respect to fixed non-divergently indexed constants.

Another thing to consider here is what your performance bottlenecks are. Are you and others actually bottlenecked by your uniform usage? Say you had uniform buffers, would your application run any faster?

Korval
08-12-2008, 04:26 PM
Vertex texture fetch is definitely not slow on supported ATI hardware, as well as Geforce 8 series and beyond.

It's slower than the 1 cycle it takes to use an actual uniform; therefore, it's slow.


lack of hardware support

Sure they can. The implementation lies. They do it all the time.

You create a buffer object for the purpose of storing uniforms (there's a special "hint" for that). The implementation, instead of allocating video memory, allocates system memory. You upload to it. The implementation then uses that system memory buffer to update the actual uniforms when shaders using that buffer are rendered.

It's dead simple.


Another thing to consider here is what your performance bottlenecks are.

Um, no, I don't. The API is clearly wasting time with me constantly updating uniforms that, as far as my code is concerned, haven't changed. Whether it is a significant weight on overall throughput isn't the issue; the issue is that my code has to do the instancing work itself, which wastes both my time and the API's.

And you don't need to profile things when "smart" implementations like nVidia's recompile your shader because you changed a uniform from 0.0 to 0.5. Uniform buffers and program instancing would cut out all of that nonsense.

Mars_999
08-12-2008, 04:29 PM
Can someone explain to me a few items here,

2. What are you all talking about with glMatrixMode(), and what about using glTranslatef(), glRotatef(), etc.? Do we need to keep track of matrices ourselves now? Like DX?

Looks like most (if not all) fixed function vertex stuff is going away in the future, including glMatrixMode(), etc. Best to read the spec, page 404 (section E.1, PROFILES AND DEPRECATED FEATURES OF OPENGL 3.0). So if you want to do matrix math you do it application-side, outside GL, then either pass the matrix in as a uniform, fetch it via vertex texture fetch, or have it sent in via a vertex buffer (i.e. if you had one matrix per vertex).

As for profiles, I think you may find this useful,
http://www.opengl.org/registry/specs/ARB/wgl_create_context.txt

HGLRC wglCreateContextAttribsARB(HDC hDC, HGLRC hshareContext, const int *attribList);

"If the WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB is set in WGL_CONTEXT_FLAGS_ARB, then a <forward-compatible> context will be created. Forward-compatible contexts are defined only for OpenGL versions 3.0 and later."

So let me get this cleared up: basically GL 3.0 will be like DX in this regard, as you will have to keep track of and do the matrix maths yourself.

MZ
08-12-2008, 04:40 PM
(seriously, I have no idea why anyone cares about integer textures)
I do care. If you try to use an ordinary texture to store integer data, you are entering a gray area. Say you are using an 8-bit-per-component texture format. To unpack the sampled value to an integer range in the shader, would you multiply it by 255 or by 256? If the fixed-point-to-floating-point conversion rules from the spec were your guidance, you should use the former. In practice, you get the wrong result in your shader, and it's the latter that actually works.

knackered
08-12-2008, 04:45 PM
I don't understand why you're having a hard time accepting this, mars. Your shader might not even use matrices.

Korval
08-12-2008, 04:49 PM
If you try to use an ordinary texture to store integer data

See right there? That's my issue: why would you want to store integer data in a texture?

Unless you're doing GPGPU stuff (in which case, a graphics library shouldn't care about your needs).

Timothy Farrar
08-12-2008, 04:51 PM
The list would be more useful if it compared the new "features" of 3.0 to DX10 (and 10.1 and 11).


I'm still working on a list of what is missing, which isn't much when you think about DX10. As for DX11, its spec isn't even finished yet.


The BIG problem everyone has is that the 3.0 feature set is not much different from 2.1 + extensions. There's just a slightly greater chance that ATI will put out a driver supporting those features.

Exactly, now that those 2.1 + extensions have been ratified as core, we can finally see driver support from other vendors. This alone is very important.


Does this new 3.0 really change the way you're going to develop GL code? Does 3.0 resolve any "fast-path" issues?

Sure it does; we finally have cross-platform support for a majority of the current GPU features. There are a tremendous number of things made possible by unified shaders and other related functionality. As for the fast path, I'm personally targeting DX10 level and up, and IMO the fast path on that hardware is very well defined if you are keeping up with the hardware design; simply having hardware support in the API is enough for me. Regardless of API (DX, GL, PSGL, GCM) you are always going to have to profile on the target hardware to know performance.


If you are a developer on Windows what would be the deciding factors for choosing OpenGL over D3D?

Think the fact that GL3 provides DX10 level features on XP (assuming ATI and Intel build GL3 drivers for XP), would be enough. For smaller developers, having a larger market share (ie add in Apple as well) could be very important.


My point is that this didn't have to be a failure. This "3.0" should have been out LONG ago, there shouldn't have been The Great Silence and the ARB should own up to these failures.

You honestly don't see what was lost with this "upgrade"?

Personally I see no reason to complain about that which cannot be changed and just is. Be happy for what you have, and do the best to use it to your advantage.

MZ
08-12-2008, 05:11 PM
See right there? That's my issue: why would you want to store integer data in a texture?

Unless you're doing GPGPU stuff (in which case, a graphics library shouldn't care about your needs). I'm using a texture to store indices, which then I use to access certain indexed resource in shader. Sort of permutation. And I'm using it to render shadows, not GPGPU stuff.

Korval
08-12-2008, 05:31 PM
Exactly, now that those 2.1 + extensions have been ratified as core, we can finally see driver support from other vendors. This alone is very important.

No, it is not.

Intel won't be supporting jack. They've never had good OpenGL support, and GL "3.0" isn't going to change that. Reliance on Intel's GL implementation is flat-out stupid.

ATi may claim support for OpenGL, but it is a minefield. You never know when an ATi driver will crash or choke on some shader. Worse still, you never know when it will choke on some shader after you ship.


Sure it does; we finally have cross-platform support for a majority of the current GPU features.

No, those are features. They don't resolve fast-path issues.


I'm personally targeting DX10 level and up

Well golly gee wilikers, isn't that nice for you. The rest of us recognize that there are millions of DX9 cards out there that need support too.


IMO the fast path on that hardware is very well defined if you are keeping up with the hardware design

Really? Then does ATi's hardware support normalized unsigned shorts as vertex attributes? Does nVidia's? How many attributes can be separated in different vertex buffers on their hardware? What "hardware design" should we be keeping up with to answer these questions?


Think the fact that GL3 provides DX10 level features on XP (assuming ATI and Intel build GL3 drivers for XP), would be enough.

Vista marketshare is only going one way: up. That fact might have been useful a year ago, or two years ago. But that ship has sailed.


For smaller developers, having a larger market share (ie add in Apple as well) could be very important.

But that would be much more expensive, which smaller developers can't afford. They're doing good to test on XP and Vista. You'd be asking them to test on XP, Vista, and MacOS X. Not to mention having to develop for MacOS X to begin with.


Be happy for what you have, and do the best to use it to your advantage.

That's like saying that it's OK that you were promised a steak dinner and are given dog poo. At least you aren't starving; after all, you've got that nice dog poo.

pudman
08-12-2008, 06:27 PM
If you are a developer on Windows what would be the deciding factors for choosing OpenGL over D3D?
Think the fact that GL3 provides DX10 level features on XP (assuming ATI and Intel build GL3 drivers for XP), would be enough. For smaller developers, having a larger market share (ie add in Apple as well) could be very important.

Let me rephrase: is cross-platform compatibility the ONLY reason you stay with OpenGL? Can you enumerate the features in OpenGL (we're talking programming features AND hardware-supported features) that you'd be without if you switched to D3D?

OpenGL will always be useful for its cross platform nature, there's no disputing that. Regardless of whether the ARB continues to drag its heels, we multiplatform folk have no alternative. Surely you can convince me that I'm missing something?

Simon Arbon
08-12-2008, 06:41 PM
"ByteCode GLSL, " - If you really get into the internals of both the order and (especially) newer hardware you will see that a common byte code for all vendors is a bad idea because hardware is way too different. All vendors would still have to re-compile and re-optimize into the native binary opcodes. So all you would be saving is parsing strings into tokens which really isn't much of a savings.
You're forgetting the #1 advantage of the token approach: updated drivers will not break your shader compiles. Currently, nVidia can fix a bug in its parsing that renders a previously parsable program illegal, or a program can be illegal on ATI but legal on nVidia while using just basic features with slightly out-of-spec syntax. This problem, which is a major one, SIMPLY DOES NOT EXIST on D3D, and is the major reason why D3D has you compile the shaders into tokens first.
That's why I put Bytecode and Binaries as separate items; I want BOTH.
ByteCode GLSL would just parse the strings into a more compact format that is still a high-level language (something like P-code would be ideal).
As ector said, the main advantage is that you don't get unexpected syntax errors when the customer recompiles it with a different driver.
Other advantages are:
2/ Some optimisations can be done at the tokenisation stage, such as dead code removal and combining variables with different scopes into one register.
3/ The bytecode is smaller and faster to load (especially for those that have hundreds of shaders)
4/ Those of us who prefer languages that are not 'C' can write our own front-end in our language of choice.
5/ The hardware vendors only need to write the back-end compiler.
6/ The load or run-time compilation will be slightly faster.
7/ As the bytecode has been pre-optimised you will get a better assessment of which hardware it will run on.
8/ The source code is not distributed so those who like trade secrets can make it harder for others to see what they did.


Due to all the different hardware, shaders in the form of pre-compiled binaries really only make sense in the form of caching on a local machine after compile
That's exactly what I want it for: I want to get the driver to pre-compile the shaders at installation and not have to recompile them until the hardware or driver changes.

Korval
08-12-2008, 06:45 PM
As ector said, the main advantage is that you don't get unexpected syntax errors when the customer recompiles it with a different driver.

Unless the bytecode interpreter has a bug in it. Granted, this is less likely than making a parser bug, but it can still happen.


4/ Those of us who prefer languages that are not 'C' can write our own front-end in our language of choice.

Technically, there's nothing stopping you from doing that now. You'd just be writing glslang code rather than assembly.


8/ The source code is not distributed so those who like trade secrets can make it harder for others to see what they did.

Since the bytecode format would have to be very public, it would only make it slightly harder.

Ilian Dinev
08-12-2008, 07:05 PM
To add some facts about Intel's GPUs:
- do not support textures larger than 512x512
- do not support anything in DX or OpenGL.
- crash randomly even on the simplest, cleanest tutorial code
- cull triangles randomly, switch to wireframe mode randomly, generate bogus triangles randomly
- not even one of the hundreds of queries to DX and OpenGL caps returns valid results. I.e. it states texture size up to 2048; as you continue verifying query results, things become even more horrifying.
Overall, Intel's cards are absolutely useless for any accelerated 3D or even 2D. Only GDI works - by a software fallback from MS, definitely.
The millions of Intel IGPs sold do not show whether that miserable ancient silicon stays enabled. The price difference when buying a mobo is $4; people that tried to run games or CAD on it definitely saw they need a real GPU for $20+.

I tried to add support for Intel cards to my commercial software [just ortho 2D stuff, using SM1 shaders or FF in DX9 or OGL] - but it proved impossible.




ATi are quite silent about GL3, are missing from the GL3 credits that I saw, and have no extensions in the spec. So it's safe to think they'll completely ignore it. Also, it's funny how the "extensions, promoted to core" seem to be largely vendor-specific and thus could be guaranteed to be missing in most cases. Apple's VARs and the nv-half-float vtx-attrib, for instance. It kind of looks like GL3 is bound only to nVidia+Apple. So much for a cross-platform API.


OpenGL2.1's driver model (FIFOs) on WinXP is great, imho. So, how about providing 2 GL2 renderers for WinXP users (one SM3 model, another with SM4), and a DX10 one for Vista? Cg will be invaluable in such a model. If game/gui features in your software can be wrapped like that, you'll be giving users a lot of freedom on using their favorite OS and gpu.

Simon Arbon
08-12-2008, 07:13 PM
"Tesselator" - Lack of current cross platform hardware support, no reason to think about this now.If its going to be in DX11 then we should get an extension for it when the DX11 driver is released, if not before.


"Post-rasterisation instancing (for multipass deferred shaders)" - What? I think you need to describe what you are looking for here, any why you cannot do this type of thing with current GL3 functionalityAt the moment i do my first pass normally and then do several passes using a screen-aligned quad for post-processing effects like motion blur.
This extension would allow me to specify several fragment shaders that are to be run as seperate passes without needing to setup screen-aligned quads or run the vertex processor every time.
The 2nd and following shaders would simply be run for each pixel of the framebuffer (with the framebuffer/G-Buffer data being prefetched 'in' varyings instead of requiring a texture lookup)
This is purely a way to make deferred shading more efficient and more intuitive.


"Updatable resolution LOD textures" - What do you mean here? I have a 9 level 256x256 mipmap texture loaded for a background object.
It comes twice as close to the camera so i now need a 10 level mipmap.
I want to be able to stream the new 512x512 texture level onto the card and tell the card firmware to combine it with the existing levels to create a new MipMap that then replaces the old one.
And also remove a mipmap level when objects move away again.

TroutButter
08-12-2008, 07:18 PM
I agree Intel's GPUs suck and I think they always will, but the Intel GPU in my Mac mini has been working fairly well so far. OSX uses OpenGL in the desktop rendering, which works fine, and I can play Quake 3 on it and it works perfectly. Quake 3's graphics engine isn't some really basic code ripped from a tutorial either. But this may be due to Apple going into the drivers and fixing Intel's incompetence with the graphics themselves, I'm not sure.

I also agree that AMD/ATI needs to pull their heads out of their asses and make a GL driver worth a damn.

Korval
08-12-2008, 07:27 PM
ATi are quite silent about GL3, are missing from the GL3 credits that I saw, and have no extensions in the spec. So it's safe to think they'll completely ignore it.

Maybe. Or, maybe not. (http://www.starcraftwire.net/n/927/blizzard-and-amd-join-hands)

Blizzard, being a MacOS developer, uses OpenGL. They may have a D3D rendering mode for Windows, but they will be using OpenGL. If ATi/AMD is pledging their support, then at least that means that they'll be taking GL "3.0" seriously, to some degree.


Apple's VARs and the nv-half-float vtx-attrib, for instance.

Um, what? VAO (not VAR) can be entirely server-side (aka, a lie); there isn't and never will be a hardware-equivalent. What it does is allow the implementation to do the necessary vertex format checks once, instead of every time you draw with that vertex format.

And the half-float stuff is probably supportable in ATi hardware too.

I would point out that, while ATi may not have written any of the specs, they still voted for it.


This extension would allow me to specify several fragment shaders that are to be run as separate passes without needing to set up screen-aligned quads or run the vertex processor every time.

You really think that the 4 vertices you use for your screen-aligned quad take up any real time? I mean seriously now.


I want to be able to stream the new 512x512 texture level onto the card and tell the card firmware to combine it with the existing levels to create a new MipMap that then replaces the old one.

Yeah, you can forget that.

Mars_999
08-12-2008, 07:28 PM
I don't understand why you're having a hard time accepting this, mars. Your shader might not even use matrices.

I'm not following you here. My shader doesn't use matrices as of now, but that isn't what I am referring to; what I am referring to is this:



glTranslatef();
glRotatef();

//now new way with GL3.0?
Matrix4x4 translate;
Matrix4x4 rotate;
Matrix4x4 result;
result = translate * rotate;
glMultMatrixf(result.matrix);



That is what I am getting at... This is how DX does things: in DX9 you had some GL-type functions for moving objects around and such, but the main idea was to do the latter in the above code. If I am understanding correctly this will be the new way in GL 3.0... I don't care if it is, I just wanted to clear it up...

Korval
08-12-2008, 07:35 PM
No, that is not how you do things. This is:



//Full GL "3.0":
glTranslatef();
glRotatef();

//GL "3.0" with deprecated features removed.
Matrix4x4 myMat = //Get some modelviewprojection matrix.
glUniform4fv(<Insert your matrix uniform here>, 1, &myMat[0]);
glUniform4fv(<Insert your matrix uniform here> + 1, 1, &myMat[1]);
glUniform4fv(<Insert your matrix uniform here> + 2, 1, &myMat[2]);
glUniform4fv(<Insert your matrix uniform here> + 3, 1, &myMat[3]);


That is, you have to do everything yourself. You must create a uniform in your glslang shader to represent the matrix in the form you want it in. You must load that uniform yourself for each program that uses it. You must change that uniform in each appropriate program if its value changes. And so on.

There are no built-in uniforms anymore at all.

And there's still the "3.0" full context if you don't want to get rid of the cruft.

santyhamer
08-12-2008, 10:01 PM
First, I must say I'm pretty disappointed about that spec. You promised a lot, but I have the impression you really gave us an "OpenGL 2.2". On the other hand, I'm still waiting for a method to manage shaders like DX does (effects)... We know about glFX... but what's its real state today?

Second, the ARB's silence was not good at all. The lack of news and information, and the uncertainty, only made DX10 stronger and stronger. For one year I got the impression that OpenGL was completely abandoned. That's not any good.

Here, we moved all our PC pipeline to DX10... just because Microsoft keeps us informed, launches an SDK revision almost every 3 months, is more productive, and is easier due to the lack of "capabilities" and the unified model (DX10.1 is a divergence in the force, though!)... For the other systems (MacOSX, Linux) we have no choice... but it's bad because we lack some functionality (multithreading pain, non-standardized geometry shaders, too many paths to take, sRGB blending nightmare, etc.).

OGL lost all its innovation... When DX3 appeared OGL was the king... it had a lot of things that DX lacked: a better/simpler interface, more support, a solid community, etc... then it suddenly started to lose ground... and now the situation is the inverse. Today, DX10 is much more innovative, better designed, supported and manageable than OGL... just see the number of commercial engines for PC Windows games using OGL and the number using DX... clearly OGL is losing the battle... a bit more each day that passes.



If the alpha test is immutable, is the alpha ref value also?

???? Alpha test should just NOT "be". I mean it must be killed without any pity, as DX10 did, so it's completely managed in the pixel shader.



..We decided on a formal mechanism to remove functionality from the API...deprecation

Ok, but I think the deprecation step just adds complexity to OGL3 programming (more tests to perform, more paths to follow, more IFs, etc...). I think it's better to make a shock change like DX10 did: change radically and forget the 1992 model! I think killing the current obsolete model and starting a new, modern one from zero would be the best.

Btw, you do know that DX11 is going to be presented this year, don't you? I would like to know what the ARB/Khronos is going to do with the hull, domain, tessellator and compute shaders... also how you're going to fight the new (old, really) and very flexible software render model of Larrabee, and how you are going to define the data exchange for OpenCL (CL not GL, read well :p).

returnofjdub
08-12-2008, 10:29 PM
I think killing the current obsolete model and starting a new, modern one from zero would be the best.

You and everyone else here. We told them this is what we wanted; over a year ago they said that's what they were going to do; then they were silent for a year; then in January 2008 they decided to disregard everything they'd told us and everything we were expecting and asked for; and they waited until August 2008 to tell us that they had totally scrapped that plan, knowing full well that we were still expecting a clean, modern API, and gave us an incremental update that's even messier than before.

Khronos Group, give us one good reason to believe a single word that comes out of your mouths about OpenGL's future.

Korval
08-12-2008, 10:45 PM
Btw, you do know that DX11 is going to be presented this year, don't you? I would like to know what you're going to do with the hull, domain, tessellator and compute shaders...

What the ARB is good at: shipping stuff a day late and a dollar short. Don't expect to see those features in GL for the next year, minimum.


Let me rephrase: is cross-platform compatibility the ONLY reason you stay with OpenGL? Can you enumerate the features in OpenGL (we're talking programming features AND hardware-supported features) that you'd be without if you switched to D3D?

It's funny how the pro-GL "3.0" crowd completely ignored this most important of questions.

Not one person has stated any reason for using OpenGL other than its cross-platform compatibility. In short, if more people could abandon OpenGL, they would.

Good job, ARB: you've succeeded in making an API that should only ever be used if you have no other alternative.

I just want to see how much more epically the ARB can fail. I want to see GL 3.1 with 5 profiles, 2 gigantic deprecated sections, and a spec so gigantic and convoluted that God himself could not implement it.

Oh, and of course, nothing removed from the core.

MagicWolf
08-12-2008, 11:19 PM
Please explain to me, what does "deprecation" mean?

Suppose I have a program created using OpenGL, and this program uses "deprecated features". Does that mean that when this program is started on a computer with OpenGL v3 or higher, it will not be able to work?! I.e. if there is a driver with OpenGL v3, will it not provide compatibility with previous OpenGL versions? In that case, many programs which for whatever reasons cannot be changed will probably not run in the future?!

Zengar
08-12-2008, 11:25 PM
Please explain to me, what does "deprecation" mean?

Suppose I have a program created using OpenGL, and this program uses "deprecated features". Does that mean that when this program is started on a computer with OpenGL v3 or higher, it will not be able to work?! I.e. if there is a driver with OpenGL v3, will it not provide compatibility with previous OpenGL versions? In that case, many programs which for whatever reasons cannot be changed will probably not run in the future?!

Panic, panic, panic... it just means these features should not be used anymore and will be removed in future versions. You can also create a GL3.0 context that does not support the deprecated features. IT WILL NOT AFFECT CURRENT PROGRAMS, as they are compiled for an older GL version which will still be supported by the driver!

Michael Gold
08-13-2008, 12:16 AM
A key requirement of the deprecation model is that you must always opt-in to an incompatible version. Enter WGL_ARB_create_context.

The legacy wglCreateContext is capped at 2.1, so all existing apps will continue to run as long as vendors support a 2.1 driver. I imagine this support won't go away anytime soon, as 100% of existing apps require the existing drivers.

In order to create a 3.0 or newer context, you must use wglCreateContextAttribsARB(). This allows you to specify a required level of compatibility. For example, if 3.1 remains compatible with 3.0 but 3.2 removes deprecated functionality, a create request of 3.0 could give you a 3.0 or 3.1 context, but never 3.2.

MagicWolf
08-13-2008, 12:24 AM
Panic, panic, panic... it just means these features should not be used anymore and will be removed in future versions. You can also create a GL3.0 context that does not support the deprecated features. IT WILL NOT AFFECT CURRENT PROGRAMS, as they are compiled for an older GL version which will still be supported by the driver!

Thanks for the answer. If I have understood you correctly, it is possible to create a context with "deprecated features" and a context without them. How can this be done? Which functions should be called, and in what order? Where is this described?

Zengar
08-13-2008, 12:28 AM
WGL_ARB_create_context, as Michael wrote in the previous post.

Still, wait till the 3.0 drivers are available; there will probably be some tutorials and docs too.

MagicWolf
08-13-2008, 12:44 AM
Many thanks, I now understand how to choose the correct context. Only one thing remains unclear. If I choose a version 3.0 context, are all "deprecated features" unsupported, or are some of them still supported? Or should I simply understand that if I have chosen a version 3.0 context, the "deprecated features" should not be used, regardless of whether or not they are supported? And will 3.0 functions be supported in a 2.1 context?

Zengar
08-13-2008, 01:23 AM
Ok, once again:

- if you create a normal 3.0 context, all features are supported (even deprecated ones)
- if you create a forward-3.0 context, deprecated features are not supported


I don't understand the "will 3.0 functions be supported in a 2.1 context?" question. If your implementation only supports 2.1 but not 3.0 then it won't support 3.0, but this should be clear?

Michael Gold
08-13-2008, 01:27 AM
OK i need to clear up the confusion on the word "deprecated".

Version 3.0 is fully backward compatible with all versions since 1.0. However significant legacy functionality has been deprecated, i.e. marked for removal in the future. This allows you to start using all the new features today, without having to rewrite a single line of existing code. But it serves as fair warning, the next version may remove some or all (or none) of the deprecated features, depending on the mood of the ARB at that time. So if you want to use whatever new features come in later versions, you have advance notice to start rewriting your legacy code now.

To help prepare for future incompatibility there is a "forward compatible" (or "lite") bit you can set during context creation, and that effectively disables the deprecated functionality. This is not intended as a delivery vehicle for your application (see below), but simply for testing that your code is "clean" of deprecated functionality.

Here's why I believe you should avoid shipping code which requires a lite context. By their very nature they are unlikely to maintain any degree of compatibility from one release to the next. If a vendor is up to version 3.2, it might be burdensome to support all of 3.0-lite, 3.1-lite, and 3.2-lite - hence maybe only the most recent version will be supported for that driver release.

In theory one could see a perf gain in a lite context, but in practice... well, we'll have to see.

MagicWolf
08-13-2008, 01:43 AM
I don't understand the "will 3.0 functions be supported in a 2.1 context?" question. If your implementation only supports 2.1 but not 3.0 then it won't support 3.0, but this should be clear?

Suppose I have a driver with support for OpenGL version 3.
Am I right or not:
- If I call wglCreateContext, by default the OpenGL version is always 2.1.
- If I call wglCreateContextAttribsARB, I can use the given OpenGL version.

Ilian Dinev
08-13-2008, 01:53 AM
Version 3.0 is fully backward compatible with all versions since 1.0. However significant legacy functionality has been deprecated, i.e. marked for removal in the future.

Michael, the glLineWidth(width=2..4) and glPolygonMode(GL_FRONT,GL_LINE) have been deprecated in GL3. The same features were dropped in DX10 afaik, does this mean that modern cards don't support that functionality natively? With the limited benchmarking I did on GF7x00 and GF8x00, there was just a 2x drop in performance (which is still several times better than the alternatives). Or maybe it's unsupported only on ATi cards (I have vague memories of huge performance drops on their older hardware). Posts by informed CAD programmers and users suggest/state that wide lines ARE accelerated.

PkK
08-13-2008, 02:35 AM
To add some facts about Intel's GPUs:
- do not support textures larger than 512x512
- do not support anything in DX or OpenGL.
- crash randomly even on the simplest, cleanest tutorial code
- cull triangles randomly, switch to wireframe mode randomly, generate bogus triangles randomly
- not even one of the hundreds of queries to DX and OpenGL caps returns valid results. I.e. it states texture size up to 2048; as you continue verifying query results, things become even more horrifying.
Overall, Intel's cards are absolutely useless for any accelerated 3D or even 2D. Only GDI works - by a software fallback from MS, definitely.


No. Intel cards are okay. Maybe Intel's Windows drivers suck (I don't know since I don't use Windows often and then even less often with an Intel card). Intel cards do well with good drivers.
I am not running the latest drivers, but I can't really complain about missing features or instability on my GNU/Linux system. Just to give examples: GLSL 1.20 is there. Max texture size is 2048x2048 and they work. The development version of the drivers supports GL 2.1.

Philipp

P.S.: I see the thread subject clamped to " The ARB announced OpenGL 3.0 and GLSL 1.30 tod". "Tod" is German for death.

knackered
08-13-2008, 02:52 AM
apparently 3.1 will be released in 6 months now, due to the outrage:-
http://www.theregister.co.uk/2008/08/13/opengl_firestorm/

bobvodka
08-13-2008, 02:58 AM
yay! Links to my OpenGL thread.. that's /. and theregister linking to it now; do I get geek points for that?



Graphics and games engineers angered by the delayed OpenGL spec and threatening to adopt Microsoft's DirectX have been asked to hold out a little longer for promised changes.


They are having a laugh, right?
"Hey, you waited two years and didn't get what you want, but wait a little longer and some more will change... [although you still won't get what you wanted]".

Are these people living in some sort of dream world?
Why on earth, having waited 2 years for nothing, would we wait again now?

I have to thank the ARB for giving me a good laugh this morning...

mbien
08-13-2008, 03:09 AM
I hope someone sets up a webcam in the GL BOF, I am really interested in the Q&A at the end of the BOF ;).

Carl Jokl
08-13-2008, 03:31 AM
I must say that I like the idea of OpenGL as a platform, and as a hobbyist I am not bound to have to deliver anything in a given time frame. I may investigate DirectX, as I have already been doing before this announcement, but I still have not given up hope that OpenGL could make a comeback.

I do somewhat understand the problems here. OpenGL is used primarily by the CAD industry and other non-game sectors. There is a risk in overhauling OpenGL to suit the game development community, who for the most part aren't even using it anyway, at the cost of causing problems for the CAD community who are using it. There may be an element of catch-22 in this. But consider the scenario, which might not be far-fetched: OpenGL is overhauled just as the game developers would like. However, there is little movement by the gaming industry to use OpenGL. Meanwhile the CAD industry is alienated. The CAD industry then stops using OpenGL except where it has to. The potential is that instead of attracting people back to OpenGL, those who are currently using it are lost instead. That said, if the CAD industry were to move to a different API it would be pretty contradictory to their saying that they can't cope with the API changing rapidly.

I think, much as it has been said, that if people cannot migrate from legacy OpenGL then they should just use that and not the latest version. It is futile to stand in the way of progress forever. Resistance is futile.

I get the impression from what Carmack said that he too likes the idea of OpenGL as a cross-platform API and is disappointed in how things have turned out.

Xmas
08-13-2008, 03:47 AM
ATi are quite silent about GL3, are missing from the GL3 credits that I saw
Read them again (Hint: search for "AMD" in the spec).


I have a 9 level 256x256 mipmap texture loaded for a background object.
It comes twice as close to the camera so i now need a 10 level mipmap.
I want to be able to stream the new 512x512 texture level onto the card and tell the card firmware to combine it with the existing levels to create a new MipMap that then replaces the old one.
And also remove a mipmap level when objects move away again.

You can actually do that already, using TEXTURE_BASE_LEVEL and TEXTURE_MAX_LEVEL.
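For illustration, a minimal sketch of that approach, assuming an RGBA8 texture and a hypothetical fineData pointer holding the streamed-in 512x512 level:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

// Define the full 10-level 512x512 chain up front; level 0 gets no data yet.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// ...levels 1..9 defined the same way, with the 256x256-and-smaller data...

// Only levels 1..9 are resident, so restrict sampling to those.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 9);

// When the object comes closer: stream the 512x512 data into level 0 and expose it.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, fineData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);

// When it moves away again, simply raise BASE_LEVEL; no reallocation is needed.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 1);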



//GL "3.0" with deprecated features removed.
Matrix4x4 myMat = //Get some modelviewprojection matrix.
glUniform4fv(<Insert your matrix uniform here>, 1, &myMat[0]);
glUniform4fv(<Insert your matrix uniform here> + 1, 1, &myMat[1]);
glUniform4fv(<Insert your matrix uniform here> + 2, 1, &myMat[2]);
glUniform4fv(<Insert your matrix uniform here> + 3, 1, &myMat[3]);

Certainly you should use glUniformMatrix4fv instead.
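For illustration, a sketch of the same upload in one call (the uniform name and program handle are hypothetical; GL_FALSE means the matrix is not transposed on upload):

GLint loc = glGetUniformLocation(program, "mvpMatrix");   // hypothetical names
glUniformMatrix4fv(loc, 1, GL_FALSE, (const GLfloat *)&myMat[0]);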

skynet
08-13-2008, 04:15 AM
If a vendor is up to version 3.2, it might be burdensome to support all of 3.0-lite, 3.1-lite, and 3.2-lite - hence maybe only the most recent version will be supported for that driver release.

This reads like "the deprecation model is an additional burden to the driver developers". Doesn't sound like the intended purpose.


In theory one could see a perf gain in a lite context, but in practice... well, we'll have to see.

A guaranteed performance boost is the ONLY reason I would switch to a lite context now. Why should I go through the pain and rewrite my rendering code, if it's

a) not easier to code, because the API hasn't changed appropriately and doesn't offer enough to compensate for the features we lose (uniform buffers, state objects, new object model)
b) not going to be faster in the end?

Then I would just stick to full 3.0 and keep my beloved matrix-stack and glPushAttrib/glPopAttrib calls.

Of course, LP would have forced me to rewrite most of the rendering code also, but I would have been rewarded with
a) easier coding experience
b) faster execution
c) more stable drivers
d) wider, more reliable feature support


I really hope none of the vendors intends to produce a full-3.0 driver. If they ever want to get rid of the cruft, they had better draw a line now and start to ONLY offer "forward compatible 3.0" (i.e. lite). Maybe, if more extensions (or 3.x versions) follow, lite-3.0 may become attractive enough.

Lord crc
08-13-2008, 04:32 AM
A guaranteed performance boost is the ONLY reason I would switch to a lite context now.

Way I see it, the purpose of the lite context is to make it easier for us devs to find the One True Path of Fastness. Once you have found it, it should be the same for the full context.

Of course, since the driver has no guarantee that you won't deviate from the OTPoF it probably can't optimize as much. Which is why I think it's rather silly that they didn't introduce at least one additional profile for 3.0 (the promised lean-and-mean). This would also make it more likely, imho, that ATI could get at least some proper opengl 3.0 drivers (even though they're not full).

bobvodka
08-13-2008, 04:56 AM
I'll tell you something which makes all this a bigger slap in the face; remember before Vista was released, when it became apparent that OpenGL was going to get shafted and they asked us all to go to bat for them to stop it from happening?

The community did so and the problem was resolved; yet a few years on yet again we are being shafted by them.

Makes you wonder why we bothered...

pudman
08-13-2008, 04:57 AM
Here's why I believe you should avoid shipping code which requires a lite context.

You guys are funny. Trying to ameliorate our concerns you first tell us "Hey look at this cool deprecation model! You can start coding right away in the forward (lite) context!" There was even mention that the lite context would allow IHVs to squeeze more optimization out of the code because it wouldn't have to deal with any obsolete features.

Now you're saying "Um, don't depend on the forward context. It may change from version to version." Talk about version hell.

One of your stated goals was giving notice to certain developers that various GL features would be going away. Couldn't you have simply *told* them that and saved us all the trouble of this 3.0? And seriously, what's wrong with the idea that they'd stay at 2.1? It's not like they were going to refactor their code to bring it up to 3.0 compliance. You just spent a lot of time telling us that they don't refactor their code that often, if ever.

knackered
08-13-2008, 07:37 AM
it's probably time for us all to stop sulking - and I include me in that.

Michael Gold
08-13-2008, 08:05 AM
Michael, the glLineWidth(width=2..4) and glPolygonMode(GL_FRONT,GL_LINE) have been deprecated in GL3. The same features were dropped in DX10 afaik, does this mean that modern cards don't support that functionality natively?
I can't speak for other vendors. All NVIDIA hardware since NV10 has included support for these features and the latest hardware is no different in that respect. I can't say if/when this may change.

If/when these features are actually removed from the core API, some vendors may choose to continue support via "core extensions" wherein the API for these features remains unchanged but you'll need to check for the extension before using it. Since you must opt-in to a version which removes the functionality, that gives you the opportunity to add code which checks for the extension. And of course, existing binaries will continue to function as long as 2.1 is supported.

The idea of an extension with undecorated names is likewise being applied to several new ARB extensions, so that an application can target either 2.1 or 3.0 with shared source code. For example, ARB_vertex_array_object introduces "glBindVertexArray" rather than "glBindVertexArrayARB", since the behavior exactly matches the 3.0 functionality.
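For illustration, a minimal sketch of sharing one code path between a 3.0 context and a 2.1 driver exposing ARB_vertex_array_object (contextIsAtLeast and hasExtension are hypothetical helpers; on Windows the entry points still come from wglGetProcAddress):

GLuint vao = 0;
if (contextIsAtLeast(3, 0) || hasExtension("GL_ARB_vertex_array_object")) {
    glGenVertexArrays(1, &vao);    // same undecorated names in both cases
    glBindVertexArray(vao);
    // ...set up vertex attribute pointers once; later, just rebind vao and draw...
}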

Michael Gold
08-13-2008, 08:47 AM
I can't speak to other vendors plans for forward-compatible contexts. I'm not even speaking for my employer at this time. I'm only saying you should talk to the vendors whose card you want to support before shipping a product which depends on a lite context. I say this only because for any given GL version, support for the full context is required but the forward-compatible context is not. For all I know vendor X may plan to write a super-fast lite driver for each GL version and will contradict what I previously posted. As I stated the primary purpose of the lite context (to me, anyway) is a developer tool for future-proofing your code base.

PaladinOfKaos
08-13-2008, 08:57 AM
So we're supposed to develop on the lite context, and then switch to a full context when we have a more-or-less stable graphics engine.

It sounds good in theory, but how do we know a driver won't do something different in a lite context or full context that causes issues during the switch?

Michael Gold
08-13-2008, 08:58 AM
Your test burden is no less than ours. :)

PaladinOfKaos
08-13-2008, 09:07 AM
mmm, true enough. I wasn't trying to be argumentative (I know it's hard to tell with the general mood around here). I think we all deserve a little skepticism about OpenGL drivers these days, especially on Windows (I've always had great luck with NVIDIA's drivers on Linux (except on KDE4, but that's a whole different can of worms)).

My main concern would be accidentally using deprecated functionality while fixing an issue in full context mode, which could cause an unintended breakage later on. Thinking about it more, that could be avoided by verifying any new code runs in lite mode without generating any INVALID_OPERATION errors, even if it doesn't run very well in that mode due to driver issues.

Michael Gold
08-13-2008, 09:33 AM
The scenario you describe is a non-issue. If you use 3.0-lite to prepare for 3.1, and then ship on 3.0-full, even when 3.1 is available you will still get a 3.0 context because you must opt-in to 3.1. A 3.0-lite can give you a head start on coding for a future release, but you must still code to that release when its available, and test it. If you "accidentally" continue using a deprecated feature, you can find/fix that during development of the next version.

Timothy Farrar
08-13-2008, 09:42 AM
3/ The bytecode is smaller and faster to load (especially for those that have hundreds of shaders)
...
5/ The hardware vendors only need to write the back-end compiler.
6/ The load or run-time compilation will be slightly faster.
7/ As the bytecode has been pre-optimised you will get a better assessment of which hardware it will run on.


FYI, don't underestimate the work actually involved in loading from byte code (GL vs DX: how much time do you actually think is taken in parsing source to tokens [GL only] vs the rest of the compile [DX and GL]). And don't take my word for it; read about this from,

http://ati.amd.com/developer/cgo/2008/Rubin-CGO2008.pdf

(talking about ATI's runtime shader compiler)
"
Relations to Std CPU Compiler (SC)
- About 1/2 code is traditional compiler, all the usual stuff
- SSA form
- Graph coloring register allocator
- Instruction scheduler
- But there is a lot of special stuff!

Some Odd Features
- HLSL compiler is written by Microsoft and has its own idea of how to optimize a program
- Each compiler fights the other, so SC undoes the ms optimizations
"

PaladinOfKaos
08-13-2008, 09:47 AM
The scenario you describe is a non-issue. If you use 3.0-lite to prepare for 3.1, and then ship on 3.0-full, even when 3.1 is available you will still get a 3.0 context because you must opt-in to 3.1. A 3.0-lite can give you a head start on coding for a future release, but you must still code to that release when it's available, and test it. If you "accidentally" continue using a deprecated feature, you can find/fix that during development of the next version.


*slaps forehead* Just read the WGL_ARB_create_context extension. Should have done that before I opened my big mouth and made a fool of myself *sigh*. My main development takes place on Linux, so I'm more concerned with the GLX extension (whenever it comes out). I hadn't bothered to read WGL because, well, I just don't care about it.
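
For anyone else who hasn't read it yet, the WGL version boils down to something like this (a sketch only; it assumes wglCreateContextAttribsARB was already fetched via wglGetProcAddress and that hDC has a pixel format set). Presumably the GLX equivalent will look much the same whenever it appears:

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    0   /* attribute list terminator */
};

HGLRC lite = wglCreateContextAttribsARB(hDC, NULL, attribs);
if (!lite)
    lite = wglCreateContext(hDC);   /* fall back to a full legacy context */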

Timothy Farrar
08-13-2008, 10:08 AM
I have a 9 level 256x256 mipmap texture loaded for a background object.
It comes twice as close to the camera so I now need a 10-level mipmap.
I want to be able to stream the new 512x512 texture level onto the card and tell the card firmware to combine it with the existing levels to create a new MipMap that then replaces the old one.
And also remove a mipmap level when objects move away again.


You are bringing up an interesting point here. This is something we take for granted on the consoles, i.e. the ability to manage our own GPU memory and access the hardware texture layout and format, so rolling our own texture streamer isn't a problem. On the PC, even with DX, this really isn't possible because of lack of API support (and it would be a nightmare of a portability problem), unless you do it MegaTexture style with pre-allocated textures.
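
Roughly what the pre-allocated approach looks like (sizes and the x/y/tile_pixels names are made up for illustration): define every level of the final-size texture up front with NULL data, then stream sub-regions in later, so the driver never has to move the allocation:

GLuint tex;
int level, size;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* Allocate the whole mip chain up front; no pixel data yet. */
for (level = 0, size = 512; size >= 1; ++level, size /= 2)
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, size, size, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Later, as detail becomes needed, stream in just the region you have
   (x, y and tile_pixels are placeholders for the streaming logic). */
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 128, 128,
                GL_RGBA, GL_UNSIGNED_BYTE, tile_pixels);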

Leadwerks
08-13-2008, 12:46 PM
http://www.leadwerks.com/post/dilbert_gl3.jpg

PaladinOfKaos
08-13-2008, 12:51 PM
Awesome dilbert remix :D

On a totally different note... The BOF is supposed to start in ~5 hours. Are there any plans to telecast it at all? And if not, when will the PDFs be available?

And if it won't be telecast, it would be totally awesome if someone could find it in their heart to make a transcript of the Q&A for those of us who can't go to Siggraph.

zimerman
08-13-2008, 12:53 PM
I have a 9 level 256x256 mipmap texture loaded for a background object.
It comes twice as close to the camera so I now need a 10-level mipmap.
I want to be able to stream the new 512x512 texture level onto the card and tell the card firmware to combine it with the existing levels to create a new MipMap that then replaces the old one.
And also remove a mipmap level when objects move away again.


You are bringing up an interesting point here. This is something we take for granted on the consoles, i.e. the ability to manage our own GPU memory and access the hardware texture layout and format, so rolling our own texture streamer isn't a problem. On the PC, even with DX, this really isn't possible because of lack of API support (and it would be a nightmare of a portability problem), unless you do it MegaTexture style with pre-allocated textures.


Can't you control the usage of mipmap levels by the glTexParameters GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL,
and stream rarely used levels via pbuffers?
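
Something like this, I mean (a sketch, assuming the texture was created as a 10-level 512x512 chain with only levels 1-9 filled; tex and new_level0_pixels are placeholders):

/* Only the 256x256-and-down levels are resident, so clamp sampling to them. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 9);

/* When the object gets close, stream in the 512x512 level and widen the range. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, new_level0_pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);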

Korval
08-13-2008, 01:04 PM
And if it won't be telecast, it would be totally awesome if someone could find it in their heart to make a transcript of the Q&A for those of us who can't go to Siggraph.

I wish I could go to SIGGRAPH and the BoF. I've got some things in my refrigerator that I'd like to throw at them.

dorbie
08-13-2008, 01:19 PM
And if it won't be telecast, it would be totally awesome if someone could find it in their heart to make a transcript of the Q&A for those of us who can't go to Siggraph.

I wish I could go to SIGGRAPH and the BoF. I've got some things in my refrigerator that I'd like to throw at them.

Whoa there keyboard warrior. Real people can throw stuff back. I would like to see a video of this BoF too. Can someone upload a bootleg to youtube?

I like knackered's last post, time to pipe down and wait for the BoF, Neil Trevett's comments to the Register suggest they're listening. Like it or not the people who drew this spec up care more about OpenGL and do a lot more work for it than the detractors here. They're probably surprised by the reaction and might even have enjoyed the prospect of a clean break if they knew everyone was so damned adamant about it. If I was cloistered in the conf calls with them I'd never have anticipated the strength of this reaction. Let's see what the new new plan is.

Korval
08-13-2008, 01:27 PM
Whoa there keyboard warrior. Real people can throw stuff back.

It wouldn't be funny if they didn't.


Neil Trevett's comments to the Register suggest they're listening.

It's funny. They were listening when people were generally supportive of 3DLabs' initial GL 2.0 effort. They were listening when people were extremely supportive of the Longs Peak effort.

Why should I care whether they're listening or not if they never do anything based on it? I don't want them to listen to anything anymore; they already know what they need to. They simply will not act on it.

Why do you think they went silent in January? They knew the crap-storm that telling us would raise. Do you honestly think that they are in any way surprised by the level of anger and bile being flung their way? And if they are, they are even more incompetent than I thought they were.

Kazade
08-13-2008, 01:33 PM
I like knackered's last post, time to pipe down and wait for the BoF, Neil Trevett's comments to the Register suggest they're listening. Like it or not the people who drew this spec up care more about OpenGL and do a lot more work for it than the detractors here. They're probably surprised by the reaction and might even have enjoyed the prospect of a clean break if they knew everyone was so damned adamant about it. If I was cloistered in the conf calls with them I'd never have anticipated the strength of this reaction. Let's see what the new new plan is.

I agree, I kinda think that they were expecting to release OpenGL 3 to a big reception with nVidia waiting in the wings to release a driver next month. I also think that might have been one of the reasons they went silent, what better way to see in the future of OpenGL than with a big surprise launch?

But obviously, they shouldn't have gone silent; they should have told all of us what was going on. Still, there has been a release, and there is a deprecation mechanism, so stuff can and will be removed, which is more than we had last week. I'm looking forward to hearing what happens at the BOF tonight, and I'm more looking forward to the drivers so I can get on with using it!

Chris Lux
08-13-2008, 01:39 PM
...and I'm more looking forward to the drivers so I can get on with using it!
what is hindering you from using it today? there is NOTHING new, it's all there. All the functionality is there through the IHVs' extensions, and the IHV behind them will also deliver the drivers first (surely, for some time, the only working GL3 driver).

PaladinOfKaos
08-13-2008, 01:40 PM
NVIDIA drivers are at best 19 days away, at worst 48 days away, assuming they meet their projected September release. And supposedly AMD and Intel are going to discuss their driver plans at the BOF tonight.

And if you really want to code against a GL3-like interface, Mesa may already have the beginnings of that code in the repository, though I haven't looked yet.

Korval
08-13-2008, 01:40 PM
I kinda think that they were expecting to release OpenGL 3 to a big reception with nVidia waiting in the wings to release a driver next month.

Like I said, if they honestly believed that the GL community would be appreciative of the nothing that they've given us, then they're more incompetent than anyone could ever have believed.

Longs Peak was loved. People were practically frothing at the mouth over the very idea, let alone the almost-entirely-awesome execution based on the newsletters. Longs Peak was deeply desired by pretty much everyone.

If there was any doubt about what we were thinking, all they had to do was ask. Just one message saying, "We've been getting a lot of heat from CAD developers and the like about GL 3 being a complete rewrite. What would you think about doing a much slower rewrite over the next 5 years?" They wouldn't even have had to have a dialog: just a simple post and read the responses.

That two-sentence message, posted sometime in January of this year, would have been enough to provide them with... well, this thread. Though it would almost certainly be a lot more civil, since they would be asking our opinion about a change that hadn't been made yet, rather than giving us the change and then asking whether it was a good idea or not.

dorbie
08-13-2008, 01:47 PM
Why should I care whether they're listening or not if they never do anything based on it?

Then why are you here? This is not the first time you've gone on a rant about OpenGL.

Redact the deprecated stuff in the spec and get agreement on one feature (perhaps two) and things could look very different. That is not beyond the realms of what is possible in the immediate future. Not everyone is as ready as you are to cut their nose off to spite their face.

The Khronos/ARB is a committee of competing and/or divergent commercial interests; voting rights are not determined by volumes shipped (perhaps they should be), and it tries to keep everyone on board (perhaps it shouldn't).

Perhaps the way to go about this is to appoint a binding technical arbitration committee for key contentious extensions with someone neutral and respected heading it like John Carmack, because extensions tend to get bogged down in the details without converging.

If the great and the good are reading, consider that last suggestion in earnest.

Chris Lux
08-13-2008, 01:56 PM
In retrospect this post was the turning point:
http://www.opengl.org/discussion_boards/...2918#Post232918 (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Main=45784&Number=232918#Post232918)

This has to be the meeting where Longs Peak was scrapped. The first posts after this one are from June...

That's about all we can see/guess from the outside. I really hope for some meeting notes from the BOF session to understand what went on there.

Korval
08-13-2008, 02:09 PM
I really hope for some meeting notes from the BOF session to understand what went on there.

I wouldn't hold my breath. The OpenGL ARB stopped giving out meeting notes long ago. Supposedly, it was to allow its members to be more willing to give out information.

Chris Lux
08-13-2008, 02:27 PM
I wouldn't hold my breath. The OpenGL ARB stopped giving out meeting notes long ago. Supposedly, it was to allow its members to be more willing to give out information.
Yes, but this is not an internal ARB meeting... The Q&A section is maybe the most interesting this time ;)

Korval
08-13-2008, 02:31 PM
Yes, but this is not an internal ARB meeting...

Oh, sorry, I misread that. I thought you were talking about the January meeting notes.

-NiCo-
08-13-2008, 02:35 PM
Oh my god, is this all we get after constantly refreshing our browser windows on this site for the past 9 months, hoping for some news about OpenGL 3?!

Oh well, guess it's time for me to move to D3D. Thanks for making my decision a lot easier, ARB.

pudman
08-13-2008, 03:09 PM
Man, what a difference this BoF will be from last year's BoF. I can't wait to see the results!

They could always pull the colossal "Just kidding! Here's the REAL spec!". I would have OpenGL babies if they did that.

bobvodka
08-13-2008, 03:18 PM
*pours pudman a tall glass of Reality Juice*

may or may not taste of fail ;)

Leadwerks
08-13-2008, 04:10 PM
Meanwhile back at the office, Khronos celebrates a job well done:
[Removed as being in-appropriate by webmaster]

They're not retarded; They're just glDisabled!

For those that are there in person, please give them hell. Don't let them sleaze out of this without calling them on their bullsh*t.

The only way I see out of this is if OpenGL is taken away from Khronos and given competent management.

Korval
08-13-2008, 04:18 PM
Meanwhile back at the office, Khronos celebrates a job well done:

I'm no fan of what's happened here (that's putting it mildly), but that's probably a little over the line.

This is probably more appropriate:

http://samuelpablo.files.wordpress.com/2008/01/epic_fail.jpg


The only way I see out of this is if OpenGL is taken away from Khronos and given competent management.

You keep talking about Khronos as though the Khronos ARB is not composed of the exact same people who were members of the ARB before Khronos took over governorship.

Khronos as a whole gets stuff done. The OpenGL ARB is dysfunctional because they are dysfunctional. Don't blame this debacle on Khronos.

knackered
08-13-2008, 04:36 PM
jesus leadwerks, saying f uck is one thing, but putting up pictures of disabled people is quite another.
I'm disgusted with myself that I stifled a laugh. I must need a holiday.

EvilOne
08-13-2008, 05:07 PM
The real problem with GL3 is that it doesn't solve any of the problems we have with GL so far... What happened is just the usual promotion of extensions to the core.

They promised a clean API that works nicely on DX9 hardware. Yes, there is a large market share of DX9 hardware out there that I don't want to miss. The problems of GL2.x persist - no predictable way to hit the fast path besides trial and error, no way to query features, unpredictable driver behaviour, etc.

GLSL is as unusable as before.

The new API is nothing more than feature-mania. Promote to core, done! A better way would be a GL API that was more DX9-centric but with the fixed-function states removed. The new DX10/DX11 features would be nice candidates for extensions in such an API. This is my main problem with the new API: just adding more features to an API doesn't solve anything. It just adds more and more interactions between core and extensions. Damn, even some kind of ARB_query_hardware_features extension added to GL2.x would be better - that would solve 90% of all the problems I had with GL.

The next thing is the extensions that are promoted to the core... Framebuffers? This makes the core more and more unstable; framebuffers never worked correctly (and yes nVidia, your implementation isn't correct either). Geometry shaders? For what? Seems like an ill-fated stencil shadow generator. I have more CPU cores than I need - I can do any amplification faster on the CPU side.

Anyways... just my two cents about the API. But at the end of the day, this is just an academic discussion. On windows you have the DX9 path (with some D3DCAPS9 checking) and the DX10 path. On Apple you use AppleGL (somehow, the Apple extensions feel like D3D, nice nice nice). On consoles you have the console specific APIs anyways... Linux is just an opt-out for me.

Look what is happening now: Blizzard and id Software get the vendor extensions to get their stuff running fast. Some GL3 backdoor for selected parties. I propose the following extension.

Extension name:

EXT_isv_dependend

New tokens:

- VENDOR_BLIZZARD_EXT
- VENDOR_IDSOFTWARE_EXT

New functions:

- glSetVendor(GLenum vendor);

Interaction with core and extensions:

- None. Just a fast path for selected parties.


ARB or Khronos - or what the new name for the same old gang now is. Shame on you. Just big shame on you.

Korval
08-13-2008, 05:17 PM
Anyways... just my two cents about the API.

Since you mentioned geometry shaders being in GL "3.0", you obviously didn't read it because geometry shaders aren't in GL "3.0".

HenriH
08-13-2008, 05:57 PM
Meanwhile back at the office, Khronos celebrates a job well done:

Okay, this is absolutely the most ridiculous and immature behavior that I have ever seen in these forums. Posting pictures of disabled people and constantly flaming is not really helping anything and is taking the conversation away from the subject itself. I feel truly disgusted; this is an unacceptable way to debate, and I hope the forum moderator will do something about this.

Korval
08-13-2008, 06:04 PM
Look what is happening now: Blizzard and id software get the vendor extensions to get their stuff running fast. Some GL3 backdoor for selected parties.

Um, what? I don't know why or how I'm defending OpenGL "3.0" here, but I see nothing that suggests anything of the kind. If you're going to attack this nonsense, then attack it for what it is and isn't, not out of paranoia about collusion between the ARB and certain game studios.

Simon Arbon
08-13-2008, 09:56 PM
don't underestimate the work actually involved loading from byte code (GL vs DX, how much time do you actually think is taken in parsing source to tokens [GL only] vs the rest of compile [DX and GL])

That's why I want the binary save/restore as well; that's where I expect to save compile time.

Each compiler fights the other, so SC undoes the ms optimizations

The HLSL/driver compiler fighting occurs because its bytecode is very low level; the bytecode I want is high level, i.e. it still includes all the While/If/For/Case structures.
The optimisations would just be simple stuff like rewriting
A := (B*C)+2*(8+(B*C))*(B*C); as T := B*C; A := T+2*(8+T)*T;
but I take your point that it doesn't really matter if this is left to the driver instead, as it's doing most of the optimisation anyway.

You can actually do that already, using TEXTURE_BASE_LEVEL and TEXTURE_MAX_LEVEL.

Can't you control the usage of mipmap levels by the glTexParameters GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL,
and stream rarely used levels via pbuffers?

I have read everything I could find on the SGIS_texture_lod extension; it looks like it should be possible to do the streaming, and I will definitely try this out to see if it works.
Although it looks like some drivers pre-allocate the memory for all levels and some don't, so it only saves GPU memory with particular vendors.
This should only be a problem if I have lots of textures on older cards with limited memory.

Korval
08-13-2008, 10:12 PM
Although it looks like some drivers pre-allocate the memory for all levels and some don't

If you're trying to do this to improve performance (only upload part of a texture until you need more), it may work. If you're trying to do it to save memory, give it up. As you point out, some drivers will allocate all the room necessary for the texture as a whole. But even drivers that don't will want the texture as a whole to be in one contiguous block of memory. So when you decide to add another level, it will have to copy that memory to a new location. And that's not good for fragmentation reasons.