ATi and nVidia working together to make OpenGL 3.0

According to this, ATi and nVidia are about to propose a radical change/non-change to OpenGL.

They want to build a lower-level API and then layer 2.x GL functionality on top of that. This has obvious advantages, as specified in the document.

Personally? I’m all for it. I didn’t know that the GL object model took up 3-5% of the driver’s time, so I’m happy to see it go.

And for those who need legacy functionality, it can be layered on top.

Firstly, why was the other thread on this subject closed by a moderator? If you think it’s in the wrong place, move it to where you think it should be; don’t just slam the door shut. This is too important.
Now, on-topic:
Yes I agree it’s a great list of suggestions. Core OpenGL as it stands is no longer a true hardware abstraction, so it needs culling.

I really hope this is going to be what GL 2.0 was supposed to become. I’m a bit afraid that the specification process will take too long and we won’t see anything concrete in the next two years… But this time they have the advantage of not having to be concerned about backwards compatibility. Hopefully this will result in a modern and efficient API.

“Core OpenGL as it stands is no longer a true hardware abstraction, so it needs culling.”

What does that mean?

Originally posted by skynet:
“Core OpenGL as it stands is no longer a true hardware abstraction, so it needs culling.”

What does that mean?
My understanding is that the OpenGL API is too far from the hardware now that it is all programmable.
It needs to be cleaned up, rethought, restructured and simplified.

Ah, didn’t get the irony in it :wink:

There is definitely quite a bit of cruft in the OpenGL API. An enema is not unreasonable.

Things that were a good idea in 1992 aren’t necessarily a good idea in 2006. And I kinda like the fact that the Khronos Group will be taking over OpenGL. That way, it’s not bound to SGI in any real fashion.

It needs to be cleaned up, rethought, restructured and simplified.
One thing I really like about their proposal is the fact that it is significantly based on making the OpenGL API easier for IHVs to implement.

For a long time, pretty much since its inception, OpenGL has been designed specifically around what developers want. Now we’re seeing IHVs pushing for an OpenGL that makes their jobs easier. That in turn promotes better OpenGL implementations, which ultimately makes our jobs as developers easier. Which in turn makes everyone happy.

Personally? I’m all for it. I didn’t know that the GL object model took up 3-5% of the driver’s time, so I’m happy to see it go.
More than I guessed as well. My worry is this: “easier to manipulate, but also easier to crash”. I can see some beginners having problems.
On the whole, though, most of the changes look to be improvements.

I think we have all wanted to be “Lean and Mean”, in other words, only stay on the fast track.

My understanding is that the OpenGL API is too far from the hardware now that it is all programmable.
It needs to be cleaned up, rethought, restructured and simplified.

Yes, and also there are parts that are really old. Weird functions like glRect() and glInterleavedArrays() should be killed off.
Functions like glFrustum, glOrtho and glClipPlane should have float versions, and the double versions should be killed off until hardware that can use them becomes available.
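For reference, these entry points really are double-only in today’s headers; a minimal sketch of what apps with float data currently have to do (on most consumer hardware the driver then just converts straight back to single precision):

    #include <GL/gl.h>

    static void set_clip_and_projection(void)
    {
        /* glClipPlane and glFrustum accept only GLdouble, so single-precision
           data must be widened here, and on most consumer hardware the driver
           narrows it right back down again. */
        GLfloat  plane[4]  = { 0.0f, 1.0f, 0.0f, 0.0f };
        GLdouble planed[4] = { plane[0], plane[1], plane[2], plane[3] };
        glClipPlane(GL_CLIP_PLANE0, planed);          /* no GLfloat variant */
        glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);  /* GLdouble only      */
    }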

I agree there’s a lot of good stuff in there. But: the day display lists and immediate mode die is the day I switch permanently to Direct3D. Removing display lists and adding geometry instancing is a definite step in the wrong direction :frowning: I see some of the items on this list as removal of sound OpenGL functionality to ease driver implementation: if Microsoft has decided you need to implement a feature for Direct3D, it’s easy to add a hook for GL too; if it’s not in D3D, you have to do actual work. I fear the best design will not result from a joint effort between ATi/nVidia, as they would see direct financial gain from limiting the differences between D3D/GL: fewer differences mean less polarized driver development.

I think the move to the Khronos Group taking over the OpenGL specification is by far the best move I’ve seen made in years. Once this technicality is official, I believe we’ll start to see a far more rapid development cycle for the OpenGL specification.

As for OpenGL LM, all I have to say is “I can’t wait!” :slight_smile:

Originally posted by Mikkel Gjoel:
Removing display lists and adding geometry instancing is a definite step in the wrong direction :frowning:
I am very interested in feedback on these points and would like to understand your concerns.

The goal is not to remove dlists as they exist today, but to introduce a new mechanism which limits the scope to the parts that are actually useful and optimizable. For example, today a legal dlist may contain: “Vertex(); Vertex(); End();” and may be called as “Begin(); Vertex(); CallList();”. There is nothing the driver can do to optimize this; it’s a simple macro record-and-playback mechanism. Further, COMPILE_AND_EXECUTE is an abomination which differs enough semantically from COMPILE followed by CallList() that the implementation needs to special-case it - a lot of complexity and overhead that, in practice, nobody really cares about.
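Spelled out in C, the pathological case above looks something like this (a minimal sketch; the vertex data is arbitrary):

    #include <GL/gl.h>

    static void pathological_dlist(void)
    {
        /* A legal display list that records half a primitive: the End()
           is inside the list, the Begin() is not. */
        GLuint list = glGenLists(1);
        glNewList(list, GL_COMPILE);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();
        glEndList();

        /* Its meaning depends entirely on the state at call time, so the
           driver can do nothing smarter than replay it like a macro. */
        glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glCallList(list);
    }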

As for immediate mode, nobody is proposing to remove it, rather to acknowledge the fact that it’s highly inefficient and difficult to optimize; it’s nice for prototyping but bad for performance. If the functionality remains available but layered, is that so bad? It wasn’t fast to begin with, and never will be.
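To illustrate what “layered” could mean (a sketch only; the imBegin/imVertex3f/imEnd names are hypothetical, not from the proposal): a utility library can record the calls and push them through the array path in one batch:

    #include <GL/gl.h>

    /* Hypothetical immediate-mode emulation layered on vertex arrays.
       No overflow checking; this is a sketch, not production code. */
    static GLfloat im_buf[3 * 4096];
    static int     im_count;
    static GLenum  im_prim;

    static void imBegin(GLenum prim) { im_prim = prim; im_count = 0; }

    static void imVertex3f(GLfloat x, GLfloat y, GLfloat z)
    {
        im_buf[3 * im_count + 0] = x;
        im_buf[3 * im_count + 1] = y;
        im_buf[3 * im_count + 2] = z;
        ++im_count;
    }

    static void imEnd(void)
    {
        /* Submit everything through the fast path in a single call. */
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, im_buf);
        glDrawArrays(im_prim, 0, im_count);
        glDisableClientState(GL_VERTEX_ARRAY);
    }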

I see some of the items on this list as removal of sound OpenGL functionality to ease driver implementation:
Again, ease of implementation is only one of the overall goals. We’re acknowledging the fact that providing an abstraction which no longer looks like the hardware does no favors for either developers or the implementors. The thicker the layer between the app and the hardware, the more CPU overhead and the more likelihood of driver bugs. Exposing the fast paths is meant to be a win-win.

Remember, back in the old days OpenGL was intended as the thinnest possible abstraction over hardware to allow portable graphics applications; it was never meant to be a utility library. The more hardware has changed, the thicker this abstraction has become. We’re trying to get back to basics.

Originally posted by zed:
more than i guessed as well, my worry is this. “Easier to manipulate, but also easier to crash” i can see some beginners having problems.
The proposal also includes a debugging mode built into the implementation which can provide the “idiot proofing” names give you but can also be disabled for maximum performance. Think of it like asserts; you want them to catch errors during development but you don’t want the runtime overhead once you ship a release.
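The assert analogy suggests the kind of error-check macro many of us already wrap around GL calls by hand (GL_CHECK is our own name here, not part of the proposal):

    #include <GL/gl.h>
    #include <assert.h>

    /* Debug builds validate every call; defining NDEBUG compiles the
       checking away completely, just like assert() itself. */
    #ifdef NDEBUG
    #define GL_CHECK(call) call
    #else
    #define GL_CHECK(call) \
        do { call; assert(glGetError() == GL_NO_ERROR); } while (0)
    #endif

    static void example(GLuint tex)
    {
        GL_CHECK(glBindTexture(GL_TEXTURE_2D, tex));
    }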

I see some of the items on this list as removal of sound OpenGL functionality to ease driver implementation
As I understood it, nothing is really on a “removal” list. These things will not be in OpenGL LM, but they will still be in the full profile. The difference is that the driver writers don’t have to be concerned with the implementation, because it’s implemented on a layer between the application and the driver.

One question comes to mind there:

Who will write this “full OpenGL on top of OpenGL LM” implementation? As I understood it, this is something that could be done in a hardware-independent way.

Also, I sense the opportunity to get a new “opengl3.dll” on windows (one can still dream :stuck_out_tongue: ).

To add to what I said, I was really waiting for OpenGL 2.0 “Pure”, a simplified consolidated OpenGL Layer that unfortunately never came.
It seems to me that OpenGL 3.0 is “just” that.

Also, simplifying the life of IHVs’ driver teams is something I’m all for, as long as it’s not at the expense of the programmers…

For a moment I did wonder whether OpenGL ES 2.0 would become the standard, as it’s already a cleaned-up subset of OpenGL with most (but not all) required capabilities…

Originally posted by Ingenu:
Also, simplifying the life of IHVs’ driver teams is something I’m all for, as long as it’s not at the expense of the programmers…
I’m all for it even if it is at the expense of the programmers. Convenience can be layered on top, either by the programmer or through utility libs. Driver bugfixes, by and large, can’t.

Two miscellaneous thoughts:

  1. What’s the current status of the legal agreement that precluded anyone but Microsoft from shipping opengl32.lib/.dll for Windows? If it still holds, and if the price of sidestepping that agreement is a name change, this is probably as good a point to do it as any. I’ve heard from several OpenGL toe-dippers that having to use GL2 as 1.1+extensions is severely offputting.

  2. The column that started all this mentions in the first para that “Unfortunately I was only able to attend the first half”. Does anyone know if there was more juicy news in the second half, and if so, could they post details?

I never understood why it should be legal to use a texture id that you didn’t generate previously. What did they smoke back then to allow such a complicated but completely useless “feature”?
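For anyone who hasn’t tripped over it, this is the “feature” in question; the name never has to come from glGenTextures:

    #include <GL/gl.h>

    static void bind_unreserved_name(const GLubyte *pixels)
    {
        /* 42 was never returned by glGenTextures, yet binding it is legal
           and quietly creates a brand-new texture object. */
        glBindTexture(GL_TEXTURE_2D, 42);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }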

Also, in some cases there is too much flexibility. For example, I always wondered whether there is a most hardware-friendly layout for vertex arrays, i.e. whether it matters to the hardware in which order I store my position/color/texcoord/… data. Maybe it doesn’t matter. But if it does, I want OpenGL to encapsulate it, so that I just say “here is the data, store it in the format you like best”, just as textures are stored in the way the hardware likes best.
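To make that concrete, here are just two of the layouts the current API accepts for the exact same triangle; the driver has to recognize and optimize all of them:

    #include <GL/gl.h>

    static void draw_triangle_two_ways(void)
    {
        /* Layout 1: one interleaved stream, color then position per
           vertex (the GL_C3F_V3F format). */
        GLfloat interleaved[] = {
            1,0,0,  0,0,0,
            0,1,0,  1,0,0,
            0,0,1,  0,1,0,
        };
        glInterleavedArrays(GL_C3F_V3F, 0, interleaved);
        glDrawArrays(GL_TRIANGLES, 0, 3);

        /* Layout 2: the same data as two separate arrays, equally legal. */
        GLfloat pos[] = { 0,0,0,  1,0,0,  0,1,0 };
        GLfloat col[] = { 1,0,0,  0,1,0,  0,0,1 };
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, pos);
        glColorPointer(3, GL_FLOAT, 0, col);
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }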

In some cases flexibility is nice. For example, I really wouldn’t want to work without immediate mode. It’s just so useful to be able to simply render a few quads, even if it’s the slowest way possible.

I use display lists only for rendering text, and I think they make life very easy in this case. However, in general I think display lists could be layered.
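For reference, this is the classic text pattern I mean: one list per glyph, then a whole string in a single call (a sketch; filling the glyph lists is platform-specific, e.g. wglUseFontBitmaps on Windows):

    #include <string.h>
    #include <GL/gl.h>

    static GLuint font_base;

    static void init_font(void)
    {
        font_base = glGenLists(256);   /* one display list per character */
        /* ...compile a glyph into each of the 256 lists here... */
    }

    static void draw_string(const char *text)
    {
        glListBase(font_base);
        glCallLists((GLsizei)strlen(text), GL_UNSIGNED_BYTE, text);
    }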

However, in many cases flexibility makes life more complicated. It’s nice to do something the way you like best. But the moment you try to do it optimally, you have a problem, because you don’t know which way is the most optimal (and for which hardware?). So, in my opinion, when a feature is performance-critical, it should be very strict about how it is used, so that the driver only needs to handle and optimize ONE way, not ten.

I think OpenGL is very user-friendly and easy to learn. At least the basics. With all the extensions, many different ways to achieve the same thing and too much flexibility, the more advanced stuff is actually quite complicated. I think a “Lean and Mean” OpenGL would not only simplify and speed up drivers, but also make it easier for everybody to get to the fun-stuff.

And PLEASE, do it fast! OpenGL has been dying a slow death for a long time now. We really need this.

Jan.

Display lists may be a bit difficult to implement, but I think the very general mechanism offers the potential for future driver optimizations. I think they should stay.

Other things like immediate mode or the fixed vertex and pixel pipelines should be layered on top. They are nice for simple applications and learning OpenGL.

Philipp

Display lists may be a bit difficult to implement, but I think the very general mechanism offers the potential for future driver optimizations.

While I am not a driver writer, I would assume that display lists are a big pain in the ass to get correct, let alone optimize, and that they complicate a lot of other code. They are too general for their own good. Because of this complexity they cannot simply be captured as a copy of a command buffer sent to the hardware, so the only lists that ever got optimized, imho, were simple geometry-only lists, and for this reason people mostly used them to store geometry only.

Display lists are not particularly difficult to implement, but they are apparently easy to misuse. They should stay, and will in this proposal, but they may be redesigned to preclude some of the existing pitfalls.

Display lists are brilliant.
They offer the driver the opportunity to remap the vertex/index data into an implementation-dependent optimal format, stitch together tristrips, convert int32s to int16s, re-sort vertices into cache-coherent order, calculate bounding volumes for potentially better-than-application hardware culling, etc.
The nVidia Quadro display lister is absolutely brilliant - I can’t get near the same performance by trying to optimize the user-specified data myself, in the same kind of time frame.
Fair enough, limit its functionality by removing nesting, transforms, material changes or whatever, but leave the geometry stuff, because some IHVs seem to do a really good job with it.
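A minimal sketch of the geometry-only usage meant here; everything the list needs is captured at compile time, so the driver is free to rebuild the data however it likes:

    #include <GL/gl.h>

    /* The vertex arrays are dereferenced and copied when the list is
       compiled, so the driver may restrip, reindex, convert index types
       and reorder vertices behind this one handle. */
    static GLuint compile_geometry(const GLfloat *verts,
                                   const GLuint *indices, GLsizei count)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);  /* executed, not compiled */

        GLuint list = glGenLists(1);
        glNewList(list, GL_COMPILE);             /* geometry is snapshotted */
        glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, indices);
        glEndList();
        return list;               /* per frame: glCallList(list); replays
                                      the driver's optimized copy          */
    }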