OpenGL 2.0 news

The NVIDIA developer site has new GDC material on OpenGL, including stuff about OpenGL 2.0 and GLSL.

It seems there's no über-buffer in GL2. :confused:

There's also the Melody tool - LOD generation and normal map creation all in one. Looks quite handy, although it only imports .3ds files at the moment.

Originally posted by KRONOS:
It seems there's no über-buffer in GL2. :confused:
Just read it… :frowning:
This doesn't make sense. VBO and PBO are included but superbuffers aren't?

Originally posted by KRONOS:
[b]The NVIDIA developer site has new GDC material on OpenGL, including stuff about OpenGL 2.0 and GLSL.

It seems there's no über-buffer in GL2. :confused: [/b]
Hmm… what does that mean after all? GL2 will have PBOs (and VBOs), but not über-buffers… I thought PBOs were part of über-buffers?

Somebody please explain… cass maybe? :slight_smile:

GL2 will have PBOs (and VBOs), but not über-buffers… I thought PBOs were part of über-buffers?
Effectively, PBO is a part of superbuffers, but only a small part.

Basically, what this means is that the ARB can't decide on big extensions (like superbuffers), so they're giving us little pieces of them instead. This only helps prove what has been so evident the past few years: the ARB isn't a very functional way to define graphics standards.

According to the 3Dlabs Shading Language seminar (a good, complete talk, but no real surprises), they'll be producing a spec at SIGGRAPH 2004. I also expect to read some interesting stuff from the March ARB meeting. Apparently they've finished reviewing the minutes, so they should be posted fairly soon.

Superbuffers (I don't think it is really called uberbuffers anymore) has many pieces. It is not reasonable to expect that the whole thing gets accepted at once with unanimous vendor adoption. We'll probably only get something that abstracts memory on the card, so you'll get things like render to vertex array and render to slice of 3D texture and all the offscreen rendering buffers (accessible without a context switch) you can shake a stick at. This doesn't mean that it'll necessarily provide fast, asynchronous pixel transfer like PBO.
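As a rough sketch of the fast, asynchronous pixel transfer that PBO provides (not from the original post): glReadPixels targets a bound pixel-pack buffer instead of client memory, so the call can return before the copy finishes. Tokens follow the pixel_buffer_object spec; the entry points are assumed to be resolved through the usual extension-loading mechanism, and the buffer size, pixel format and helper names are only illustrative.

[code]
/* Hypothetical sketch of an asynchronous glReadPixels through a pixel-pack
 * buffer. Tokens follow the pixel_buffer_object spec; sizes and the BGRA
 * format are only illustrative. */
#include <GL/gl.h>
#include <GL/glext.h>

static GLuint pack_buf;

void start_readback(GLsizei w, GLsizei h)
{
    glGenBuffersARB(1, &pack_buf);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pack_buf);
    glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, w * h * 4, NULL, GL_STREAM_READ_ARB);
    /* With a pack buffer bound, the last argument is an offset into the
     * buffer, so the call can return before the copy actually finishes. */
    glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, (GLvoid *)0);
}

void finish_readback(void (*consume)(const void *pixels))
{
    const void *pixels;

    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pack_buf);
    /* Mapping blocks only if the transfer is still in flight. */
    pixels = glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
    consume(pixels);
    glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
}
[/code]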

The ARB is a great way to adopt big extensions. Would you rather have a single vendor come up with something? Or have OpenGL just follow Direct3D's lead? Superbuffers has taken a long time because it is complicated and represents a big "architectural churn." The ARB is slow and careful because it doesn't want to paint itself into a corner with some half-baked extension that it then has the burden of supporting.

-Won

Don't worry… we will have über-buffers. They just meant über-buffers won't be integrated into the core API; they'll still be available as an ARB extension.

The ARB is slow and careful because it doesn't want to paint itself into a corner with some half-baked extension that it then has the burden of supporting.
Better to try and fail than to sit around and do nothing. If the extension turns out to be wrong-headed, it can always be replaced by another.

D3D may not have all the features implemented in the best possible way. But it does have them.

I liked OpenGL's progress when nVidia would just create new extensions but write the spec in such a way that others could implement them to expose their hardware appropriately. Look at point sprites or occlusion queries. Yes, the ARB eventually got its say on them, but ATi had adopted them long beforehand. And the same goes for ATI's float texture extension (for NV40 hardware that can actually handle it).

This is how extensions should be created and adopted. Some vendor exposes hardware functionality but writes the spec with an eye to someone else implementing it (that is, without anything proprietary about it). Then they see how well that works, in terms of the quality of the extension (how well it encapsulates and abstracts the hardware, etc.). Eventually a few others pick it up, and the ARB can promote the extension (perhaps with changes) to ARB status.

Something similar happened for VBO functionality. nVidia created VAR, but it wasn't that good an abstraction: a functional extension, but there were better ways to abstract the concept. ATi made VAO, which was a much nicer abstraction, but it required a whole new interface for binding vertex pointers. The ARB took the two of them, merged the good parts, and created VBO. In the meantime, people were inconvenienced by having two different APIs for two vendors' hardware, but the functionality was there, which was the most important thing.

Superbuffers should have followed this path, rather than languishing for years behind closed doors.
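As a rough sketch of the merged VBO interface described above (not from the original post): the classic gl*Pointer calls are reused, and with a buffer bound the pointer argument becomes an offset into that buffer. The buffer name, usage hint and vertex layout here are made up for illustration, and the ARB entry points are assumed to be resolved already.

[code]
/* Hypothetical sketch of ARB_vertex_buffer_object usage; the buffer name,
 * usage hint and vertex layout are made up for illustration. */
#include <GL/gl.h>
#include <GL/glext.h>

static GLuint vbo;

void upload_vertices(const GLfloat *verts, GLsizei bytes)
{
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    /* The driver decides where the storage lives (system, AGP or video memory). */
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, verts, GL_STATIC_DRAW_ARB);
}

void draw_triangles(GLsizei vertex_count)
{
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    /* With a buffer bound, the "pointer" is an offset into that buffer,
     * so the existing glVertexPointer entry point is reused unchanged. */
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
}
[/code]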

Korval, this is true. I certainly would have preferred some intermediate EXT or IHV solution for SuperBuffers, but isn't this what we've been getting? We've gotten pieces like the NV_RENDER_TO_TEXTURE_RECTANGLE extensions.

It's not as if a single vendor was sitting on this capability for a long time and got hung up by ARB politics. It is a great deal of work to implement superbuffers, since it entails a huge refactoring of lots of OpenGL into separate storage and function axes. This can be a pretty difficult software burden considering the hardware does not necessarily look like this right now. Do GPUs deal with frame buffers, textures and vertex arrays uniformly? Probably not. If the driver author has to make them all appear to be the same thing, then that is a lot of work. And no IHV is going to do it if they think they will have to do it again.

Granted, in the long run something like superbuffers is likely to simplify things on both ends. However, the ARB is also concerned about making a smooth transition.

The ARB is much faster than it used to be. They're also rewriting their by-laws to be more IP-friendly, so I would expect it to become even more efficient. I manage to forgive them for any slowness on their part.

-Won

Just to clarify, these were results from a straw poll of ARB members.

PBO is a simple logical extension of VBO, and it is available in the latest NVIDIA drivers. (At least for registered developers…)

I don't know the rationale behind the poll results for uberbuffers.

I actually expected the ARB meeting notes to be on opengl.org by now, but I guess the web site troubles have delayed that. Once they're up, you will be able to read the details for yourself.

Thanks -
Cass

Originally posted by KRONOS:
The NVIDIA developer site has new GDC material on OpenGL, including stuff about OpenGL 2.0 and GLSL.
So, GL 1.6 got renamed to "GL 2.0"?? :eek:

From examining what they are going to include, the API revision outlined there looks just as incremental as 1.3, 1.4 and 1.5 were. The only things the original OpenGL 2.0 and the "OpenGL 2.0 Update" have in common are GLSL in the core and the "2.0" name. The original GL 2.0 was quite a lot more than that.

This leads me to the following guess:
The whole "GL 2.0 initiative" has essentially been killed, but since considerable hype occurred in the past, they decided to reuse the "2.0" moniker in order to keep the failure message "OGL 2.0 is dead" from spreading…

(In anticipation of the obvious response):
No, I don't think it is true that "all the important stuff from the original OGL 2.0 is (or even will be) exposed in other forms anyway, so why care".

The whole "GL 2.0 initiative" has essentially been killed, but since considerable hype occurred in the past, they decided to reuse the "2.0" moniker in order to keep the failure message "OGL 2.0 is dead" from spreading…
GL 2.0, in its original form, has effectively been dead for at least a good year, if not more. Just take a look at the ARB meeting minutes to see that it has, for all intents and purposes, gone nowhere.

Like I said earlier, the ARB has problems deciding on big stuff.

No, I don't think it is true that "all the important stuff from the original OGL 2.0 is (or even will be) exposed in other forms anyway, so why care".
I'm sorry you feel that way, but it is true. The most important of the 2.0 changes was glslang itself. Superbuffers and VBO represent the other functionality that was truly important. The rest would have been a nice API update, or other interesting features, but they are ultimately not necessary. Interesting perhaps, but not truly vital.

Originally posted by cass:

PBO is a simple logical extension of VBO, and it is available in the latest NVIDIA drivers. (At least for registered developers…)

But you never posted the specs, and actually I can't see it in the GLSL drivers… :frowning:

Originally posted by Korval:
I'm sorry you feel that way, but it is true. The most important of the 2.0 changes was glslang itself. Superbuffers and VBO represent the other functionality that was truly important. The rest would have been a nice API update, or other interesting features, but they are ultimately not necessary. Interesting perhaps, but not truly vital.
I considered the whole OpenGL 2.0 "pure" thing vital. As it is, the API is bloated, contains redundant and outdated functionality, and has lost most of its simplicity.

yeah. the pure thing was a great thought. set a new standard, not add on top of the old. that's what differs between gl and dx.

i would love to see a new opengl, having only what is needed today. that would simply mean almost no state changes anymore, a lot of the old pixel manipulation things dropped, only textured triangles with shaders and buffers left :smiley: more or less.

some sort of glslim

Originally posted by Zengar:
But you never posted the specs, and actually I can't see it in the GLSL drivers… :frowning:
Zengar,

Sorry - they'll be available Real Soon Now, but there's really not much more to them than the PowerPoint slides state. They're just like VBO, except for pixel transfers.

Thanks -
Cass
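To make the "just like VBO, except for pixel transfers" point concrete, here is a rough sketch (not Cass's code) of a texture upload staged through a pixel-unpack buffer. Token and function names follow the pixel_buffer_object spec; the texture, format and sizes are made up for illustration.

[code]
/* Hypothetical sketch of a texture upload staged through a pixel-unpack
 * buffer; texture, format and sizes are made up. */
#include <GL/gl.h>
#include <GL/glext.h>

void upload_texture(GLuint tex, const void *pixels, GLsizei w, GLsizei h)
{
    GLuint unpack_buf;

    glGenBuffersARB(1, &unpack_buf);
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, unpack_buf);
    /* Stage the pixels in a buffer object, exactly as vertex data goes into a VBO. */
    glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, w * h * 4, pixels, GL_STREAM_DRAW_ARB);

    glBindTexture(GL_TEXTURE_2D, tex);
    /* With an unpack buffer bound, the data argument is an offset into it. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, (const GLvoid *)0);

    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
    glDeleteBuffersARB(1, &unpack_buf);
}
[/code]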

Originally posted by davepermen:
[b]i would love to see a new opengl, having only what is needed today. that would simply mean almost no state changes anymore, a lot of the old pixel manipulation things dropped, only textured triangles with shaders and buffers left :smiley: more or less.

some sort of glslim[/b]
#1 I'd say it is more important to have a good shading language, and finally GLSL is here (see the minimal sketch after this list).
GLSL should offer a lot of features, even if it's not in silicon.

#2 There are already a lot of interesting extensions that aren't core and aren't offered by most vendors. Some of them are EXT, others are ARB!

#3 If experimental extensions are to be released, they should be available everywhere. Call it ARBX if you wish.
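As a point of reference for #1 (a sketch added here, not part of the original post), driving GLSL through the ARB_shader_objects and ARB_fragment_shader entry points looks roughly like this; the shader source is trivial and error checking is omitted.

[code]
/* Hypothetical sketch: compiling a trivial GLSL fragment shader with the
 * ARB_shader_objects / ARB_fragment_shader entry points. Compile/link
 * status checks and info logs are omitted for brevity. */
#include <GL/gl.h>
#include <GL/glext.h>

static const char *frag_src =
    "void main() {\n"
    "    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);\n"  /* solid orange */
    "}\n";

GLhandleARB build_program(void)
{
    GLhandleARB shader, program;

    shader = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(shader, 1, &frag_src, NULL);
    glCompileShaderARB(shader);

    program = glCreateProgramObjectARB();
    glAttachObjectARB(program, shader);
    glLinkProgramARB(program);
    return program;
}

/* At draw time: glUseProgramObjectARB(build_program()); */
[/code]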

i'm not talking about having all the new stuff, that is no big problem. i'm talking about dropping all the old stuff. how many pixel-storage thingies are still needed? how many still use the index buffer, and other such things?

if you draw the line at what you normally need with the features of the current gl (with all sorts of extensions), you could strip off a lot of old things that are effectively deprecated now.

I'm very skeptical about dropping support for anything in the OpenGL core.

Having a stable core has been one of the things that has kept OpenGL viable and attractive.

Keeping support for the old stuff costs less and less with each passing year in terms of hardware and driver support, and applications written a decade ago still "just work".

If we feel that OpenGL really needs a radical re-design, then it probably deserves a new name too. Calling it OpenGL 2.0 implies that OpenGL 1.x should die, and I don't think the case has been made for that at all.

Originally posted by cass:
[b]I'm very skeptical about dropping support for anything in the OpenGL core.

Having a stable core has been one of the things that has kept OpenGL viable and attractive.

Keeping support for the old stuff costs less and less with each passing year in terms of hardware and driver support, and applications written a decade ago still "just work".

If we feel that OpenGL really needs a radical re-design, then it probably deserves a new name too. Calling it OpenGL 2.0 implies that OpenGL 1.x should die, and I don't think the case has been made for that at all.[/b]
That's what OpenGL 2.0 "pure" was for. The original proposals suggested a full-fledged version which supported both 1.x and 2.0 functionality, and a slimmed-down version with only 2.0 functionality. That was supposed to make the transition from legacy code to OpenGL 2.0 code smooth.