
View Full Version : OpenGL 3 Updates




skynet
08-08-2008, 06:28 PM
@Leadwerks:
Then, why don't you just start already? You could have written your own fully-customizable-jaw-dropping-rendering algorithm ages ago.

bobvodka
08-08-2008, 06:52 PM
ATI has a SDK for OpenGL? Wow when did that happen?

Some time ago, it's just not as full as NV's one for obvious reasons. It's all part of their general SDK.

Brianj
08-08-2008, 10:50 PM
One of the first OpenGL 3 books. http://www.amazon.com/Beginning-OpenGL-2...18260451&sr=1-1 (http://www.amazon.com/Beginning-OpenGL-2E-Luke-Benstead/dp/159863528X/ref=sr_1_1?ie=UTF8&s=books&qid=1218260451&sr=1-1) . The wait for the book is going to be as hard as the wait for the API.

Korval
08-08-2008, 10:59 PM
Whoa. The fact that this book exists, enough so to have a preorder, suggests that the ARB might actually have their stuff together this time. We could even see a real SDK and everything.

Not that I'm holding my breath...

Anyway, while book taglines aren't always truthful: "Covering OpenGL 3.0, the new and more efficient API that provides Direct3D 10 level graphics and is platform independent."

That seems to support the hypothesis that Longs Peak has been properly folded into 3.0 proper.

lrucc
08-09-2008, 06:50 AM
Will OpenGL 3.0 use the C89 or C99 standard? I know they are compatible, but I just wanted to know.

bobvodka
08-09-2008, 06:50 AM
hmmmm, afaik Luke Benstead aka Kazade isn't linked to the ARB in any way, so I'm intrigued as to how he would have got access to the spec in order to get a book out by Feb of next year..

V-man
08-09-2008, 07:21 AM
ATI has a SDK for OpenGL? Wow when did that happen?

Some time ago, it's just not as full as NV's one for obvious reasons. It's all part of their general SDK.

What is their latest OpenGL demo? It seems like they haven't written anything new in 3 years.

Leadwerks
08-09-2008, 12:34 PM
Because I don't know the specifics of Larrabee yet. It has some special hardware for texture caching and lookups, and I wouldn't be surprised if they did something similar for vertex buffers.

But when I have time I'll talk to Intel and see if I can get an NDA for it.

returnofjdub
08-09-2008, 05:34 PM
I've scanned this thread so far and nobody's mentioned this so I guess I will.

Has anyone seen Carmack's Quakecon 08 keynote? Toward the end someone asked him about the state of OpenGL 3.0, and basically he said that they scrapped the whole idea of doing a full-on revamp of the API, because a lot of the Khronos ARB member companies didn't want to throw away all the legacy they'd invested in the old API to bring up an entirely new one. If, in fact, this is true, and I'm sure it is, as Carmack is a good source, am I the only one who feels more than a little bit hosed by all this? Promising a clean, modern API, remaining bone chillingly silent for a year, then "oh wait, just kidding"? Unless Carmack was wrong or lying, at this point I'm sure that whatever the ARB intends to deliver will do nothing but massively disappoint. Oh well, 5 days to find out.

bobvodka
08-09-2008, 06:32 PM
I don't think Carmack would outright lie; however, there is a chance he could be mistaken. Although I had already passed on a report earlier in this thread that what we knew as GL3.0 is not what GL3.0 is now.

My problem with his statement is that OpenGL, as it stands, has moved away from the hardware, so while they may have legacy code, that code is taking more and more effort to write as well as costing performance. So, unless they are prepared to take that hit and effectively watch D3D ride off into the sunset performance-wise, I don't see it as logical.

That said I'm assuming logic from the ARB members which could be a flaw.

If we don't get something largely new from the ARB then they have effectively let OpenGL sit stagnant for two years while D3D continues to improve (I went to the XNA Game Fest in London on Wednesday and attended a talk on D3D11, which looks to be a good superset of D3D10, for the most part works on D3D10 hardware, and exposes a tessellator for D3D11 hardware). That situation pretty much rings the death knell for OpenGL as a viable game platform on Windows, because who wants to work with an API whose ARB dithers for 2 years without doing anything? With that, MS won't have killed OpenGL; the ARB will have.

Of course, all that said, I'm not convinced Carmack is correct, but I guess in a few days we'll either be celebrating or burning effigies of the ARB members...

returnofjdub
08-09-2008, 08:01 PM
He said he didn't agree with the direction of maintaining the current API and building upon it either, he was just reporting what the current state of GL 3.0 is.

I honestly hope he is mistaken. One thing he did mention, however, is the promising prospect of OpenGL ES. ES is, in a lot of ways, what many of us are looking for in GL 3.0 (a cleaned up, modernized version of the API). ES was used as a basis for the PS3 graphics API, so perhaps it's possible for ES to be the basis for a new desktop API. Unlike the desktop OpenGL standard, ES is royalty free, so if GL 3.0 genuinely disappoints, I think an open source project implementing OpenGL ES 2.0 for maybe Intel GPUs on Linux or something could be a cool direction to take. Such a project would be Herculean in nature, but if we could do such a thing, and implementations could eventually make it over to proprietary drivers, we could have an open reference implementation for a good, clean, modern, open rendering API.

(yes, I'm a dreamer)

bobvodka
08-09-2008, 08:17 PM
From what I've seen of it ES still has too much legacy with GL2.x and the way it does things; with the evolution of the mobile platform even ES might need a GL3.0 style revamp sooner rather than later to stay in step with the evolution of the hardware (which was a major point of the GL3.0 rewrite, the fact the programming model was drifting more and more from the hardware).

Btw, afaik the PS3 graphics API is an evolution of the PS2 API and while OpenGL|ES is on the platform no one really uses it as the native API is faster.

Korval
08-09-2008, 08:27 PM
he said that they scrapped the whole idea of doing a full-on revamp of the API, because a lot of the Khronos ARB member companies didn't want to throw away all the legacy they'd invested in the old API to bring up an entirely new one.

The fundamental problem with this is that the whole GL 3.0 effort was started because ATi and nVidia wanted a new API. It makes no sense for them to renege on it.

If this is true, then it means that the ARB will have failed yet again to accomplish something. If all they've been doing for the last year is taking extensions that already exist and promoting them to the GL core, I will be very put out.

returnofjdub
08-09-2008, 10:12 PM
The fundamental problem with this is that the whole GL 3.0 effort was started because ATi and nVidia wanted a new API. It makes no sense for them to renege on it.


In the keynote he didn't name names. He said "there was an effort" to modernize the API, and "some parties" involved didn't want to get rid of the codebases they'd already invested so much in at this point. It could be that the ATI/nVidia reps lost the battle to some other clique in the ARB, I dunno... but yeah, he didn't name names.

Korval
08-10-2008, 01:05 AM
It could be that the ATI/nVidia reps lost the battle to some other clique in the ARB, I dunno... but yeah, he didn't name names.

That doesn't make any form of sense. Intel has no pre-existing OpenGL codebase (well, not one of merit); and that codebase would have to be rewritten from square one anyway for Larrabee. And there's nobody else with enough power in the ARB to actually deny what nVidia and ATi want...

Except Apple.

Who made their OpenGL implementation the way Microsoft wrote their D3D one: IHVs write to an internal API and Apple fills in the blanks. And Apple has a lot of code invested in OpenGL 2.x; after all, it is the underlying rendering API for all of OSX's rendering. And Apple would have the clout to force through their decision; OSX is the only platform where OpenGL has some real meaning.

If this comes to pass, I blame Apple. They are the only ones to gain from preserving GL 2.x. And I will never buy another Apple product again (not that it means much; I've never bought Apple products).

And speaking of Apple, Blizzard would be supportive of this move. They've got StarCraft 2 coming up, likely for an early 2009 release. And they certainly wouldn't want to have to take their GL 2.x path and rewrite it, not at this stage. And to be frank, Blizzard's opinion matters infinitely more than Id's at this stage; if SC2 is as successful as SC1, it will be in widespread play by millions for 10 years. Id can't boast an engine (or a game, for that matter) that lasts even 5 years. Indeed, if SC2 isn't on board with a GL 3.0 rewrite, IHVs will have no choice but to support GL 2.x for, well, pretty much ever. SC2 has the potential to last a long time.

Rob Barris, if this comes to pass, you've got some splainin' to do.

Now, that all being said, there is one, precisely one way that this could be true and everything be OK.

GL 3.0 was said to have some kind of minimal backwards-compatibility API. That you could allocate certain GL 3 objects and wrap them in a GL 2.x object to use them with GL 2.x commands. The "new" GL 3.0 could be a dramatic extension of that idea. That is, it provides the new object model and possibly the new context. It allows you to create lower-level objects, but still lets you wrap them in 2.x-style objects.

Anything less is entirely unacceptable.

knackered
08-10-2008, 01:21 AM
i don't see how this could be economically the right thing to do. It would be cheaper to knock up an API that maps well onto D3D10 rather than continue to hack around with a decades-old, bloated API like the current GL. They have sort-of-working GL2.x drivers now, so just freeze them. Nothing's wasted.
Nah, it wouldn't make sense for what carmack says to be true.

Korval
08-10-2008, 01:55 AM
Nah, it wouldn't make sense for what carmack says to be true.

Unfortunately, the ARB and "what makes sense" are only nodding acquaintances. It certainly doesn't make sense, but that doesn't mean it isn't true.

I don't accept it, but I certainly accept the possibility of it.

Mars_999
08-10-2008, 02:44 AM
I am thinking they are correct, and that GL3.0 will not have a new OO API like everyone was hoping for; after looking at the new book that was posted, it doesn't cover anything about the new API, from the looks of it. Oh well, as long as DX10 extensions are in GL3.0 as a baseline I am happy; GL4 can be the rewrite we all want. ;)

Jan
08-10-2008, 03:11 AM
"GL4 can be the rewrite we all want"

No, GL2 was meant to be that rewrite; if they don't get it right this time, i'm going to switch to D3D10 for good. I'm just getting used to Vista. OpenGL was a mess 5 years ago, and i have put up with it as long as i could. If GL3 is not the promised rewrite and drivers are not out soon, the ARB can shoot itself.

If Apple indeed interfered in such a way with GL3, they are stupid, because they would only profit from a clean new basis. No one forces them to convert all their existing stuff to GL3 immediately.

About Blizzard: Rewriting a game renderer to use GL3 instead of GL2 would take maybe a month for one man, make it two. The time saved in the future, because of more stable and faster drivers and a clean API that one doesn't need to have known for 5 years to learn most of the dirty corners, would allow them to write new code and maintain old code much better and at much lower cost.

I don't see any reason why anyone would want to stay with the old API; there are no benefits in the long run.

Jan.

knackered
08-10-2008, 03:20 AM
meanwhile, all us other poor buggers with no choice but to stick with the professional-level features only offered by GL are well and truly shafted.
Nice one. I am so looking forward to the next 5 years of graphics development - not.
It seems it might be more productive to push Microsoft to add stereo and genlocking support to d3d11.

Heron
08-10-2008, 05:40 AM
Please give us OGL3, we have been waiting for it a long, long, long time.

Chris Lux
08-10-2008, 06:09 AM
is there some valid reference to carmack's response?

i watched all the keynote videos from the 2008 quakecon and there was no question about OpenGL 3.0 that i could find.

V-man
08-10-2008, 09:39 AM
3 days to go!
http://www.khronos.org/news/events/detail/siggraph_2008_los_angeles_california

Rob Barris
08-10-2008, 11:54 AM
I'm fourth in line at the BOF - be happy to chat there in person or afterwards online.

http://www.khronos.org/news/events/detail/siggraph_2008_los_angeles_california/

I've been reading back through the thread and doing a rough tabulation of the desires voiced, about things people want to be able to do with their software using OpenGL; it's probably no surprise that you can't please all of the people all of the time.

edit, I see V-man posted the same link, forgive the redundancy :|

knackered
08-10-2008, 11:54 AM
the list of things to be covered in GL3 is too complete for just a shed load of extension-to-core promotions. I remain optimistic this is the re-write we want.

Brolingstanz
08-10-2008, 02:24 PM
When I read "GLSL 1.3 new features New Shader language definition", I darn near soiled my linens.

Jan
08-10-2008, 02:35 PM
I cannot find any video, where Carmack answers questions, either. Can anyone post a link?

Rob Barris
08-10-2008, 03:14 PM
Considering SIGGRAPH opens this week, I'd suspect you will have more concrete information tomorrow morning than you might find by googling.

Jan
08-10-2008, 03:42 PM
Problem 1) I'm not near any PC the next week (worst timing ever, first "vacation" in years)
Problem 2) It's not only his comments about gl3 that i am interested in.
Problem 3) If someone claims that Carmack said such a thing, i want proof for it, maybe his words were simply interpreted wrong by the one who posted about it.

Jan.

bobvodka
08-10-2008, 05:00 PM
While I don't have the link handy, and i'm bowing out to sleep in a moment, I have 'seen' bits of a video of the talk which included questions. Below the video was a summary of the talk and the questions he answered; no mention of OpenGL3.0 in that summary at all.

Korval
08-10-2008, 06:48 PM
Until the random guy who appeared yesterday with this information shows up with a link to something, I'm calling shenanigans on the whole thing.

returnofjdub
08-10-2008, 11:34 PM
http://www.enemyterritory.tv/audio/data/qcon2008_et.tv_keynote.mp3

2hr 12min 15sec into it, audience member asks about the current state of OpenGL, Carmack replies:

"Yeah, so that's a little bit sad, the way that whole situation has gone. I understand the way it all happened and I can't be too upset about it, but there was a move going to modernize the OpenGL API. It really has a lot of cruft on it right now. I'm quite frank in saying that the DX9+ class Microsoft APIs are a lot better thought out and more consistent than the current state of OpenGL, which has selectors and all these extensions and things that just aren't really clean and nice, and there was a move to go ahead and bring it up to par... [inaudible, loud squeaking door] some directions were perhaps mis-steps, but some things were looking pretty good on there. But, as I understand, what happened was some of the companies on the ARB made the justifiable statement that the major codebases for CAD developers, the 10 million line codebases, the 20 years of ancient history, they're never going to convert to a radically new API. Making a new OpenGL that's not remotely backwards compatible just isn't going to fly, isn't worth doing. And, I disagree with that stance, but not enough to make a scene about it. I do think that there is a need for a non-Microsoft, open, cross platform API that's modern-- we can still do everything we need to with the current OpenGL, it's just kind of a mess. That's not the argument you want to be making. You want to say 'We're using this because it's clean and elegant, and it's better than the alternative,' and that's not really the case..."

He goes on to talk about the benefit of extensions which current OpenGL has over D3D in terms of experimenting with cutting edge hardware. He also talks briefly about OpenGL ES.

*shrug* don't shoot the messenger.

CrazyButcher
08-11-2008, 01:18 AM
that really sucks... it's particularly sad as imo those CAD guys could just as well live with current GL, and not worry about the future.
Also what about wrapping old GL to a new api, maybe not super speedy, but still, why block the future... that makes no sense and is completely shortsighted.
It's not like with gl3.0 the old GL would be dead and drivers removed within short time.

sadly this does explain the "year in silence" very well. The vendors wouldn't want to state in public how their "big budget" industry partners blocked the future, when in the age of console gaming and Windows dominance, the "game/effect"-oriented use of GL doesn't have that kind of gravity/relevance.

Roderic (Ingenu)
08-11-2008, 01:57 AM
Pretty much what I heard too, a very long while ago.
We (video games makers) are not the center of the world, and so OpenGL won't change to our liking just because we want it to...

It's pretty clear that OpenGL 3.0 will not be what was written in the pipeline newsletters.

Don't we have a cleaned-up OpenGL API somewhere already ? ;)

bobvodka
08-11-2008, 02:50 AM
Well, if this all comes to pass, then I'm finally off. Having used D3D10 and experienced what a clean API can do, I don't want to go back to the mess which is OpenGL. GL3.0 with a nice clean API would have been tempting, but if it comes to pass that the GL2.x-style API is hanging around then I'll be seeing you.

nosmileface
08-11-2008, 06:07 AM
http://opengl.org/registry/doc/glspec30.20080811.pdf
:)

PkK
08-11-2008, 06:16 AM
At first sight it looks a bit messy. All the old stuff is still in there, including glBegin() - glEnd(), though some of it is called "deprecated"; instead of the 16-bit half-precision floating point format I expected, there now are 16-bit, 11-bit and 10-bit formats.

This doesn't really look different from earlier revisions. Some extensions have been promoted to core, some tokens renamed, I think this should have been called OpenGL 2.2 instead.

PkK
08-11-2008, 06:20 AM
We (video games makers) are not the center of the world, and so OpenGL won't change to our liking just because we want it to...


You are. The major 3D graphics API is written for your needs, not for those of the CAD vendors.
However, the ARB just doesn't realize the importance of games. When there are no games for a 3D graphics API, hardware vendors put less effort into drivers. The API dies. Even CAD vendors will move on to the API better supported by hardware vendors. Since they want their application to work, they might even start by using a GL->D3D wrapper, so they can use their GL codebase with the working drivers (which probably will no longer be there for GL in the foreseeable future).

Philipp

knackered
08-11-2008, 06:24 AM
august the 11th 2008. The day Microsoft won the 3d API battle.
How f ucking depressing.
I feel like I've been left to clean up after a party that got out of control. Scraping vomit off the dvd player, while everyone else goes out for Starbucks and blow jobs.
Enjoy your blow jobs guys, I am literally stuck here with no other option.
And I do CAD.

MZ
08-11-2008, 06:29 AM
I think this should have been called OpenGL 2.2 instead.

Just like GL 1.6 was renamed to 2.0, and the real 2.0 was scrapped.

Eddy Luten
08-11-2008, 06:51 AM
Where the hell are my objects (http://scriptionary.com/blog/2008/05/15/why-opengl-30-is-important/)?

This is crap, nothing like what was promised.

For those who don't feel like digging through the spec, OpenGL 3.0 equals:

- API support for the new texture lookup, texture format, and integer and unsigned integer capabilities of the OpenGL Shading Language 1.30 specification (GL_EXT_gpu_shader4).
- Conditional rendering (GL_NV_conditional_render).
- Fine control over mapping buffer subranges into client space and flushing modified data (see the sketch below).
- Floating-point color and depth internal formats for textures and renderbuffers (GL_ARB_color_buffer_float, GL_NV_depth_buffer_float, GL_ARB_texture_float, GL_EXT_packed_float, and GL_EXT_texture_shared_exponent).
- Framebuffer objects (GL_EXT_framebuffer_object).
- Half-float (16-bit) vertex array and pixel data formats (GL_NV_half_float and GL_ARB_half_float_pixel).
- Multisample stretch blit functionality (GL_EXT_framebuffer_multisample and GL_EXT_framebuffer_blit).
- Non-normalized integer color internal formats for textures and renderbuffers (GL_EXT_texture_integer).
- One- and two-dimensional layered texture targets (GL_EXT_texture_array).
- Packed depth/stencil internal formats for combined depth+stencil textures and renderbuffers (GL_EXT_packed_depth_stencil).
- Per-color-attachment blend enables and color writemasks (GL_EXT_draw_buffers2).
- RGTC-specific internal compressed formats (GL_EXT_texture_compression_rgtc).
- Single- and double-channel (R and RG) internal formats for textures and renderbuffers.
- Transform feedback (GL_EXT_transform_feedback).
- Vertex array objects (GL_APPLE_vertex_array_object).
- sRGB framebuffer mode (GL_EXT_framebuffer_sRGB).

Plus deprecation of older features.
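(For anyone wondering what the "fine control over mapping buffer subranges" bullet actually looks like in code, here is a minimal sketch of the new glMapBufferRange entry point; the buffer object 'vbo', sizes and offsets are made up for illustration.)

    /* assuming a GL 3.0 context and a previously generated buffer object 'vbo' */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, 1024 * sizeof(float), NULL, GL_DYNAMIC_DRAW);

    /* Map only bytes 256..511 for writing, discarding the previous contents of
       just that range so the driver doesn't have to stall on in-flight draws. */
    float *ptr = (float *) glMapBufferRange(GL_ARRAY_BUFFER,
                                            256,   /* offset in bytes */
                                            256,   /* length in bytes */
                                            GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
    if (ptr) {
        for (int i = 0; i < 64; ++i)
            ptr[i] = 0.0f;             /* fill the mapped window */
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }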

Zengar
08-11-2008, 06:59 AM
This is an outrage... I am really pissed... two years of hopes so easily destroyed :( I only hope they stop mocking us and rename the spec to GL 2.2, as it should be...

ARB, you made a big mistake...

EDIT: ok, this was a first reaction; as I see, they did change lots of stuff and I have to read the spec to understand the deprecation model and profiles and such. Maybe it is all not so bad after all... but dropping the objects sucks

EDIT EDIT: ok, it is bad...

bobvodka
08-11-2008, 07:05 AM
august the 11th 2008. The day Microsoft won the 3d API battle.
How f ucking depressing.
I feel like I've been left to clean up after a party that got out of control. Scraping vomit off the dvd player, while everyone else goes out for Starbucks and blow jobs.
Enjoy your blow jobs guys, I am literally stuck here with no other option.
And I do CAD.

sorry dude, I really am :(

dletozeun
08-11-2008, 07:13 AM
I am looking for the GL_ARB_this_is_a_f**king_joke extension promotion in their specification... can't find it...

Hampel
08-11-2008, 07:16 AM
oh, must have slept half a year: must be April 1st...

pudman
08-11-2008, 07:21 AM
So it takes a whole year (of silence) to incorporate extensions into the core. Wow.

bobvodka
08-11-2008, 07:25 AM
Well, who can blame them for keeping silent when the end result was this crap? Hell, _I_ wouldn't have said anything either...

cass
08-11-2008, 07:32 AM
I think the new deprecation model is a critically important move in the right direction. If the ARB will rev the API more than once over the next year to get rid of more cruft, fix selectors, and introduce a new object model, that would be a really good thing.

We are approaching the end of the road with the conventional API models now though. On at least one next generation platform, software developers will be able to write their own GL and make it as small and efficient as they like. I think this is an incredibly exciting and empowering change for graphics software developers.

Carl Jokl
08-11-2008, 07:36 AM
I am sorry. I suspect that this is my fault.

skynet
08-11-2008, 07:42 AM
They shot themselves in the foot by layering all that 3.0 stuff on top of 2.1. How is that going to make a driver developer's life easier?!?

dor00
08-11-2008, 07:49 AM
Where the hell are my objects (http://scriptionary.com/blog/2008/05/15/why-opengl-30-is-important/)?

This is crap, nothing like what was promised.

For those who don't feel like digging through the spec, OpenGL 3.0 equals:

- API support for the new texture lookup, texture format, and integer and unsigned integer capabilities of the OpenGL Shading Language 1.30 specification (GL_EXT_gpu_shader4).
- Conditional rendering (GL_NV_conditional_render).
- Fine control over mapping buffer subranges into client space and flushing modified data.
- Floating-point color and depth internal formats for textures and renderbuffers (GL_ARB_color_buffer_float, GL_NV_depth_buffer_float, GL_ARB_texture_float, GL_EXT_packed_float, and GL_EXT_texture_shared_exponent).
- Framebuffer objects (GL_EXT_framebuffer_object).
- Half-float (16-bit) vertex array and pixel data formats (GL_NV_half_float and GL_ARB_half_float_pixel).
- Multisample stretch blit functionality (GL_EXT_framebuffer_multisample and GL_EXT_framebuffer_blit).
- Non-normalized integer color internal formats for textures and renderbuffers (GL_EXT_texture_integer).
- One- and two-dimensional layered texture targets (GL_EXT_texture_array).
- Packed depth/stencil internal formats for combined depth+stencil textures and renderbuffers (GL_EXT_packed_depth_stencil).
- Per-color-attachment blend enables and color writemasks (GL_EXT_draw_buffers2).
- RGTC-specific internal compressed formats (GL_EXT_texture_compression_rgtc).
- Single- and double-channel (R and RG) internal formats for textures and renderbuffers.
- Transform feedback (GL_EXT_transform_feedback).
- Vertex array objects (GL_APPLE_vertex_array_object).
- sRGB framebuffer mode (GL_EXT_framebuffer_sRGB).

Plus deprecation of older features.

I am reading the document also... i have no words, thanks for f ucking up my day.

Worst revision ever. Why is it called 3.0??????

MZ
08-11-2008, 07:51 AM
http://opengl.org/registry/doc/glspec30.20080811.pdf
:(
How about a link to glsl 1.3 spec too?

skynet
08-11-2008, 07:54 AM
It's there, just look:
http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.30.08.pdf

Rob Barris
08-11-2008, 07:57 AM
The 1.3 language spec is up too.

http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.30.08.pdf

As are the new ARB extensions for 2.x, and a few for 3.x (representing capabilities that did not yet make it into core spec).

http://www.opengl.org/registry/
(starting around #47)

pudman
08-11-2008, 08:10 AM
...and introduce a new object model, that would be a really good thing.

Man, how many years have they been talking about the new object model?

I can see how important the deprecation model will be, but OpenGL3 without a hint of an actual good use of this is lame. So I was slightly inaccurate: We got extensions moved into the core and a bunch of things deprecated. This will only be useful when they finally come out with the "new" features they've been talking about for years.

This really should have been GL2.2 with GL3 breaking compatibility (sorry, "removing deprecated features"). I guess we'll wait another year for some more quality inaction.

Roderic (Ingenu)
08-11-2008, 08:15 AM
OpenGL is DEAD !
Long live OpenGL|ES !

Ok, now that we all know OpenGL 3.0 is just OpenGL|ES 2.0 with tweaks, could we have a CLEAN spec for GL3.0 ALONE ?

(Please get rid of all the junk of now deprecated features so we have an easy to read, straight to the point specification.)

What about our 'querying' mechanism to know what hardware formats are supported ? (Did I miss it or is it lacking ?)

Eddy Luten
08-11-2008, 08:22 AM
I guess we'll wait another year for some more quality inaction.

You think? Don't make me laugh. I've been waiting for this moment for a long time so that I can push primary development into a certain direction. That direction, as of today, will primarily be Direct3D 10.x for rendering; I'm not even considering OpenGL as an option at this moment, like so many companies (browse this thread).

Maybe it sounds disgruntled and that's because I am. But after promise after promise and fruitless talks with Khronos' online marketing guy, I am very sick of waiting. And that other year, pudman, will not be waited for since it's not financially feasible (plus I don't think my nerves could take it) ;)

HAL-10K
08-11-2008, 08:22 AM
I think the new deprecation model is a critically important move in the right direction. If the ARB will rev the API more than once over the next year to get rid of more cruft, fix selectors, and introduce a new object model, that would be a really good thing.

The competition (D3D) already delivered this six(!) years ago.
For our next project, we will most likely drop MacOS support, because this delivered yet another reason to.


We are approaching the end of the road with the conventional API models now though. On at least one next generation platform, software developers will be able to write their own GL and make it as small and efficient as they like. I think this is an incredibly exciting and empowering change for graphics software developers.

Maybe for a company with a high seven-digit revenue.

The resources we spend on pretty basic CPU multi-threading alone are borderline. How in hell should we be able to write our own vector processing interface then?!

Chris Lux
08-11-2008, 08:23 AM
this can not be happening again.... F*CK

noncopyable
08-11-2008, 08:24 AM
Heh, two nice pieces of news in a day.
Got a call from the police which will most likely end up with me being jailed for a few years for doing nothing, and a few hours later, this funny spec.
They may look different, but the moral and the origin are ridiculously the same.

Life's got a sense of humour, no? :)

knackered
08-11-2008, 08:30 AM
no binary blobs, no index offsets..... I think it's fair to say the ARB don't give a rat's arse about the OpenGL community as a whole - just the small number of incompetent companies that struck it lucky in the mid-90s and decorated the CAD market with flecks of unmaintainable vomit masquerading as 'design tools'.

Hampel
08-11-2008, 08:31 AM
Hi noncopyable! Could you talk about this police call? Seems to be more interesting than the OGL3.0 thingy...

Mark Shaxted
08-11-2008, 08:34 AM
Hi noncopyable! Could you talk about this police call? Seems to be more interesting than the OGL3.0 thingy...

OpenAxeMurderer?
GLPsycho?

But anyway... Where are my volatile textures? Doh!

cass
08-11-2008, 08:35 AM
Maybe for a company with a high seven digit revenue.

The resources we spend on pretty basic CPU multi-threading alone are border-line. How in hell should we be able to write our own vector processing interface then?!

The key difference is that you currently require your OpenGL driver to come from one company as a gift at the time of their choosing with the features of their choosing. They in turn require all Khronos OpenGL ARB members to agree on what the core spec includes, though they can extend it at some non-zero cost to themselves for some unspecified gain.

Not everybody will be able to make their own GL implementation for obvious technical and fiscal reasons, but opening up the market to 3rd party implementations is absolutely better than your options today. If this doesn't feel empowering as a software developer, there's no reasoning with you. ;)

knackered
08-11-2008, 08:43 AM
meanwhile back in the real world....I'll let you know how my new mouse driver's coming along, as I'm so sick of waiting for the next version of DirectInput. I feel so empowered as a developer - or is that overworked?....I always get the two mixed up.
Look on the bright side, we'd be back to the situation in the mid-90s again, a new graphics API emerging every week, totally incompatible with one another. Do I get to wear baggy jeans again?

elFarto
08-11-2008, 08:52 AM
Well, that was a perfectly good waste of a year.

Regards
elFarto

dv
08-11-2008, 08:54 AM
Great. Khronos just killed OpenGL. Microsoft will win. Eventually, nvidia and ati will drop OpenGL support altogether, and hardware-accelerated 3D will only be possible on consoles and Windows.

What a sad day.

V-man
08-11-2008, 08:55 AM
I guess we are fucked?

Rob Barris
08-11-2008, 08:56 AM
As work is already underway for the next release, this is exactly the right time to bring up functionality that you want either as an OpenGL 3.0 extension or for the next core release. (Index buffer offsetting is on that list already)

Per the deprecation model, the next core release can eliminate many or all of the features marked deprecated in the 3.0 spec; along with that move will come the elimination of the associated state tracking for those features within the driver.

With regards to vector processing interfaces for generalized compute problems, Khronos is putting effort into OpenCL.

With regards to scheduling for the next release, it's targeted for less than 12 months from now. I anticipate some new GL3 extensions to appear between now and then also.

http://www.marketwatch.com/news/story/kh...95%7D&dist=hppr (http://www.marketwatch.com/news/story/khronos-releases-opengl-30-specifications/story.aspx?guid=%7BC2A3B5D7-CB9A-4898-BAF9-178DD8CFD695%7D&dist=hppr)

cass
08-11-2008, 08:59 AM
meanwhile back in the real world....I'll let you know how my new mouse driver's coming along, as I'm so sick of waiting for the next version of DirectInput. I feel so empowered as a developer - or is that overworked?....I always get the two mixed up.
Look on the bright side, we'd be back to the situation in the mid-90s again, a new graphics API emerging every week, totally incompatible with one another. Do I get to wear baggy jeans again?

Nevermind, you're right. We should pin all our hopes on the ARB and hardware vendors. They always come through.

bobvodka
08-11-2008, 09:01 AM
As work is already underway for the next release, this is exactly the right time to bring up functionality that you want either as an OpenGL 3.0 extension or for the next core release. (Index buffer offsetting is on that list already)

How about an Object Model which doesn't rely on bind to change, only to use... if you want I know some cool slides and newsletters you can look at for ideas...

niko
08-11-2008, 09:02 AM
With regards to scheduling for the next release, it's targeted for less than 12 months from now

Great, just "one" more year. V-man, we are fucked.

Eddy Luten
08-11-2008, 09:02 AM
With regards to scheduling for the next release, it's targeted for less than 12 months from now.

Deja vu. Rob, what about the object model?

tsuraan
08-11-2008, 09:07 AM
As work is already underway for the next release, this is exactly the right time to bring up functionality that you want either as an OpenGL 3.0 extension or for the next core release. (Index buffer offsetting is on that list already)

...snip...

With regards to scheduling for the next release, it's targeted for less than 12 months from now. I anticipate some new GL3 extensions to appear between now and then also.

So we just wait another "year" to see if you deign to throw us a bone? I don't think you understand the situation here. You have killed OpenGL. You've killed 3D support on all non-Windows PC platforms. Nobody's going to wait around for you to fail us again in another year. OpenGL is dead now.

Jan
08-11-2008, 09:10 AM
"As work is already underway for the next release, this is exactly the right time to bring up functionality that you want either as an OpenGL 3.0 extension or for the next core release."

I do not want ANY functionality for OpenGL anymore; i am going to install Vista on my main PC next week.

"With regards to scheduling for the next release, it's targeted for less than 12 months from now."

"Haha. Oh wait, you meant that seriously, let me laugh even harder!"

No really, stick it where-ever you want, i don't care, whether it takes 2 days or two years for the next release of whatever, i'm done with OpenGL.

"Khronos Releases OpenGL 3.0 Specifications to Support Latest Generations of Programmable Graphics Hardware"

Well, isn't that a joke in itself?! "Latest hardware". How shall we program the hardware of today or tomorrow with an API from the stone age?

And i thought OpenGL 2.0 was a disappointment.

Small tip for everybody else: when you disable Vista's stupid verification question, that OS becomes quite usable.


Jan.


PS: Take a look in the spec at pages 457 to 459.
PPS: I am glad not to attend Siggraph for this [censored].

Rob Barris
08-11-2008, 09:10 AM
Tsuraan, can you enumerate what functionality you're looking for that isn't currently covered ?

Roderic (Ingenu)
08-11-2008, 09:15 AM
M. Rob Barris, what about having a simpler, GL3.0 only spec ?
(so that we only have things of relevance for people wanting to use the *new* thing alone [ie a forward-compatible context])

edit:
and a clean GLSL 1.3 spec to go with it. :)

tsuraan
08-11-2008, 09:22 AM
Tsuraan, can you enumerate what functionality you're looking for that isn't currently covered ?


I hardly do any graphics programming anymore. I've been following the thread for the past year, as I'm sure you have been, and I've been looking forward to a general cleanup of the API. I've been looking forward to seeing progress, to seeing that there really was some reason for the big silence of the past year. Instead, I'm seeing that the reason for the silence was that there really was nothing being done. We don't have anything new, no object model, just some extensions rolled into the core. That's fine and well, but how does that justify a year's worth of secrecy? What leadership does OpenGL have? Who trusts this leadership anymore? Why will anybody use OpenGL in the future, when Microsoft has a more open and community-oriented approach to their API, and it seems unanimous that their API is cleaner and their documentation is better?

dv
08-11-2008, 09:31 AM
Tsuraan, can you enumerate what functionality you're looking for that isn't currently covered ?



We were promised that GL3 would be a clean break, a fully cleaned up API, with NO components of earlier GL versions. Functionality isn't really the issue - the API itself is.

What we got instead is yet more stuff added to the current - and overloaded - GL API. This really isn't GL3, it's GL2.2. The ARB screwed us already with "GL2", which is just GL1.6. All the Khronos slides about the totally new API, immutable objects, etc. were lies.

Jan
08-11-2008, 09:33 AM
... just as the cake ...

bobvodka
08-11-2008, 09:42 AM
As work is already underway for the next release, this is exactly the right time to bring up functionality that you want either as an OpenGL 3.0 extension or for the next core release. (Index buffer offsetting is on that list already)


(yay! replying to the same thing twice).

Honestly Rob, give us a reason to bother/care?
Because the last time we were asked we were presented with OpenGL3.0 which was accepted as the best thing to happen to OpenGL, got praise left, right and center and generally became a "DO WANT!" from the community.

Then, 2 years later, you turn around and drop all these plans because the CAD and related industries don't want it and have the cheek to turn around to us and say 'hey guys.. what do you want?'.

We told you, you gave us this.
If you want to know what people want in the future, go ask those CAD people who made us end up with this, because it's obvious at this point that what the general community wants doesn't matter and, in some regards, never did.

MZ
08-11-2008, 09:49 AM
We are approaching the end of the road with the conventional API models now though. On at least one next generation platform, software developers will be able to write their own GL and make it as small and efficient as they like.
I understand you're speaking about an unspecified near future, but I'd like to look at it from the point of view of the tools available today: CUDA/CTM. I think there is still a lot of GPU functionality not covered by either of these GPGPU APIs:

- blend, stencil and depth test/ops
- early/hierarchical depth culling
- optimization of the pixel pipeline by running its components at different frequencies: per-sample, per-pixel and per-quad.

Obviously, everything is doable in software, but I guess stuff like ROP units are 100% fixed-function hardware to this day for a reason. Unless something changes either in the HW (every GPU becomes Larrabee-like) or in the GPGPU APIs (exposing the stuff above), a home-made rasterisation API won't even be an option.

MZ
08-11-2008, 09:53 AM
By the way, I think we shouldn't focus our anger at the only ARB member who bothered to keep (limited, but better than none) contact with the community. The ARB is a group body, and certainly there are members there who wanted to deliver what we all hoped for. They have lost, and we are fucked. I'm being calm about this fiasco only because I already did my share of beating the dead horse back in GL 2.0 times.

santyhamer
08-11-2008, 09:53 AM
woot! OGL 3.0 is out!

http://www.opengl.org/registry/doc/glspec30.20080811.pdf
http://www.khronos.org/news/press/releas...generations_of/ (http://www.khronos.org/news/press/releases/khronos_releases_opengl_30_specifications_to_suppo rt_latest_generations_of/)

bobvodka
08-11-2008, 09:58 AM
You might want to read back a few pages santyhamer; I suggest getting your ragepants on...

Brianj
08-11-2008, 10:04 AM
Is it possible to get a list of the companies that voted against the rewrite for the 3.0 release? Just so the anger can be correctly focused, and for some other reasons...

glDan
08-11-2008, 10:07 AM
It is a pretty sad day. :sorrow:

It also means that the Khronos group should be ashamed at themselves.

There is no logical reason for naming it OpenGL 3.0 instead of 2.2/3/4/5/6/7/8/9.

Why do I get the feeling that the reason behind the huge delay was the infighting, and that in the end the only winner out of this whole process is the CAD industry, which apparently knows what is best for the future of OpenGL?

The Khronos group should resign after this folly.
They have not shown ANY leadership, and only ended up pissing off the entire openGL community.

Nice one guys!
:sick:

Eckos
08-11-2008, 10:17 AM
This is [censored] horse [censored]. We waited 2 years for nothing?

Thanks, CAD people, for screwing us over because you're so damn lazy about updating your piece-of-junk software. Man, it seems the only way to go about anything now is the Microsoft way, so Microsoft wins yet again. I see why no games are developed in OpenGL now.

When will we ever get a real, pure OpenGL 3.0 instead of this junk?

Hopefully I can find something on the web for getting Qt4 working with Direct3D 10, or i'mma just give up on GameDev altogether.

Khronos_webmaster
08-11-2008, 10:22 AM
There is a new thread started here as this one is getting a bit long and is not on topic anymore since the release:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=243193#Post243193

mfort
08-11-2008, 10:25 AM
Please calm down. I really understand CAD-like companies. Making a new API with no backward compatibility would break it all.
It would be easier for them to go to DX10 now instead of waiting for and implementing OpenGL3.0.

Microsoft can afford to make a clean cut between DX versions. They don't care about legacy SW. Games are written against a particular DX version.
Professional applications are different. They evolve at a much slower pace. If you are writing small apps or games, go to DX.

I have been developing OpenGL apps for more than 10 years; I started on SGI. We have a lot of OpenGL apps and it is absolutely impossible to port them all to a new API in a reasonable time.

What is very important in the whole 3.0 spec is chapter E.1.
This is what we have all been waiting for. This is what tells me which features we have to avoid in order to be ready for the API change.

I am pretty sure that in the near future the drivers will check for the WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB flag passed to wglCreateContextAttribsARB, and all of OpenGL will then run on different drivers (hopefully less buggy ones).
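(A minimal sketch of what that might look like, assuming the WGL_ARB_create_context extension is present, wglCreateContextAttribsARB has already been fetched via wglGetProcAddress, and hDC is a device context with a pixel format already set:)

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        WGL_CONTEXT_FLAGS_ARB,         WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0                               /* end of attribute list */
    };
    HGLRC rc = wglCreateContextAttribsARB(hDC, NULL /* no share context */, attribs);
    if (rc)
        wglMakeCurrent(hDC, rc);        /* deprecated features are now off-limits */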

Be reasonable. Miracles don't happen.

/Marek

LogicalError
08-11-2008, 10:32 AM
Well that's that then. Goodbye OpenGL.
I'm going to install vista so i can use d3d10.
Really feel sorry for you linux folks, you got the worst side of the deal.
sigh.

H. Guijt
08-11-2008, 10:35 AM
Tsuraan, can you enumerate what functionality you're looking for that isn't currently covered ?


I'm not Tsuraan, but I was under the impression that we would get an API that would be easier to use, easier to write drivers for, and easier to get good performance out of. For all these things it is CRITICAL that the size of the API be reduced.

This has not happened: we get the same old API plus a whole bunch of new extensions. Developing for OpenGL is therefore still as much a minefield as before, with availability of features, and performance thereof, essentially completely unpredictable from system to system. So the major failure, here, is not that new features are missing; it is that old features have too much unpredictability built in.

A secondary failure is the complete loss of trust incurred by Khronos, after delivering so little after so much time. Nothing you say will make any difference now; your credibility has evaporated. What does it matter if a shiny new API will be available in a year? We will all be programming DirectX 11 by then, and think of OpenGL as "that cool thing that could have made a difference if only it had been handled a bit better".

I haven't posted on this forum before (but I lurked for a very long time), but honestly, that the ARB couldn't see this coming and therefore has to ask questions like yours is beyond my ability to understand...

Zengar
08-11-2008, 10:42 AM
Amen!

dor00
08-11-2008, 10:51 AM
Well that's that then. Goodbye OpenGL.
I'm going to install vista so i can use d3d10.
Really feel sorry for you linux folks, you got the worst side of the deal.
sigh.

Indeed... an idea comes to mind right now: who wins here...

bobvodka
08-11-2008, 11:11 AM
We will all be programming DirectX 11 by then

I was at GameFest 2008 in London last Wednesday, and I've seen what's coming with DX11; even the DX10/10.1 cards get an improvement, with free-threaded functions for resource creation and secondary rendering threads to create display lists for playback. It looks nice and I'm considering skipping DX10 and programming to the DX11 model when the beta/preview SDK hits in November.

Yes, that's right, November this year.

HAL-10K
08-11-2008, 11:12 AM
Please calm down. I really understand CAD-like companies. Making a new API with no backward compatibility would break it all.

No. Why do you think that?
You could still use drivers written against the old API spec.
I think that IHVs would continue driver support for quite some time, even for future hardware.

Software that is developing too slowly to be able to switch to a new API is usually not subject to the fast pace of new hardware anyway.


It would be easier for them to go to DX10 now instead of waiting for and implementing OpenGL3.0.

Certainly no argument to fall behind even more after all these years.


If you are writing small apps or games, go to DX.

I hope Apple is celebrating destroying their chance to be a part of a multi-billion-dollar market!
(What were they expecting with such a minimalistic presence at the board?!)

H. Guijt
08-11-2008, 11:16 AM
Ok, I need to moderate my previous reply a little because I was a bit too harsh. Section E.1 indeed removes a lot of cruft, although maybe not enough.

I'd like to point out that the specification is still in dire need of a rewrite: it goes to considerable length to describe features (line stippling, just to name one example) that are already marked as deprecated. That's unhelpful, and no doubt part of the reason for the heat in this forum today.

And was I really wrong to expect OpenGL 3.0 to start with a list of zero extensions? I.e. just a core specification, completely cleaned of all the cruft, and without any need for any vendor-specific stuff?

On a more personal note, I notice that chapter E.1 could be restated as "every single OpenGL feature as used by H. Guijt in any of the work he did with OpenGL in the last four years." Apparently I have a knack for finding the API paths that are the worst possible choice. This is NOT a complaint that you are removing "all my functions" though, but rather a complaint that somehow none of the documentation that I read (the red book, mostly) told me that I was on the wrong path. Same as my complaint above, then: the specs guide you down the wrong paths.

A final question: will we be getting include files where we can turn deprecated features off completely using some #define? Because that would be extremely useful - assuming there is still anyone left using OpenGL after today, of course.

LogicalError
08-11-2008, 11:17 AM
If I had gone to Siggraph this year, I'd have gone to the OpenGL BOF just to BOO at them, and then left.

Timothy Farrar
08-11-2008, 11:36 AM
IMO, a few of you might want to take a few moments to get control of your emotions and look at this from a more objective and logical perspective.

GL3 is a better match for current hardware, and GL3 has much of the DX10-level functionality. Vendors are most likely on board with providing fully functional GL3 drivers (NVidia drivers in September, right? I don't know about the other vendors, but I'm sure this will be public soon). I think this is great news.

Functionality-wise there are some rather awesome improvements here (small GLSL sketch below):
* integer support
* interpolation control
* precision qualifiers for older hardware
* the invariant qualifier
* deprecated fixed function support
* array texture support
* texel fetch (integer coordinates)
* sRGB support!
* 10/11-bit floating point support
* conditional rendering
* fine control of buffer sub-regions

Personally I could care less what the API looks like as long as it is fast, it supports the hardware features, and all vendors actually provide drivers.
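(To put a few of those bullets into concrete terms, here is a minimal GLSL 1.30 fragment shader sketch showing integer support, interpolation control and integer-coordinate texel fetch; the names material_id, tex and out_color are made up for illustration.)

    #version 130

    flat in int material_id;       // integer varying; "flat" disables interpolation
    uniform sampler2D tex;
    out vec4 out_color;            // user-defined fragment output

    void main()
    {
        ivec2 texel = ivec2(gl_FragCoord.xy);     // integer texel coordinates
        vec4  c     = texelFetch(tex, texel, 0);  // unfiltered fetch at LOD 0
        out_color   = (material_id == 0) ? c : c.bgra;
    }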

Leadwerks
08-11-2008, 11:40 AM
I don't even understand what most of those features mean. Am I just stupid? I don't know, I only wrote an engine with OpenGL, maybe I am not that smart.

Conditional rendering is useless for instanced rendering. Since I am already using queries, and the NV extension was already available and I haven't bothered with it, I doubt I will.

I don't understand why CAD companies blocked this. If they don't want to improve their software, why not just keep using OpenGL 2.1?

I don't think developing for DX11 is smart when only about 15% of computers are running Vista. I'll just keep using OpenGL 2.1 and hope that Intel comes up with something more open-ended with Larrabee.
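(For anyone who hasn't looked at it, this is roughly how GL 3.0 conditional rendering pairs with an ordinary occlusion query; a minimal sketch, with drawBoundingBox() and drawExpensiveObject() as hypothetical helpers.)

    GLuint bboxQuery;
    glGenQueries(1, &bboxQuery);

    /* Pass 1: draw a cheap bounding volume while counting samples that pass. */
    glBeginQuery(GL_SAMPLES_PASSED, bboxQuery);
    drawBoundingBox();                  /* hypothetical helper */
    glEndQuery(GL_SAMPLES_PASSED);

    /* Pass 2: the GPU skips the expensive draw if the query result is zero,
       without the CPU ever reading the result back. */
    glBeginConditionalRender(bboxQuery, GL_QUERY_WAIT);
    drawExpensiveObject();              /* hypothetical helper */
    glEndConditionalRender();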

mfort
08-11-2008, 11:41 AM
Please calm down. I really understand CAD-like companies. Making a new API with no backward compatibility would break it all.

No. Why do you think that?
You could still use drivers written against the old API spec.
I think that IHVs would continue driver support for quite some time, even for future hardware.

Why? It is simple. CAD-like companies want to add a few new OpenGL features to their renderer once in a while without rewriting it all from scratch. Imagine they freeze OpenGL 2.x and offer new features only in OpenGL 3.0. Then the new features will never appear in the large older applications.

Please also keep in mind that OpenGL is used for many non-rasterizing graphical tasks, like GPGPU (I am one of those users as well). OpenGL is the wrong API for this, but don't blame OpenGL for that. We have CUDA now, and maybe OpenCL is coming. Ray-traced rendering will probably replace all of OpenGL/DX*. Can you imagine OpenGL twisted into a ray tracing API?


/Marek

LogicalError
08-11-2008, 11:42 AM
GL3 is a better match for current hardware

Oh really? Which part of the gazillion different extensions?


GL3 has much of DX10 level functionality.

And DX10 had full DX10 level functionality since.. well.. when it was released several years back!


Personally I could care less what the API looks like as long as it is fast, it supports the hardware features, and all vendors actually provide drivers.

wait.. are you talking about DX10 now?
because that certainly doesn't sound like opengl!

Edit: yeah. sorry, i'm pissed off, please don't take it personally.

bobvodka
08-11-2008, 11:43 AM
Personally I could care less what the API looks like as long as it is fast, it supports the hardware features, and all vendors actually provide drivers.

Except you are already dropping speed as the hardware drifts away from the API (the D3D10 model, what GL3.0 looked like before this... thing... is closer) and reliable drivers are harder to write because the new features have to interact with the old.

The fact of the matter is right now Intel's GL drivers suck and with more complication I don't expect this to improve. Even AMD's drivers have problems and a lack of features, which will delay their GL3.0 release as well and god knows how stable it will be.

As an OpenGL 2.2 it would have been a nice release; as a GL3.0 it's just another GL2.0-sized blunder and compromise.

Rob Barris
08-11-2008, 11:43 AM
One feature of GL 3.0 is a mechanism for requesting a context where the deprecations are enforced - this was put in to give developers a way to run their apps in this mode early on.

There is a lot of alignment between the feature set marked deprecated in GL 3.0 and the set of features that Long's Peak was slated to remove.

bobvodka
08-11-2008, 11:46 AM
Why? It is simple. CAD like companies want to add few new OpenGL features to their renderer once a while without rewriting it all from scratch. Imagine they freeze OpenGL 2.x and offer new features only in OpenGL 3.0. Then the new features will never appear the large older applications.

And as I recall there were ALWAYS plans to allow a crossover of GL2.x and Longs Peak features via the same context for just this sort of thing, so companies could migrate. I guess the plans weren't good enough.

MZ
08-11-2008, 11:57 AM
One feature of GL 3.0 is a mechanism for requesting a context where the deprecations are enforced - this was put in to give developers a way to run their apps in this mode early on.
What are the supposed short-term/long-term benefits of running a GL app in the deprecated mode?

ZbuffeR
08-11-2008, 12:00 PM
There is a lot of alignment between the feature set marked deprecated in GL 3.0 and the set of features that Long's Peak was slated to remove.

Great, where can I download a GL 3.0 pdf spec without all the deprecated features now ?
Because currently it means "read the spec" - "skip to the end to check if this or that feature is deprecated" - "loop", and it is very messy to have to read totally useless stuff.

Korval
08-11-2008, 12:10 PM
As work is already underway for the next release, this is exactly the right time to bring up functionality that you want either as an OpenGL 3.0 extension or for the next core release.

OK, if you want. I mean, I don't care about OpenGL anymore, and I'll be abandoning OpenGL in the future, but if you insist:

GL "3.0" is not Longs Peak. It doesn't align with all (or even most) of the features of Longs Peak, even if you ignore the lack of the new object model. Here's a "short list" of features that Longs Peak promised that GL "3.0" doesn't deliver:

- Getting rid of GL_FRAMEBUFFER_UNSUPPORTED. That is, providing a mechanism that makes this impossible, one that allows a user to know beforehand whether a particular combination of images and state is allowed by the implementation. Without this, FBO still has the same problems it always did. (See the sketch below.)

- Instancing programs. GL "3.0" program objects are still bound directly to their data. It is often the case that you want to use the same code, but with completely different sets of data. This requires constantly respecifying uniforms and attributes every time you want to use it. This is not conducive to performance. Longs Peak had "Program Environment Objects" which stored the state for a program. This also had the benefit of preventing stupid implementations (I'm looking at you, nVidia) from recompiling your program just because you changed a uniform.

- The ability to "mix&match" programs of different types. That is, the ability to use a vertex program that you did a full compile/link on and a fragment program that you did a full compile/link on that were not explicitly linked together. GL "3.0" still uses the 2.x model.

- Image Formats. These were supposed to be the way that you could ensure that images of a specific format could be created and used as you wanted. If you wanted to create a floating-point texture that required bilinear filtering, you needed an image format first. And if the image format creation failed, then you know you couldn't do it because the implementation didn't support it. GL "3.0" doesn't have this.

- Decoupling of Image from TexParameter. Longs Peak had "Texture Filter" objects that represented TexParameter stuff. Much like the program instance Program Environment object, this allowed a decoupling of data (Image, Program) from its use (TexParameter, uniforms and attributes).

- Forced "Hints". Buffer objects in particular were supposed to get hints that weren't actually hints; they were forced. Attempting to do certain operations with buffer objects that were set up incorrectly would produce non-functional code. GL "3.0" still has you able to map from any kind of buffer object and other nonsense.

- The ability to use it on 2.x hardware (Radeon 9700+/GeForceFX+). That was an explicit feature of Longs Peak: it could all be used on older DX9-compatible hardware. GL "3.0" requires DX10-class hardware, even though it doesn't even provide access to the main DX10 features (geometry shaders, uniform buffers, etc).

These were not minor features of LP; they were vital. They characterized what LP was: a way to do graphics while taking out the guesswork and needless weirdness. No "deprecated context" nonsense will bring these features back. And they weren't bound to the new object model; they were features that were introduced because API cleanup was happening.
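To make the uniform churn concrete, a minimal GL 2.x sketch (progA, progB and u_mvp are hypothetical names for two already-linked program objects that share one matrix; entry points assumed loaded):

void set_shared_matrix(GLuint progA, GLuint progB, const GLfloat mvp[16])
{
    /* GL 2.x: the same value must be pushed into each program object separately */
    glUseProgram(progA);
    glUniformMatrix4fv(glGetUniformLocation(progA, "u_mvp"), 1, GL_FALSE, mvp);

    glUseProgram(progB);
    glUniformMatrix4fv(glGetUniformLocation(progB, "u_mvp"), 1, GL_FALSE, mvp);

    /* a Longs Peak style program environment object would have let both
       programs reference one shared block of state instead */
}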

HAL-10K
08-11-2008, 12:38 PM
Why? It is simple. CAD-like companies want to add a few new OpenGL features to their renderer once in a while without rewriting it all from scratch. Imagine they freeze OpenGL 2.x and offer new features only in OpenGL 3.0. Then the new features will never appear in the large older applications.
That some decade-old CAD application won't be able to use state-of-the-art tessellation programs or whatever without rewriting its API interface at some point is no rational reason to keep backwards compatibility on an API that is now going to be SEVEN years behind its competitor in a very fast-paced market.

This is reason enough for us to abandon the 7% of our customers on the MacOS platform.

Brolingstanz
08-11-2008, 01:02 PM
I think some of us need to read the new specifications again. I think there's more to this deprecation model than first meets the eye ;-)

On point, I don't see what's preventing a new object model - or new anything else for that matter - from being introduced and deprecating the relevant parts as needed. If the driver only has to support the context version requested, it's pretty much all gravy.

tsuraan
08-11-2008, 01:08 PM
On point, I don't see what's preventing a new object model - or new anything else for that matter - from being introduced and deprecating the relevant parts as needed. If the driver only has to support the context version requested, it's pretty much all gravy.

I think the issue is that if that didn't happen in the past year of work, why would anybody think it's ever going to? Can NVidia (or ATi) introduce a new object model as an extension? Can they meet Korval's wishlist independently, without real core changes? I don't think anybody's saying that OpenGL 3.0 is lacking the ability to do basic graphics operations, but I don't see any reason to expect progress in the future, after this debacle.

FeepingCreature
08-11-2008, 01:30 PM
Right, so .. I'm kind of a hobbyist, but I was looking forward to playing around with a clean, understandable core API in 3.0.

Since this has obviously not come to pass, it might be worth an attempt to create an alternative cross-platform graphics API optimized for games, sort of an "everything you wanted from GL3".

Such an API would have to wrap OpenGL internally, at least at first, and would thus probably be slower, at least at first, but when Larrabee comes around, it should be possible to implement it in a way that is competitive with OpenGL on the same platform. (Glee! Direct multicore hardware access!)

And after all, it certainly won't be nearly as daunting a task as implementing OpenGL 3.0!

If you want to talk about it, I've put up a wiki page on my home server here (http://demented.no-ip.org/dw/doku.php?id=gl3wishes), though it might be more appropriate to use a public wiki hosting service (the above server is running on a 50K/s upload DSL connection :p It should be sufficient for now though).

elFarto
08-11-2008, 02:22 PM
I've just been pointed to this (http://www.opengl.org/registry/specs/EXT/direct_state_access.txt) extension.

Regards
elFarto

Korval
08-11-2008, 02:33 PM
I've just been pointed to this extension.

When that extension is written against "3.0" instead of 2.1, and that extension is a core feature of "3.0", we can talk. Until then, it's just nVidia's API again.

Rob Barris
08-11-2008, 02:53 PM
I think feedback on the direct state access extension would be very valuable to the working group. Not talking about it "until it's core" creates a chicken and egg situation. If a lot of developers weigh in on the particular strengths or weaknesses of the ext, then that will provide a useful signal to the working group.

bobvodka
08-11-2008, 02:59 PM
You don't get it, do you; we gave you feedback on the idea and then, without bothering to even let us know there was going to be a change, you dump this.. thing.. on us.

Now you expect us to turn around and say 'of course we'll let you know, we don't care that we've basically been ignored in the past and probably will be in the future...'.

We gave feedback and this is how we were treated, sorry, no dice; you can't piss off your community and then ask for help, it doesn't work that way.

ector
08-11-2008, 03:10 PM
I think feedback on the direct state access extension would be very valuable to the working group. Not talking about it "until it's core" creates a chicken and egg situation. If a lot of developers weigh in on the particular strengths or weaknesses of the ext, then that will provide a useful signal to the working group.



Rob, please ignore bobvodka, he's a bit upset.

I am also disappointed that we didn't get what was promised, but I can also see why.

I think the idea of the direct state access extension makes a lot of sense. This should have happened in 1998, at the latest! But what's done is done.

Now, one very important thing with this extension, as with the "legacy-free" mode, is a #define that you can set in front of the OpenGL headers that makes old-style code illegal by simply not declaring the functions. A few clever #ifdefs should take care of that, roughly like the sketch below.
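A rough sketch, assuming a hypothetical GL_LEGACY_FREE switch (the real macro name would be whatever the headers choose):

/* header side: legacy entry points only declared when the switch is absent */
#ifndef GL_LEGACY_FREE
GLAPI void APIENTRY glBegin(GLenum mode);
GLAPI void APIENTRY glVertex3f(GLfloat x, GLfloat y, GLfloat z);
GLAPI void APIENTRY glEnd(void);
#endif

/* application side: opt in before including the header, and any leftover
   glBegin/glEnd call then fails to compile */
#define GL_LEGACY_FREE
#include <GL/gl.h>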

My personal top priorities for OpenGL, which I am very disappointed that they are not being fixed, are:

* Bring back the constant registers from Cg/HLSL. I want to be able to set global matrices and share them between shaders in an easy way without having to compile the shaders together.

* Make it possible to mix and match vertex and fragment shaders, again, without having to compile them together.

My top hobby project, Dolphin, the Gamecube emulator, cannot use GLSL because of these issues; instead it has to fall back on Cg as a compiler and ARB_vertex_program/fragment_program for uploading shaders. It generates vertex and pixel shaders on the fly separately and mixes and matches them wildly. And there's a LOT of state that could be shared in constant registers between the generated shaders.

Having to upload all this state per shader, as current GLSL requires, is slow and unwieldy, and as soon as any of it changes it has to be re-uploaded to ALL shaders used subsequently, separately. Also, to use GLSL we'd have to keep a cache of pairs of vertex+fragment shaders compiled together, further complicating things.

Oh, and I'd love to get rid of nonsense like having the origin for various things in the bottom left corner instead of top left corner.
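For reference, this is roughly what the ARB program path gives us today and what GLSL lacks; a minimal sketch (the value and function name are illustrative, and entry points are assumed loaded by your extension mechanism):

void set_shared_light_dir(void)
{
    /* "program environment" parameters belong to the target, not to any one
       program object, so every vertex program bound afterwards sees them */
    GLfloat lightDir[4] = { 0.0f, 1.0f, 0.0f, 0.0f };
    glProgramEnvParameter4fvARB(GL_VERTEX_PROGRAM_ARB, 0, lightDir);
    /* each generated program just declares "PARAM l = program.env[0];"
       and nothing has to be re-uploaded per program when the value changes */
}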

Rob Barris
08-11-2008, 03:21 PM
Ector those classes of issues you raised are very familiar to us (esp having to maintain a cache of linked shader pairs, and the issues surrounding efficient transmission/broadcast of uniform values to one or more programs).

At the BOF we will be sharing more info about what is in the queue for upcoming extensions and core revisions. This is not to say that there is no other channel for communication, just that we're sticking with that schedule for that topic.

bobvodka
08-11-2008, 03:22 PM
Rob, please ignore bobvodka, he's a bit upset.


Yes, well, having lived through the OpenGL2.0 farce and now the epic fail which is GL3.0 (and the waste which was the last two years) I feel I've a right to be a little pissed, more so when the people who have pulled this and have previously apparently dismissed our feedback ask for more...

Still, I don't care... I'm here until my annoyance goes away and then I'm gone until something decent happens; frankly D3D10/11 looks hella nice and I've never cared about OSX and Linux anyway...

Korval
08-11-2008, 03:50 PM
Ector those classes of issues you raised are very familiar to us

And yet, after 2 years, nothing has been done about it! You didn't need a new object model or new API to do those things.

What part of YOU FAILED! don't you get?

n00body
08-11-2008, 10:22 PM
Firstly, not 100% happy that certain promised features were cut, but I can understand the reasoning for it. Better to have it out when it's "done".

Secondly, I'm happy that much of the cruft has been stripped out, and many features that needed to be core functionality have been made so.

Thirdly, I'm curious, will there be code examples soon after BoF? It's impossible to make an informed judgment without working code in front of me.

Thanks.

EDIT:
In particular, I'd like to see actual code usage of the new direct_access extension. Since it looks like the most dramatic change to arrive with OGL 3.0, I'd like to see how to use it.

EDIT2:
My only other comment, to any Khronos members is this: Dropping an iron curtain on all outgoing info for two years was both painful and unnecessary. A mistake that shouldn't be repeated.

ector
08-12-2008, 01:29 AM
EDIT2:
My only other comment, to any Khronos members is this: Dropping an iron curtain on all outgoing info for two years was both painful and unnecessary. A mistake that shouldn't be repeated.

Yup, it's pretty sad to see that Microsoft is very open, shares information a year in advance and looks for feedback, and also DELIVERS ON SCHEDULE, while ARB/Khronos, well, you know what I mean...

tanzanite
08-12-2008, 03:08 AM
Firstly, not 100% happy that certain promised features were cut, but I can understand the reasoning for it. Better to have it out when it's "done".
I think we are done with that dance.

Lumooja
08-12-2008, 06:49 AM
OpenGL 3 will not have support for the GL_DOUBLE token. This means it will not be possible to send double precision vertex data to OpenGL.
I wonder how many people know that double is almost 3 times faster than float (2.875 to be exact).

Try it yourself:
// doublespeed.cpp

#include <stdio.h>
#include <time.h>

inline void floatrun(void)
{
    float m = 0.0f;
    float n = 0.0f;
    long t1 = 0;
    long t2 = 0;
    long i = 0;
    printf("Counting 1.4 billion floats...\n");
    t1 = clock();
    while (m < 1000.0f)
    {
        n = 0.0f;
        while (n < 1000.0f)
        {
            n += 0.01f;
            i++;
        }
        m += 0.01f;
    }
    t2 = clock();
    printf("Done. i=%ld, n=%f, time=%fs.\n", i, n,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
}

inline void doublerun(void)
{
    double m = 0.0;
    double n = 0.0;
    long t1 = 0;
    long t2 = 0;
    long i = 0;
    printf("Counting 1.4 billion doubles...\n");
    t1 = clock();
    while (m < 1000.0)
    {
        n = 0.0;
        while (n < 1000.0)
        {
            n += 0.01;
            i++;
        }
        m += 0.01;
    }
    t2 = clock();
    printf("Done. i=%ld, n=%f, time=%fs.\n", i, n,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
}

int main(int argc, char** argv)
{
    floatrun();
    doublerun();
    floatrun();
    doublerun();
    return 0;
}

bobvodka
08-12-2008, 06:58 AM
That benchmark looks like junk, but aside from that, current GPU hardware has very little double support (IIRC the GTX series can do doubles in hardware at around 1/4 the speed of a floating-point operation, and using more resources) and previous generations had none at all.

In short: what you wrote was utter rubbish.

Jan
08-12-2008, 07:10 AM
You are doing the printf AFTER you started your timer. That is complete crap.

Lumooja
08-12-2008, 07:12 AM
ATI is releasing next month 2 new cards which use double precision:

http://ati.amd.com/technology/streamcomputing/product_firestream_9250.html

Maybe there I would get similar results like on my CPU double vs float speed test. And it's not only about speed, the horrible accuracy of float is a big problem with large distance objects (like shadows of directional lights) in games and applications.

Lumooja
08-12-2008, 07:17 AM
You are doing the printf AFTER you started your timer. That is complete crap.
I moved the printf before the timer start, but it didn't make any difference since it was done the same way for both functions.

But you're right, now the timing results have more reference value.

bobvodka
08-12-2008, 07:26 AM
ATI is releasing next month 2 new cards which use double precision:

http://ati.amd.com/technology/streamcomputing/product_firestream_9250.html

Maybe there I would get similar results like on my CPU double vs float speed test.


No, because double is also just "crippled" in AMD's hardware.



And it's not only about speed, the horrible accuracy of float is a big problem with large distance objects (like shadows of directional lights) in games and applications.

In games it's all about the speed vs accuracy tradeoff; more speed at the cost of accuracy is acceptable and losing 1/4 of your ALU power for a gain in accuracy is NOT good for games.

bobvodka
08-12-2008, 07:28 AM
You are doing the printf AFTER you started your timer. That is complete crap.
I moved the printf before the timer start, but it didn't make any difference since it was done the same way for both functions.

But you're right, now the timing results have more reference value.

Your test is still junk, certainly without knowing what compiler and settings you used.

For example, with SSE you can process 4 floats vs. 2 doubles per instruction (and those doubles are 64-bit); without SSE you take a possible speed hit due to the 32-bit -> 80-bit conversion when things get punted to the x87 FPU and then back again.
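A minimal sketch of what that width difference looks like with SSE2 intrinsics (illustrative only, not a benchmark):

#include <emmintrin.h>   /* SSE2 */

__m128 add_four_floats(__m128 a, __m128 b)
{
    return _mm_add_ps(a, b);    /* 4 single-precision adds per instruction */
}

__m128d add_two_doubles(__m128d a, __m128d b)
{
    return _mm_add_pd(a, b);    /* only 2 double-precision adds per instruction */
}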

Lumooja
08-12-2008, 07:46 AM
Your test is still junk, certainly without knowing what compiler and settings you used.

For example, with SSE you can process 4 floats vs. 2 doubles per instruction (and those doubles are 64-bit); without SSE you take a possible speed hit due to the 32-bit -> 80-bit conversion when things get punted to the x87 FPU and then back again.
I tested first with default Visual C++ 2008 Pro compiler settings for 32-bit Console Application.

Now I changed the compiler setting to SSE, and also SSE2, and the results are almost the same. Only in SSE2 mode the float function gets slightly slower (42sec in SSE2 vs 39sec in NoSSE and SSE).

Zengar
08-12-2008, 08:14 AM
Lumooja, once again: your test has nothing to do with GPU capabilities and is strange anyway, as you use bad branching, thereby killing most CPU optimisation options. If you really want to test FP performance, try measuring matrix multiplications: that is a good test. Please learn something about CPU optimizations and refrain from continuing this pointless discussion. If you want to discuss this topic, please feel free to post your so-called "benchmarks" in the suggestions forum.

Jan
08-12-2008, 09:12 AM
What I totally don't get is this:

OpenGL "3.0" makes the whole spec A LOT more complicated, because it adds lots of things to the core which need to interact with the legacy stuff.

However, it deprecates everything (and more) that we wanted to go away.

If I understand that correctly, that means 3.1 will be 3.0 without all the deprecated stuff (plus maybe some additional features).


THAT would mean that a 3.1 driver would be far simpler, because it does not need to include the legacy stuff.

Sooo, if I were a driver writer, I would simply wait for 3.1 and only ship that. For PR that would be much better (3.1 > 3.0) AND it would be easier to maintain AND all developers would be happy.

Or did I understand that deprecation thingy somehow wrong?

Now the only missing piece is a 3.1 spec, so ARB, pleeease, could you simply remove the deprecated stuff from the spec and call it 3.1 (or 3.0.9)?

Jan.

Rob Barris
08-12-2008, 10:11 AM
So a 3.0 "full" context has everything described in the spec. Call that X.

A 3.0 "forward compatible" context has everything in 3.0 minus Appendix E features. Call that Y; and Y<X.

A 3.1 "full" context would have Y plus new stuff. Call that Z. Z > Y.

Vendors can already release drivers for X and Y at will (it should be one driver with two modes) - the currently published spec covers both paths.

However no one can produce a driver for Z until the extra stuff that goes into Z is decided. There are features that are currently written up as GL3.0 extensions that could be eligible, there are some other things under consideration in the working group.

We know the spec is big - but let's look at it from the POV of each audience:

developers: if you want to get on the clean path looking forward to 3.1, just use the 3.0 spec and observe Appendix E. (When you get a 3.0 driver, engage the forward compatible context mode to help find paths in your app that are still using deprecated features).

driver writers (IHV's): have all had their say in the working group - and the consensus established was to stay fully upward compatible for 3.0 but to advertise the agreed-upon set of deprecated functionality - then to move ahead with subsequent releases that are leaner.

So I am just curious, if you advocate for a document that basically has some sections of text deleted, how would you assess the benefit for each of those audiences ? (Just to be clear, I don't see an argument against doing it as an exercise or a developer guide, however it would not be a doc that you could call 3.1, see "Z" above).
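For reference, requesting the "Y" context on Windows looks roughly like this once a driver exposing WGL_ARB_create_context is installed (a minimal sketch; hdc is assumed to already have a pixel format set, with a temporary legacy context current so the entry point can be fetched):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

HGLRC create_forward_compatible_context(HDC hdc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL;    /* no 3.0-capable driver installed */

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0
    };
    return wglCreateContextAttribsARB(hdc, NULL, attribs);
}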

Korval
08-12-2008, 10:35 AM
So a 3.0 "full" context has everything described in the spec. Call that X.

A 3.0 "forward compatible" context has everything in 3.0 minus Appendix E features. Call that Y; and Y<X.

A 3.1 "full" context would have Y plus new stuff.

We keep running around in this circle, so I'll make my meaning plain.

You've lied to us too many times to believe a word you've said. There is no promise you can make that a reasonable person would trust.

Zengar
08-12-2008, 10:50 AM
We keep running around in this circle, so I'll make my meaning plain.

You've lied to us too many times to believe a word you've said. There is no promise you can make that a reasonable person would trust.

So we'll just have to wait and see how these promises are kept... Anyway, OpenGL will be stagnant for the next few years, because after the 3.0 fiasco lots of developers will be turning to DX. I hope Apple could license and implement D3D9/D3D10 for MacOS...

Rob Barris
08-12-2008, 10:56 AM
I'm sorry you feel that way Korval - not all project management decisions are easy ones. The one made to table LP was extremely difficult.

spooky_paul
08-12-2008, 11:24 AM
I am, like most developers around here, quite disappointed regarding 3.0, but I wonder if attacking Rob will help anything... I don't believe that he is the evil mastermind behind the ARB.

Could we get the February meeting transcript? (I would like to get an honest answer if this is declined, not some marketing bs.)

Jan
08-12-2008, 11:27 AM
The problem that I have with the current situation is that it will take months to develop proper GL3 drivers, whereas it should be much easier to develop 3.1 drivers (or, as you said, 3.0 drivers with all the deprecated stuff removed).

This seems to be an unnecessary blockade. nVidia might implement this, but I do not think that ATI will; they will probably only implement the restricted spec (which is all that most developers want anyway).

So, to speed up the process, can't we just get 3.1 as 3.0 without the deprecated stuff, and put any new features into 3.2? That would at least speed things up. nVidia can implement 3.0, at least for their Quadro cards, to please those evil CAD developers, and everybody else can go on with their miserable lives, but with a much cleaner API.

I mean, after having disappointed/lost most of the community, it would be a good step not to keep the remaining developers waiting much longer for a clean-up, even if it is only the fraction of a clean-up that is left from the original intent.

I will definitely switch to D3D for my personal projects, but there remain other projects where I am forced to use OpenGL (Linux). So I still have an interest in a quick improvement to the current situation.

Jan.

Korval
08-12-2008, 11:39 AM
So, to speed up the process, can't we just get 3.1 as 3.0 without the deprecated stuff, and put any new features into 3.2?

My main concern with "3.1" is the main problem with "3.0": the deprecation of bad APIs is being intermingled with rendering features.

If you want to use the good API with D3D 9-class hardware, you're out of luck. You can only use a 2.1 context, which does not provide for "hard-deprecation". Which means that it will not have any of the potential optimizations that IHVs can make when "hard-deprecation" is active.

They should have decoupled these. All rendering features should have been core features, and all API deprecation should have been based on extensions. So you have the "2.1 deprecated" context, which removes cruft from 2.1. Then you have the "3.0 deprecated" context, which removes cruft from 3.0. And so on.

Eventually, when you are ready to remove the cruft permanently, you just say that version "X.Y" no longer provides a "full" context.

Mars_999
08-12-2008, 04:43 PM
From tomshardware
http://www.tomshardware.com/news/nvidia-physx-physics,6115.html

If you own one of the 80 million CUDA graphics cards!

Like I said, tens of millions of DX10-class hardware cards are in the wild, and that is just Nvidia!!! That is not a tiny market.

Brianj
08-12-2008, 05:28 PM
It's not surprising they're going to have financial troubles. This looks a lot like the period when Sony was just about to release the PS3 (no one likes arrogance). They let the success of the G80 series get to them - they thought they could get away with anything. Releasing the overpriced GT200 cards was not a good idea, and now that ATI has the price/performance lead nVidia is going to feel the pain for a while.

TroutButter
08-12-2008, 05:37 PM
EDIT:
In particular, I'd like to see actual code usage of the new direct_access extension. Since it looks like the most dramatic change to arrive with OGL 3.0, I'd like to see how to use it.


If you would read the spec, you'd see there are examples which should be enough to know what's going on here.

This is the only good thing (great actually) that's coming out of OpenGL right now. As John Carmack said, this should have happened a long time ago.

Korval
08-12-2008, 05:46 PM
This is the only good thing (great actually) that's coming out of OpenGL right now.

Except that it's written against GL 2.1, so it doesn't work with the new stuff in "3.0" (mainly VAOs). Also, it's just an EXT extension, so expect ATi in their eternal laziness to ignore it.

TroutButter
08-12-2008, 07:25 PM
That's true, but I feel more people will care about what this extension provides than what 3.0 provides right now. This extension can/will make a bigger difference than the 3.0 stuff.

It's unfortunate that ATI has to be such douche bags and ignore anything unless it's ARB. Who ever is running that circus they call ATI must be mentally retarded or something. "Oh no, let's not implement anything that will help us and our customers. Instead let's just f**k ourselves even more!"

Korval
08-12-2008, 07:30 PM
This extension can/will make a bigger difference than the 3.0 stuff.

No, it won't. As I pointed out, because ATi won't support it, it won't get widespread use. And the main purpose behind these functions is to make IHVs jobs easier. Well, they still have to support the old way of setting this stuff, since it is still core (not even deprecated in "3.0"). So it does nothing for making drivers easier to write.

SirKnight
08-12-2008, 07:48 PM
What he means is on the hardware that supports the extension.

Actually the main purpose of this extension (I know this first-hand since I work with Mark Kilgard) is to eliminate the performance killer of doing lots of binds and gets. Having to always do a get, a bind, and a re-bind of the old value from the get is a big killer of performance. This kind of thing is needed especially for middleware libraries (like Cg) that need to make GL calls but cannot disrupt the app's state.

This has been one of the biggest performance problems in the GL Cg runtime. It has been discussed a lot in the Cg runtime group, and Mark realized that this extension could be made to eliminate this problem. People for a long time complained about how we do way too many gets and binds. This extension solves that quite well.

This extension is one of the best things to happen to GL in a long time and it would be a real shame if ATI ignored it. The cool thing is this extension helps everyone, not just Cg, even though Cg was the main driving force.

Zengar
08-12-2008, 07:57 PM
All hardware will support that extension, they just have to put it in the driver (and it is possibly already there, indirectly).

SirKnight
08-12-2008, 08:06 PM
See that's the thing. The driver ALREADY did "direct state access" internally. So implementation was quite "simple" compared to other extensions. I would find it surprising that ATI's driver does not already do this internally as well.

Maybe I should have said DRIVERS that support the extension rather than hardware. That's what I meant anyway. ;)

pudman
08-12-2008, 08:20 PM
Given that it's written against 2.1, and it's such a great thing, why not release it earlier?


Well, they still have to support the old way of setting this stuff, since it is still core (not even deprecated in "3.0").

My guess is that in order for things to be deprecated the replacement must first actually be in the core. So, if only they had released this earlier they could have incorporated it into the core in 3.0, deprecating the previous approach.

Or they could have pushed harder for the LP object model.

SirKnight
08-12-2008, 08:42 PM
Given that it's written against 2.1, and it's such a great thing, why not release it earlier?


Part of that is due to the time it takes from when something is implemented in the driver to when it makes it into a release branch and then out to the public. It usually takes months for something to go through the process even though it's "ready."

Also there was a long debate about getting this extension accepted in the first place. At the time, GL 3.0 was very different than it is now, so there was debate about its usefulness. But in the Cg runtime team, we saw it as critical for performance and waiting for GL 3.0 didn't seem like a good idea (turned out to be a valid assumption :)).

It is quite unfortunate that it took this long for something like this to come out. It's something GL needed YEARS ago. But that's just how it goes sometimes.

Korval
08-12-2008, 08:43 PM
My guess is that in order for things to be deprecated the replacement must first actually be in the core.

Yes, that was the point. Because they didn't get this extension into the core, it will be 3.2 at the very least before IHVs can make the all-important assumption that any binding to the context means that you intend to render with the object and not modify it.

Mark Kilgard
08-13-2008, 02:48 PM
EDIT:
In particular, I'd like to see actual code usage of the new direct_access extension. Since it looks like the most dramatic change to arrive with OGL 3.0, I'd like to see how to use it.

The EXT_direct_state_access extension is easy to use.

Before/after examples:

// before DSA
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, texobj);
glEnable(GL_TEXTURE_2D);
/// careful: active texture selector disturbed!!

// after DSA
glBindMultiTextureEXT(GL_TEXTURE2, GL_TEXTURE_2D, texobj);
glEnableIndexed(GL_TEXTURE_2D, 2);

// before DSA
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x,y,z);
glScalef(s,s,s);
/// careful: matrix mode selector disturbed!!

// after DSA
glMatrixLoadIdentityEXT(GL_MODELVIEW);
glMatrixTranslatefEXT(GL_MODELVIEW, x,y,z);
glMatrixScalefEXT(GL_MODELVIEW, s,s,s);

// before DSA
glBindBuffer(GL_ARRAY_BUFFER, bufobj);
glBufferSubData(GL_ARRAY_BUFFER, offset, size, data);
/// careful: array buffer selector disturbed!!

// after DSA
glNamedBufferSubDataEXT(bufobj, offset, size, data);

I hope you agree this is "nicer". There's no implicit assumption about how some state selector was left with the DSA versions.

In the examples above, it is easy to see what the selector was just set to, but in general, you may have to look back many lines to know how the selector was left. Or you may simply not know.

Anyone who has been "burned" (see my OpenGL pitfalls article) by some function you called making OpenGL calls that changed a selector and then found that your subsequent OpenGL calls when you return are updating the wrong texture/matrix/buffer whatever will really appreciate DSA.

Functions that manipulate selector-controlled OpenGL state can try to be defensive by save/change/restore'ing the selector, but this leads to slow code, particularly with today's multi-threaded OpenGL drivers where glGet* queries are relatively more expensive. Consider:

// before DSA
GLint savedTex2D;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &savedTex2D);
glBindTexture(GL_TEXTURE_2D, texobj_to_modify);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
if (texobj_to_modify != savedTex2D) glBindTexture(GL_TEXTURE_2D, savedTex2D);
// Safe from leaving texture binding changed, but slow

// after DSA
glTextureParameteriEXT(texobj_to_modify, GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);

Also DSA makes display lists much more reliable and easier for drivers to optimize. Consider the following:

// before DSA
glActiveTexture(GL_TEXTURE2);
glNewList(1, GL_COMPILE);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 45);
glEndList();
// now much later (active texture is likely different from when the display list was called), call it
glCallList(1); // what texture unit got enabled/bound??

// before DSA
glNewList(2, GL_COMPILE);
glActiveTexture(GL_TEXTURE2);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 45);
glEndList();
// now much later call it
glCallList(2);
// Careful: active texture unit just changed after glCallList!

// after DSA
glNewList(3, GL_COMPILE);
glBindMultiTextureEXT(GL_TEXTURE3, GL_TEXTURE_2D, 45);
glEnableIndexed(GL_TEXTURE_2D, 3);
glEndList();
// much much later call it
glCallList(3);
// Nice: reliably enables 2D texturing on unit 3 and binds texobj 45; active texture selector not disturbed

When display list 1 is compiled, the intent from looking at the code is that 2D texture unit 2 is going to be enabled and bound to 45. Unfortunately, it is what glActiveTexture is set to much later when glCallList happens that determines the texture unit affected!

Display list 2 fixes this by compiling the glActiveTexture command into the display list, but then after glCallList(2), the active texture unit has changed to GL_TEXTURE2. Surprise!

Display list 3 using DSA has no surprises. The display list is compiled to enable and bind texture object 45 to 2D texture unit 3 and doesn't disturb or depend on the active texture unit. This is not just safer (meaning less opportunity for surprise) but makes it easier for the driver to optimize the display list, since it is explicit that texture unit 3 is being modified.

Today the DSA routines have an EXT suffix because they are an extension, but that goes away if/when the functionality goes into core OpenGL.

I hope this helps.

- Mark

bobvodka
08-13-2008, 03:00 PM
Erm, I find it vaguely unsettling that your first example includes "glEnable(GL_TEXTURE_2D);", something which those of us who have been using shaders haven't done for some time now and are treating it as a FFP relic...

Mars_999
08-13-2008, 03:00 PM
Thank you Mark, that was a nice rundown, and I have to say I like this a lot, much better than what we have now, as I have been burnt by glPushMatrix/glPopMatrix with GL_MODELVIEW and trying to keep track of how far you have pushed onto the GL_MODELVIEW stack. That is a very, very annoying bug to track down. So if this is the replacement for the object model, I am all for it; it still looks similar enough to current GL syntax and names to allow a novice GL coder to pick it up and run with it in a timely manner.

Chris Lux
08-13-2008, 03:00 PM
glBindMultiTextureEXT(GL_TEXTURE3, GL_TEXTURE_2D, 45);
glEnableIndexed(GL_TEXTURE_2D, 3);
I agree this is much nicer, and it had to happen much earlier, but your example shows, in my opinion, some missing consistency.

why not something like this:

glBindTextureIndexed(GL_TEXTURE_2D, 3, _my_tex_obj);

Chris Lux
08-13-2008, 03:04 PM
Thank you Mark, that was a nice rundown, and I have to say I like this a lot, much better than what we have now, as I have been burnt by glPushMatrix/glPopMatrix with GL_MODELVIEW and trying to keep track of how far you have pushed onto the GL_MODELVIEW stack. That is a very, very annoying bug to track down. So if this is the replacement for the object model, I am all for it; it still looks similar enough to current GL syntax and names to allow a novice GL coder to pick it up and run with it in a timely manner.
With the 'streamlined' GL 3.x all the stacks are gone and this is no issue anymore, as you can do it very easily without worrying about such things as matrix stack depth.

n00body
08-13-2008, 03:05 PM
Thanks for the examples Mark! You rock! :)

Now I just need to wait for GLEW to update.

EDIT:
Yes, I know I have to wait for drivers, etc first. But I can't "really" use the extension until GLEW supports it. :p

Korval
08-13-2008, 03:09 PM
why not something like this:

glBindTextureIndexed(GL_TEXTURE_2D, 3, _my_tex_obj);


Because he's showing how it works for the old fixed-function pipeline.

The extension is very nice, but its critical failing is that it is written against GL 2.1, not 3.0 and certainly not 3.0-deprecated mode (not to say that this is Mark's fault). So not only does it not cover GL 3.0 functionality (vertex array objects, etc), it covers a lot of functionality that is deprecated.

Because of both of those, I don't expect to see widespread implementation (nVidia and ATi). The most we can hope for is that GL "3.1" will adopt the basic principle, but applying it to new functions and not applying it to deprecated ones. Oh, and it would also deprecate the old way of setting these values.

Chris Lux
08-13-2008, 03:13 PM
why not something like this:

glBindTextureIndexed(GL_TEXTURE_2D, 3, _my_tex_obj);


Because he's showing how it works for the old fixed-function pipeline.
yeah, but even with shaders we have to bind textures to a certain texture unit (urgh, cg had the better idea here).

Korval
08-13-2008, 03:15 PM
yeah, but even with shaders we have to bind textures to a certain texture unit

That's something that needs to be fixed. Textures should be bound to shaders, not texture units.

Chris Lux
08-13-2008, 03:16 PM
That's something that needs to be fixed. Textures should be bound to shaders, not texture units.
Add it to the list... but hey, that would break old code! So how high is the chance of fixing a bad decision here?

bobvodka
08-13-2008, 03:17 PM
yeah, but even with shaders we have to bind textures to a certain texture unit

That's something that needs to be fixed. Textures should be bound to shaders, not texture units.

I seem to recall that being part of Longs Peak as well you know...

HenriH
08-13-2008, 05:46 PM
Are we expected to see EXT_direct_state_access or something similar promoted to the OpenGL core in future versions, or is there any chance for the originally intended Object Model-like state system?

Korval
08-13-2008, 05:55 PM
We'll find out right after the BoF that's about to start in 5 minutes.

zed
08-13-2008, 06:55 PM
OK, BoF or beach volleyball:
http://img03.beijing2008.cn/20080809/Img214520718.jpg
Bugger it, it's only once every four years.

dorbie
08-13-2008, 07:47 PM
These examples highlight the problem: even after the public furore, the supplied code references stuff that's deprecated.

Please get into the mindset that deprecated stuff is actually deprecated; I get the impression you don't care what that means...


Also DSA makes display lists much more reliable and easier for drivers to optimize.

WTF? Come on!

There's a huge amount of irony in giving a display list example as a use of the most significant GL3 extension. Why even write against that stuff? Surely the additional effort is not insignificant, and for what? To make display lists even more useful right before you throw them away, which you apparently have no intention of doing...

Take deprecated seriously and start writing and thinking about ALL NEW FEATURES against the forward API.

Same with the matrix stuff, although I personally think at least generic matrix stacks and operations are genuinely useful and could have been preserved in some form in ES 2 (Phil Atkin's cups of coffee aside). This extension references matrix mode targets; it's like there's schizophrenia w.r.t. legacy deprecation. What the hell is ANY new extension doing encouraging and supporting the use of ANY deprecated feature, let alone a significant extension?

Shake off the cobwebs and kill everything deprecated, ESPECIALLY in the new stuff. I know it can be painful to abandon something crafted with such care, but if the stated intent of deprecation is honest then some of this was always stillborn.

If I can live without anything resembling a matrix stack then you can live without enhancing deprecated features.

There will be cake when you finally deprecate ALL the old stuff.

Korval
08-13-2008, 07:53 PM
These examples highlight the problem: even after the public furore, the supplied code references stuff that's deprecated.

I'm sure Mark didn't have much access to GL "3.0", or else it would have been written against GL "3.0".

Cyril
08-13-2008, 07:59 PM
I am at the BoF right now, and NVIDIA just made public their first OpenGL 3.0 compatible driver. It can be downloaded here :
http://developer.nvidia.com/object/opengl_3_driver.html

Have fun :)

Mars_999
08-13-2008, 08:11 PM
Wow, ARB_geometry_shader4 is listed as GL3.0? Does this mean GS are in 3.0?

Michael Gold
08-13-2008, 08:16 PM
No, it means it's an extension to 3.0 (i.e. requires 3.0-capable hardware).

Korval
08-13-2008, 08:17 PM
i.e. requires 3.0-capable hardware

And requires a 3.0 context. Don't forget that.


I am at the BoF right now

So, give us a report. Was there a mea culpa (not that I'm holding my breath)? Or an acknowledgment that they did wrong and need to redress their inadequacies (again, not holding my breath)?

pudman
08-13-2008, 08:21 PM
Interesting that the driver does not expose GL_EXT_direct_state_access (according to the web page).

Korval
08-13-2008, 08:28 PM
Interesting that the driver does not expose GL_EXT_direct_state_access (according to the web page).


1: Beta.

2: DSA isn't a real extension. Not to "3.0". So it isn't something that one should add to a GL "3.0" implementation.

Also, for those who know their ARB screwup history, I look on DSA as like the old "EXT_render_texture" spec. That was an experimental spec (by nVidia) for doing render-to-texture. At that time, the ARB was mired in their attempt to implement a version of a part of the 3DLabs GL 2.0 proposal having to do with render surfaces, frame buffers, and that. EXT_render_texture caused such an uproar (particularly on this forum) that the ARB was forced to proceed with that proposal. It was basically a way for nVidia to force the ARB to get something done.

The result, about a year later, was FBO.

Yeah, it worked, but not with much support. And not very well. But this is likely the same thing. It is something for the ARB to start with in thinking along this idea.

PaladinOfKaos
08-13-2008, 08:43 PM
*sigh* only Windows drivers, of course. I guess that's all they can do until a GLX extension is released. Not that I expect there would be Linux in beta even if the extension was around. I get to wait an extra month or two.

3B
08-13-2008, 10:24 PM
Now that we have some beta drivers to play with, any ETA on current .spec files? The ones at http://www.opengl.org/registry/ just add enums, no new functions as far as I could tell.

(or even full .h files, probably easier to extract function signatures from those than from the full specification pdf)

Dan Bartlett
08-14-2008, 04:04 PM
Now that we have some beta drivers to play with, any ETA on current .spec files? The ones at http://www.opengl.org/registry/ just add enums, no new functions as far as I could tell.

(or even full .h files, probably easier to extract function signatures from those than from the full specification pdf)

+1 to this, need the core entry points listed somewhere.

Another query is whether GL_ARB_framebuffer_object (extension #45 according to gl.spec) is going to be added to the registry, or whether this goes straight to core OpenGL and isn't going to be added as an extension (although it is currently in NVidia beta drivers as an extension).

knackered
08-14-2008, 04:36 PM
*sigh* only Windows drivers, of course. I guess that's all they can do until a GLX extension is released. Not that I expect there would be Linux in beta even if the extension was around. I get to wait an extra month or two.
You should consider yourself lucky you get linux drivers at all. Linux drivers should be at the very bottom of their list of priorities. As you're probably aware, very few people actually use linux. Therefore very few of their customers are linux users, therefore the money is not there for linux effort. Personally, I resent subsidising linux with my hard earned money.

Rob Barris
08-14-2008, 05:11 PM
Now that we have some beta drivers to play with, any ETA on current .spec files? The ones at http://www.opengl.org/registry/ just add enums, no new functions as far as I could tell.

(or even full .h files, probably easier to extract function signatures from those than from the full specification pdf)

+1 to this, need the core entry points listed somewhere.

Another query is whether GL_ARB_framebuffer_object (extension #45 according to gl.spec) is going to be added to the registry, or whether this is straight to core OpenGL, and isn't going to be added as extension (although is currently in NVidia beta drivers as an extension).

This is the plan. As you know Khronos (ARB) specs have a 30-day review/cooldown period. The vast majority of material described (GL 3.0 core spec, 3.0 extension pack, 2.x extension pack) all made it under the wire for SIGGRAPH because they were submitted for the 30-day review around Jul 11th - thus able to be released this week after the promoter vote passed.

ARB_framebuffer_object had a few additional issues that had to be cleaned up, and it has been submitted, but it was a week or two later. Thus, there is a small lag before it too will appear in the registry.

To go back to the original question - this functionality is both extension and core. It is core in 3.0, and it is an extension for 2.x.

H. Guijt
08-15-2008, 01:22 AM
You should consider yourself lucky you get linux drivers at all. Linux drivers should be at the very bottom of their list of priorities. As you're probably aware, very few people actually use linux. Therefore very few of their customers are linux users, therefore the money is not there for linux effort. Personally, I resent subsidising linux with my hard earned money.

You should consider yourself lucky you get OpenGL3 at all. OpenGL3 should be at the very bottom of their list of priorities. As you're probably aware, very few people actually use OpenGL. Therefore very few of their customers are OpenGL users, therefore the money is not there for OpenGL effort. Personally, I resent subsidising OpenGL with my hard earned money.

Looks familiar?

The reason to use OpenGL, for a great many people, is because it is cross-platform. If that were to disappear as well, why not simply stick with DirectX?

knackered
08-15-2008, 04:30 AM
I'd always assumed it's because Direct3D has no quad-buffered stereo and no swap groups/genlocking. The entire Quadro line of cards would be impossible if it weren't for OpenGL. As I've said before, if D3D had these features I wouldn't be here having this conversation now. I'd be using D3D with all of its tool chain.
There's also the control aspect - they need an alternative to MS which they have control over.
None of this has anything to do with Linux - that's just wishful thinking.
Having said that, because it seems to be impossible for these features to be implemented in Vista (and obviously OSX), Linux may become the only option for visualisation in the next few years. That's unless it's impossible in X by then, what with all this Compiz Fusion crap.

babis
08-15-2008, 04:44 AM
Quad-buffered stereo, swap groups/genlocking => VR => Linux-powered clusters (for the more... hairy VR apps). Linux support won't fail that easily for NVidia, at least for the Quadro cards.
And when NVidia plans for many-GPU clusters to enter the heavy-duty computation industries, they can get a fair amount of money (lotsa cards & support). So I think they won't ignore OpenGL support on Linux at all.

knackered
08-15-2008, 05:21 AM
clusters are not unique to linux y'know, babis!
all that is perfectly possible on MS Windows. I know, I do it.
so you were talking about linux....pray continue.

babis
08-15-2008, 05:48 AM
Perfectly possible,yes. But, what about performance (main reason to use clusters after all)?? :D
And if I'm correct, the percentage of Windows clusters in the VR/viz market is still pathetic. You're probably one of the few! ;)
Anyway, nobody can just say 'very few Linux OpenGL users', forgetting the whole viz industry / research / academic world. Support for Linux will probably come (ofc, as always, later than Windows).

Overmind
08-15-2008, 07:50 AM
Personally, I resent subsidising linux with my hard earned money.

And I resent subsidising windows with my hard earned money. Well, we all live in an imperfect world ;)

PaladinOfKaos
08-15-2008, 08:29 AM
I thought NVIDIA used a unified architecture, sharing state-management amongst all the drivers. If that's the case, then once the features are in the Win32 driver, the Linux driver should be a matter of adding entrypoints, and making any modifications to the kernel module that are necessary. Since most (all?) of the GL3 features were implemented by NV in extensions, I can't see why they would need to make any major kernel-mode changes. That leaves implementing the entrypoints, which should be trivial.

Besides, NVidia didn't have a very long delay when they released SM 4 stuff, IIRC. They're pretty good about linux support. I think we're waiting on the ARB to get the GLX extension finalized. Any chance someone from NV and/or the ARB could comment on that?

The GLX extension isn't just for Linux - NVIDIA also supports Solaris (which, from what I understand, is still used for all sorts of visualization stuff) and FreeBSD. As others have pointed out, these are the workstation platforms. Consumer-level desktops may have pathetic *nix ratios, but visualization applications and workstations are pretty well dominated by Unix and its derivatives. IIRC, Dreamworks uses Linux workstations.

PkK
08-15-2008, 09:17 AM
You should consider yourself lucky you get linux drivers at all. Linux drivers should be at the very bottom of their list of priorities. As you're probably aware, very few people actually use linux. Therefore very few of their customers are linux users, therefore the money is not there for linux effort. Personally, I resent subsidising linux with my hard earned money.

Free software developers will do fine with specs and write free drivers.

Philipp

PaladinOfKaos
08-15-2008, 09:25 AM
Free software developers will do fine with specs and write free drivers.

Philipp


Unfortunately, the DRI stack really isn't that great. Hopefully Gallium will improve that, but even new driver development (RadeonHD and nouveau) is still taking place on the old DRI.

knackered
08-15-2008, 05:04 PM
Perfectly possible,yes. But, what about performance (main reason to use clusters after all)?? :D
what do you mean? are you talking about low level (gl command stream) clustering?
if that's the level you're working at, then you ain't never gonna get the performance I get with a proper clustered scenegraph. The platform is largely irrelevant when you work at a logical level.

babis
08-16-2008, 02:27 AM
if that's the level you're working at..

No no, you misunderstood me, I actually don't have *the slightest idea* about working on clustering, I've just worked using them transparently. I mentioned performance because of the vast usage difference of clustering on windows vs linux-based platforms. My deduction might be wrong, but the statement is a fact :)

knackered
08-17-2008, 09:47 AM
what vast difference? what statement is a fact?
it's safe to say that if you've worked with render clustering transparently then you're not getting anywhere near the performance you should be getting.
Off topic, anyway.

Rubens BR
02-21-2009, 03:35 PM
Very funny stuff, unfortunately true...
http://www.youtube.com/watch?v=sddv3d-w5p4
Spread it!!!

Sledge Hammer
03-03-2009, 03:55 PM
I'm almost a newbie to graphics programming, although I've done a few things with OpenGL some years ago, mostly under Linux, for some assignments at uni.

Now I wanted to start graphics programming again and I was quite confident before I read this thread.

I'm still using XP. In fact I've just set it up fresh after getting new hardware, installed everything again (compilers, SDKs, other applications, all I need), and it took me a day.

I saw that OpenGL 3.0 drivers are ready and that Nvidia is even supplying some nice extra tools, so I was really convinced that jumping straight into OpenGL 3.0 plus shaders and not bothering about deprecated techniques would be the right thing to do now. My thinking was that OpenGL 3.0 is on par with DirectX 10, works on my system, is available on Linux etc., is future-proof, so perfect for me.

But now I'm really worried. Not so much about certain special features themselves, since I don't have the knowledge to use them right now anyway, but about the future. I mean, learning a complicated API and getting pro with it takes a lot of time, and if I decide to do it and also write lots of code, I surely ask myself if it's worth the effort or if I'm doing/learning something that will be of not much use to me tomorrow.

So when I read this thread, or that some graphics engine developers who always had OpenGL support included are not planning to have it in their newest engines, it really makes me wonder whether trying to get into OpenGL currently is a wise thing. I don't want to become an expert in something that has become a sinking ship.
I don't say that OpenGL has become that already, but I'm really worried about that possibility and it makes me hesitate to start with it.

This is not about me ofc, but I think it's generally a bad sign to everyone who has to make a choice between OpenGL and DirectX 9/10/11. They are probably all worried after reading this.

So in my opinion, if some companies are slowing the development of OpenGL down so much that it stays behind Direct3D and hardware development, they are killing OpenGL in the long term just to save some work on their existing code base!
I wish I knew what everyone on that committee is thinking, and whether they are really willing to do what's necessary to keep the API competitive or are just trying to buy some time for the people with legacy code.
I think that's really an important question for someone who's just making the decision for or against OpenGL.

Simon Arbon
03-03-2009, 09:57 PM
It really makes me wonder whether trying to get into OpenGL currently is a wise thing. I don't want to become an expert in something that has become a sinking ship.
OpenGL could never be a sinking ship; it is supported by nearly every operating system, and in most cases it is the primary 3D graphics API.
The ES version is becoming more and more widely supported across all sorts of PDAs and cell phones.
I would be more worried about DirectX now that Windows has begun its slow decline, especially with the global economic crisis making people less likely to waste hundreds of dollars upgrading Windows every few years when they can have a higher-performance OS for free.


Learning a complicated API and getting pro with it takes a lot of time, and if I decide to do it and also write lots of code, I surely ask myself if it's worth the effort or if I'm doing/learning something that will be of not much use to me tomorrow.
The most complicated part is writing the shaders, and GLSL, HLSL and Cg are all very similar, so if you learn one then you will be able to write shaders for any API.
The OpenGL API itself is actually quite simple, once you throw away all the deprecated stuff.

Most of the complaints in this thread relate to the way the ARB suddenly stopped all public communication and then released a spec that was nothing like what most people were expecting.
The restructuring of the API to use immutable objects, missing from the 3.0 spec, is expected to be done in the upcoming 3.1 release.
When combined with the recently released OpenCL spec, which adds the capability to manipulate OpenGL buffer data any way you can imagine, OpenGL will quickly surpass DirectX11 in technology and ease of use.

To bring yourself up to date you should first learn to use VBOs to transfer vertex data to the graphics card, and then try a few simple shader examples to learn how to use them.
Stay away from anything that mentions glBegin or PBOs or anything else listed as deprecated.
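A minimal sketch of that VBO path, assuming a linked shader program (here called prog) with its position attribute bound to location 0; entry points assumed loaded:

void draw_triangle(GLuint prog)
{
    static const GLfloat verts[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };
    GLuint vbo;

    /* in real code you would create and fill the buffer once, not per draw */
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    glUseProgram(prog);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}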

martinsm
03-04-2009, 01:48 AM
PBOs are not deprecated, and it is very normal to use them for various tasks, for example asynchronous data transfer to/from buffers.

pudman
03-04-2009, 07:39 AM
The restructuring of the API to use immutable objects, missing from the 3.0 spec, is expected to be done in the upcoming 3.1 release.

I can't see why one would expect that. If we learned anything from 3.0 it's that GL expectations are fantasies.

3.1 will be a good test of whether the ARB will actually be productive in the future. They have made no promises, no guarantees, so we can all be pleasantly surprised if they remove deprecated functionality, incorporate more extensions into core, or implement any suggestions found on this forum.

It's been almost 7 months since the release of the 3.0 spec. No news in GL land has a track record of not being good news.

Don't Disturb
03-04-2009, 09:00 AM
Sledge Hammer - you'll find developing in Direct3D to be a far more pleasant experience. Only use OpenGL if you need Linux support or you need Direct3D 10-level capabilities in Windows XP.

Sledge Hammer
03-04-2009, 09:46 AM
Stay away from anything that mentions glBegin or PBOs or anything else listed as deprecated.


OK, today I tried to inform myself a bit better, googled for 'Modern OpenGL' or something like that, and found some slides from a SIGGRAPH Asia 2008 presentation. The subject was 'Modern OpenGL: Its Design and Evolution' by Mark J. Kilgard, Nvidia, and Kurt Akeley, Microsoft.

Those slides were quite interesting. Then I came to the topic of display lists. glCallList was mentioned as OpenGL's most powerful command and the way to go for performance reasons. Mark J. Kilgard says that the current mechanism is just not flexible enough and needs to be enhanced. He gives some hints about what needs to change.
OK, but in OpenGL 3.0 display lists are marked as deprecated. Makes me wonder a bit how this decision was made. I would have thought that especially the graphics card / driver developers know best where the bottlenecks/flaws are in the current specification and what is needed to meet the ever increasing demands for more detail, realism and graphics power.

I mean, as I see it, the future direction needs to be chosen by those who need that graphics power/new features in the future and those who have to build the hardware for it and know best how the command processing etc. needs to be done to get that high performance and avoid bottlenecks.

Like I say, it's strange if some hardware developer says we need to go this way and enhance it, but the thing gets marked 'deprecated'. Sounds like a communication problem to me.

Simon Arbon
03-04-2009, 05:36 PM
PBOs are not deprecated, and it is very normal to use them for various tasks, for example asynchronous data transfer to/from buffers.
Yes, I should have been clearer.
I just meant that someone starting to learn OpenGL should stay away from tutorials and examples that use PBOs, and just use FBOs.
PBOs do have their uses, but for specific purposes that should wait until the learner is comfortable with the core API and ready to move on to more advanced topics.


glCallList was mentioned as OpenGL's most powerful command and the way to go for performance reasons.
The reason that display lists were marked for deprecation is that most of the reasons for using them will be replaced by VBOs and immutable objects (if those are introduced in OpenGL 3.1).
They do still have a performance advantage, on NVIDIA hardware especially, for batching draw calls for static geometry.
They were marked deprecated simply because NVIDIA was outvoted by the other companies on the ARB that don't want them.
This does not mean that they will definitely be removed, and even if they are, there will probably be new functions that allow you to get the same result a different way, and NVIDIA can still provide display lists as an extension if they want.

OpenGL is in a period of flux at the moment; this is not a bad thing, even if it does seem a bit chaotic.
This is all part of a process that will hopefully make OpenGL the best API it can possibly be.
Most of us here are waiting on the 3.1 spec to discover what direction the future OpenGL will actually take, but that does not mean that you need to wait until then to start learning OpenGL.
VBO's, FBO's and shaders will definitely be core components of modern OpenGL, so if you learn these first then you will be ready for the future.
When writing your programs, structure them so that the commands that set up the rendering state for each VBO, FBO and shader program are in a separate function, then record each to a display list and use glCallList to change the rendering state.
This should make it easier to convert when 3.1 is released (each display list becomes an immutable object, and each function becomes the initialisation routine for that object).
When 3.1 does come out you will probably need a good matrix-math library to replace the old OpenGL matrix functions.
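
For illustration, a rough sketch of the structure described above, assuming a pre-3.1 context where display lists are still available (the function and variable names are invented here, and which commands actually get recorded into a list should be checked against the spec, since object-management calls are executed immediately rather than compiled):

// Sketch only: wrap the state setup for one "material" in a function,
// record it into a display list, and replay it with glCallList at draw time.
GLuint recordMaterialState(GLuint program, GLuint texture)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    glUseProgram(program);                  // shader for this material
    glBindTexture(GL_TEXTURE_2D, texture);  // texture bindings, other server state...
    glEndList();
    return list;
}

// Later, per object:
glCallList(materialList);   // switch all the recorded state in one call
// ...then bind the object's VBO and draw as usual.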

martinsm
03-05-2009, 01:26 AM
PBO = Pixel Buffer Object, and you should not avoid them. You cannot replace them with an FBO.
You probably meant PBuffer, the platform-specific way of rendering to an offscreen buffer.

Rob Barris
03-05-2009, 12:15 PM
Hi Simon -

The general idea for 3.1 and beyond includes a few components:

- doing a more consistent and timely job of exposing hardware functionality
- reducing and removing deprecated functionality
- more strongly aligning on a set of common required features to slim down the extension forest
- more strongly schedule-driven process for updating the API regularly
- responding to specific ISV needs where possible
- providing developer guidance on migration to newer API revisions as early as possible.
- keeping the IHV / implementation work pipeline full - notice that 3.0 implementations are now arriving, and 3.1 / 3.2 specification efforts have been underway for some time already in parallel.

Jan
03-05-2009, 02:40 PM
Is there already some expected release date for 3.1? In what time-frame might we expect it? You mention very often that there is supposed to be a steady time-frame for each release from now on, but to be honest, it doesn't sound as if the ARB will reach this particular goal with 3.1 either.

Jan.

Rob Barris
03-05-2009, 03:51 PM
There is a pretty well defined release window for 3.1, but no details I can share yet.

Simon Arbon
03-05-2009, 04:44 PM
Can we at least have a hint on whether it:

Uses immutable state objects for render state switching and lpDrawArrays for geometry,
OR still uses display lists for either,
OR something completely different.

Lord crc
03-05-2009, 05:40 PM
There is a pretty well defined release window for 3.1, but no details I can share yet.


t_{to release} \in (0, \infty) is pretty well defined, doesn't help us much tho :P

Rob Barris
03-05-2009, 08:34 PM
There is a pretty well defined release window for 3.1, but no details I can share yet.


t_{to release} \in (0, \infty) is pretty well defined, doesn't help us much tho :P

Sure, let's start with that and then we can play the binary search game until it's more reasonable. Go ahead and lop off the back 99.999% of it to get started.

ZbuffeR
03-06-2009, 03:01 AM
Infinity is well defined as 100 years in computer world :D
100*(1-0.99999)*365*24 = less than 9 days

So : after or before sunday the 2009-03-15 ?

Rob Barris
03-06-2009, 11:19 AM
I guess if you get to set the meaning of "infinity", I can set the meaning of "%"..

Jan
03-06-2009, 12:12 PM
I heard for mathematicians infinity starts roughly at 12.

Brolingstanz
03-06-2009, 12:39 PM
Are we talking "infinity" or "finite and unbounded"? A sphere is finite but its surface is unbounded (lots of wiggle room there ;-)).

Jan
03-06-2009, 05:39 PM
I really don't bother much about such minor details ;)


What I do bother about VERY MUCH: OpenGL has become a major pain in the ass regarding vertex attributes, and I wonder whether there is anything in the pipeline to remedy this (though I fear that, if anything changes at all, it will get worse).

Back in "the days", if I wanted to render something, I set a glVertexPointer, glNormalPointer and glTexCoordPointer. In the shaders I simply used the built-in variables (gl_Normal, gl_MultiTexCoord*) and it was mostly fine.

Well, glTexCoordPointer is deprecated, and anyway I have int attributes and other stuff, so I prefer to use generic vertex attributes.

The problem with these is that I need to query the CURRENT shader to find out which bind point a certain attribute is attached to. That actually means that every time I change shaders, I would need to re-evaluate where to bind my vertex arrays; a shader might not use one of the available attributes, so some bind points change.

That means there is a specific attribute mapping for every shader/vertex-buffer combination. Sure, I CAN handle this, and I can even make it "fast" using VAOs (though they still seem to be buggy and don't speed anything up), storing one for every such combination.

But just because it's possible doesn't mean it isn't awkward...

What I would really like to do is simply tell the API "here is my shader" and "here is a VBO that consists of the attribute arrays 'position', 'texcoord', 'tangent' and 'whatever'", and have the driver sort out any mappings internally. Instead of "glGetAttribLocation('bla'); glVertexAttribPointer(location, ...)" at EACH shader switch, I'd rather say ONCE "glBindAttributeByName('bla', ...)" and have the driver take care of the rest until I deactivate the array again.


But I fear I will have to implement such an abstraction myself someday.

Jan.
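
For what it's worth, a minimal sketch of such an abstraction might look like the following. The helper name bindAttributeByName and the global cache are invented for illustration, a GL header/loader is assumed to be included, and the cache would need to be cleared whenever a program is relinked:

// Sketch only: hide the per-program attribute-location lookup behind a cache,
// so the string query is paid once per (program, attribute) pair, not per draw.
#include <map>
#include <string>
#include <utility>

std::map<std::pair<GLuint, std::string>, GLint> g_attribLocationCache;

void bindAttributeByName(GLuint program, const std::string& name,
                         GLint size, GLenum type, GLsizei stride, const void* offset)
{
    std::pair<GLuint, std::string> key(program, name);
    std::map<std::pair<GLuint, std::string>, GLint>::iterator it =
        g_attribLocationCache.find(key);
    if (it == g_attribLocationCache.end())
        it = g_attribLocationCache.insert(
                 std::make_pair(key, glGetAttribLocation(program, name.c_str()))).first;

    if (it->second < 0)
        return;                        // attribute not active in this program
    glEnableVertexAttribArray(it->second);
    glVertexAttribPointer(it->second, size, type, GL_FALSE, stride, offset);
}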

skynet
03-06-2009, 06:31 PM
Why don't you use your own fixed scheme that maps (semantic) attribute streams to attribute locations? You can enforce it using glBindAttribLocationARB().
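
A sketch of what such a fixed scheme could look like (the slot numbers and attribute names below are arbitrary; note that glBindAttribLocation only takes effect at the next glLinkProgram):

// Sketch only: one fixed location per semantic, enforced before linking.
enum { ATTR_POSITION = 0, ATTR_NORMAL = 1, ATTR_TEXCOORD = 2, ATTR_TANGENT = 3 };

void linkWithFixedLocations(GLuint program)
{
    glBindAttribLocation(program, ATTR_POSITION, "position");
    glBindAttribLocation(program, ATTR_NORMAL,   "normal");
    glBindAttribLocation(program, ATTR_TEXCOORD, "texcoord");
    glBindAttribLocation(program, ATTR_TANGENT,  "tangent");
    glLinkProgram(program);
}

// Every vertex buffer can then be set up against the same slots,
// no matter which shader happens to be bound:
glEnableVertexAttribArray(ATTR_POSITION);
glVertexAttribPointer(ATTR_POSITION, 3, GL_FLOAT, GL_FALSE, stride, positionOffset);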

Jan
03-06-2009, 07:30 PM
Hm, I didn't know about that function. Well, it is a start. But if I see this right, I need to fix the mapping for all attributes that I will ever use before I link any shader that uses them. So this looks like the complete opposite of what I am doing right now; it just replaces one inconvenience with another. I will have to think about this a bit more. But thanks for the info.

Jan.

Stephen A
03-06-2009, 09:29 PM
I have created an abstraction that works exactly like Jan's proposal. Since this is C# code, it is possible to use reflection and discover the mapping automatically at runtime, i.e. if the shader has a 'Position' attribute and the VBO has a 'Position' field, they will be bound automatically.

I would very much like to see something like "glBindAttributeByName" in a future OpenGL version. Both current methods (query locations on every shader change or use fixed locations only) are problematic, awkward to use and suboptimal.

martinsm
03-07-2009, 05:25 AM
Stephen A: you don't need to query locations on every shader change. Do it only when linking occurs. Vertex attributes change locations only on shader linking.

Brolingstanz
03-07-2009, 05:28 AM
Any chance of getting an unofficial preview of what's in store for 3.2?

Stephen A
03-07-2009, 07:01 AM
Stephen A: you don't need to query locations on every shader change. Do it only when linking occurs. Vertex attributes change locations only on shader linking.
Never claimed otherwise :)

I build a map of attributes when each shader is linked, then create a (cached) map of attributes for each VBO. On rendering, I use these maps to bind the correct locations.

If you think about it, this is a roundabout way of binding attributes: The driver knows the attribute names (you can query them) and also knows the attribute indices (you can also query them). Shadowing this information in the aforementioned maps is not optimal; it would be simple to expose a glBindAttributeByName and avoid this overhead, both in code and in memory.

Also note that this trick is all but impossible to pull off in metadata-poor C or even C++. You are more or less forced to use fixed attribute locations, which are a performance minefield.

skynet
03-07-2009, 08:56 AM
...and having the driver do a string lookup every time you call glAttribPointer("my_attribute", ...) is not a performance minefield? And in which way are fixed attribute locations a performance minefield? The only "problem" they pose is that you may have more _possible_ streams than available attribute slots. But that simply boils down to the observation that you don't want to bind them all at once anyway. You can allow some kind of "aliasing" between attributes that are unlikely to be used together in your own fixed scheme.

Btw, we're getting slightly off-topic ;-)

Stephen A
03-08-2009, 02:22 AM
The driver already maintains a hashtable of attribute names <-> locations, so I don't see how this is a performance minefield.

When you force your own, fixed attribute locations, you may be putting the driver out of the fast path. Check out this (http://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/attributes.php) (under the custom vertex attributes section) and this (http://ogltotd.blogspot.com/2006/12/setting-vertex-attribute-locations_09.html).

The following quote from the second link is especially damning:

It is possible to assign the same location to multiple attributes. This process is known as aliasing, and is only allowed if just one of the aliased attributes is active in the executable program. HOWEVER the implementation is not required to check for aliasing and is free to employ optimizations that only work in the absence of aliasing.
That's the definition of a minefield.

Brolingstanz
03-08-2009, 07:01 AM
Fortunately for me I only use 1 or maybe 2 vertex formats tops for everything (but then I don't have to deal with anything but my little world).

skynet
03-09-2009, 06:31 AM
Binding VBOs to attributes is one of the most frequently performed operations when rendering a frame; it can add up to several thousand calls per frame. These calls must be as fast as possible, and doing a hash lookup followed by a string comparison _will_ be slower than just using an index.

Just try it for yourself. For example, remove any caching of uniform-locations in your renderer and do a glGetUniformLocation() every time you want to update a uniform.

All the problems related to aliasing of built-in attributes are essentially gone since the last fixed function hardware went away. The hardware has to be able to accept any kind of attribute data at any attribute slot and can't make any assumptions about the data and format that is bound to certain attributes.

I never had any problems with fixed attribute locations, maybe because I never made (ab-)use of aliasing attributes. And with the introduction of GL3.0 / GLSL 1.30 the whole aliasing thing has become a non-issue anyway because there are no built in attributes anymore.

Btw, I find the "Clockworkcoders Tutorials" kind of funny. They show buggy code and a "misuse of color". What is a beginner supposed to learn from that?
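
To illustrate the kind of caching being argued for above (the struct and uniform names are invented for the example; the point is simply that glGetUniformLocation is a string lookup paid once per link, while per-frame updates use the cached integer locations):

// Sketch only: query uniform locations once after linking...
struct MaterialUniforms {
    GLint mvpMatrix;
    GLint diffuseColor;
};

MaterialUniforms cacheUniforms(GLuint program)
{
    MaterialUniforms u;
    u.mvpMatrix    = glGetUniformLocation(program, "mvpMatrix");
    u.diffuseColor = glGetUniformLocation(program, "diffuseColor");
    return u;
}

// ...then update them per frame by integer location, with no string lookups.
glUseProgram(program);
glUniformMatrix4fv(u.mvpMatrix, 1, GL_FALSE, mvpValues);
glUniform4fv(u.diffuseColor, 1, colorValues);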

Stephen A
03-09-2009, 10:15 AM
I'm pretty sure that VAOs are supposed to take care of the binding costs? (Disclaimer: I haven't read the VAO spec yet.)

In any case, I dislike the notion of using fixed attribute locations, in the same way that I dislike the notion of using OpenGL handles without calling glGen*. Yes, you can do it, but it's something that shouldn't even be allowed in a well-engineered API.

Ilian Dinev
03-10-2009, 01:32 PM
I really don't bother much about such minor details ;)

"glBindAttributeByName ('bla', ..)" and have the driver take care for the rest, until i deactivate the array again.
But i fear i have to implement such an abstraction myself someday.
Jan.
If you want, I can write the code for you, to let you use OpenGL like this:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=254176#Post254176
Then, Khronos can simply bring in-code GLSL semantics, bound to integer-values, like the Cg and HLSL compilers do - and let us all be happy.

Jan
03-10-2009, 02:39 PM
That is a very nice offer, but I doubt it could be written in an easy plug-and-play style, so I would need to do it myself anyway. But if you like writing a tutorial, go ahead; there are certainly many people interested to see how to use GL3 (or modern GL in general), myself included (if I have learned one thing over the years, it is that there are always things you have never heard about).

Jan.