The ARB announced OpenGL 3.0 and GLSL 1.30 today




Yfrwlf
08-13-2008, 10:27 PM
So perhaps they need to either rename this release OGL 2.2 or OGL3-alpha/beta, or hurry up, slap the missing stuff into OGL 3.1, and pretend 3.0 never really existed.

The other option is to fork OGL. It is open source software, after all. If Khronos wants to then merge those features back in, so be it, but certainly no one needs to wait on them to get their act together. They should have been more open to community suggestions to begin with; it sounds as if it's developers that need to be catered to here as best as you can, you know, those whom your API directly affects, so as with anything open source this communication is critical.

It sounds like OGL3.0 is an improvement, but that there is clear room to ramp up the improvements and get them out the door. OGL should be ahead of DX10. It's open source. It literally has a world of help waiting to tackle its problems, so I hope that will quickly happen to its remaining challenges now, if not by Khronos then by a different group, especially now that these issues are so exposed/hated.

Nicholas Bishop
08-13-2008, 10:32 PM
The other option is to fork OGL. It is open source software, after all.

No it's not.

Korval
08-13-2008, 10:34 PM
It is open source software, after all.

For the last time, OpenGL is not open source! The term "Open" in this case refers to the fact that it is an open standard defined by a standards body (rather than one entity, a la Direct3D or IrisGL).


They should have been more open to community suggestions to begin with

That would only be true if the ARB were unaware of the fact that we wanted Longs Peak. Which is arrant nonsense. The ARB went dark for a year because they knew the uproar that it would cause to release "not-Longs Peak". And they waited until now to tell us so that they could at least stick a spec under our noses and get a few of us to say, "Well, at least we have a spec with some core features."

You think nVidia put out those beta drivers for their health? No, they did it because at least a few of us will "just shut up and code" if they give us the means to do so.

Brianj
08-13-2008, 10:50 PM
If Khronos needs 6 months (http://www.theregister.co.uk/2008/08/13/opengl_firestorm/) to get the next revision out, and promises to keep us in the loop about what's going on - I'll be a patient little boy.

Korval
08-13-2008, 10:57 PM
Trevett also personally hopes OpenGL 3.1 can be delivered in six months' time

Emphasis added. I wouldn't be too... hopeful if all the Khronos head honcho can do is "hope".

Brianj
08-13-2008, 11:17 PM
If they need a little more time, that's fine. The worst part about the wait wasn't the wait itself - it was not knowing anything. If communication is an area where Khronos is willing to be a little more flexible, they can take all the time they want to get things right IMO. Leaving the community in the dark was just wrong, and I hope this issue gets brought up at Siggraph.

Zengar
08-13-2008, 11:40 PM
BTW, I was reading the spec, and do I understand it correctly that it is now possible to use feedback loops with FBOs when the filtering mode is set to nearest (4.4.3)? It is a shame that multisample textures are still not supported...

NeARAZ
08-14-2008, 12:28 AM
3/ The bytecode is smaller and faster to load (especially for those that have hundreds of shaders)
...
5/ The hardware vendors only need to write the back-end compiler.
6/ The load or run-time compilation will be slightly faster.
7/ As the bytecode has been pre-optimised you will get a better assessment of which hardware it will run on.


FYI, don't underestimate the work actually involved in loading from byte code (GL vs DX: how much time do you actually think is taken parsing source to tokens [GL only] vs the rest of the compile [DX and GL]). And don't take my word for it; read about this in:

http://ati.amd.com/developer/cgo/2008/Rubin-CGO2008.pdf
An interesting presentation, thanks.

However, I still think high level bytecode shaders do make a lot of sense. DX9 bytecode is somewhat too low level, sure. I'm thinking of more high level bytecode (a la LLVM level), where the front end compiler does preprocessing, lexing, parsing, dead code removal, and basic optimizations (without assuming that registers are 4-wide floats). Just optimize away no-ops and eliminate some common subexpressions.

Some of the issues I've had with GLSL that are purely in front-end compiler code (all on OS X, because on Windows we only care about D3D9):
* When used in a preprocessor macro, (a*b) would produce the correct result, whereas (b*a) would produce zero. A was a texture lookup, b was the result of calling a function that does some computation. This was on OS X 10.4.x; the incorrect behavior was on PPC machines (Intel was correct).
* Array initializers don't work on OS X 10.5.4 (that's latest drivers).
* Some sequences of #if .. #elif .. #endif were not working on OS X 10.4.x, #elif had to be changed to negative #if test.
* Multiplies by 1.0 sometimes are not optimized away. Lots of multiplies by 1.0 occur where certain shader combinations replace complex logic with basically a no-op (e.g. multiplying by a shadow term - for the non-shadowed shader, replace it with *1).
* Accidentally declaring unused uniform samplers in vertex shader code would drop to software rendering (why accidentally? because we put vertex & fragment shaders into one file, with preprocessor guards around the two main functions - see the sketch after this list).
* Including functions that are never called would overflow native instruction limits and drop to software or fail compilation.
* Some others that I forgot.

All of the above are preprocessor bugs, dead/unused code not being removed, or no-ops not being removed.
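
To make that single-file setup concrete, here is roughly what I mean - just a sketch, the guard names and the helper are illustrative, not our actual code:

    /* Compile one combined source file as either stage by prepending a #define.
       Assumes the GL 2.0 entry points are already loaded (GLEW or similar), and
       that the file has no #version line (if it did, that line would have to
       stay first and the define spliced in after it). */
    static GLuint compile_stage(GLenum type, const char *combined_src)
    {
        const char *guard = (type == GL_VERTEX_SHADER)
                          ? "#define VERTEX_SHADER 1\n"
                          : "#define FRAGMENT_SHADER 1\n";
        const char *sources[2] = { guard, combined_src };
        GLuint sh = glCreateShader(type);
        glShaderSource(sh, 2, sources, NULL);  /* the strings are concatenated */
        glCompileShader(sh);
        return sh;
    }

The combined file then wraps each main() in #ifdef VERTEX_SHADER / #ifdef FRAGMENT_SHADER; anything declared outside the guards (a sampler uniform, say) is seen by both stages, which is how the unused-sampler case above comes about.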

Sure, a driver internally does a lot of extra compiling, optimizations and whatnot, and it has plenty of chances to be buggy in there. So it's not like you'd be bulletproof against driver bugs. And yes, that actual compiling takes time, so it's not like loading shaders would suddenly be lightning fast.

However, splitting out front-end does leave less chances for the driver to be buggy (good!), and makes shader loading somewhat faster (good!). That's all I want.

Korval
08-14-2008, 12:41 AM
it is now possible to use feedback loops with FBOs when the filtering mode is set to nearest (4.4.3)?

What is a "feedback loop"?


It is a shame that multisample textures are still not supported...

You say that as if they would ever be supported.


When used in preprocessor macro, (a*b) would produce correct result, whereas (b*a) would produce zero. A was a texture lookup, b was a result of calling a function that does some computation. This was on OS X 10.4.x, incorrect behavior was on PPC machines (Intel was correct).

Depending on what "b" is, that may be perfectly correct behavior. IE, calling "b" may put the graphics card in one of those states that makes derivatives unavailable.

Also, what makes you say that this has anything to do with the front-end? The fact that it was in a macro has nothing to do with what the backend compiler does with it being one way vs. another.


Multiplies by 1.0 sometimes are not optimized away. Lots of multiplies by 1.0 result where some certain shader combinations replace complex logic with basically a no-op (e.g. multiplying by shadow term - for non-shadowed shader, replace with *1).

The specification doesn't guarantee that multiplies by 1.0 will be optimized away.

dorbie
08-14-2008, 12:42 AM
Trevett also personally hopes OpenGL 3.1 can be delivered in six months' time

Emphasis added. I wouldn't be too... hopeful if all the Khronos head honcho can do is "hope".

What do you want him to do? Tell all the members how to vote?

If he said anything other than "hopes" you'd have immediately called B.S.

Zengar
08-14-2008, 12:48 AM
What is a "feedback loop"?


feedback loop=rendering to a texture you read from (so having an active texture bound as a render target).
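
Something like this is the situation I mean - just a sketch, and whether the result is defined depends on the exact conditions in 4.4.3:

    /* Sketch: the same texture ends up attached to the bound FBO *and* bound
       for sampling. Assumes a GL 3.0 (or ARB_framebuffer_object) context is
       current and the entry points are loaded. */
    static void setup_feedback_loop(void)
    {
        GLuint tex, fbo;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        /* tex is still bound on the active texture unit, so any draw whose
           shader samples it is now reading from and writing to the same image. */
    }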

Simon Arbon
08-14-2008, 12:50 AM
NVIDIA OpenGL 3.0 Beta driver now available at:
http://developer.nvidia.com/object/opengl_3_driver.html

NVIDIADisplayWin2K(177_89)Int.exe Installed driver version 6.14.11.7789 successfully on my 8800GTX.

I now have new extensions:
GL_ARB_draw_instanced
GL_ARB_half_float_vertex
GL_ARB_framebuffer_object
GL_ARB_geometry_shader4
GL_ARB_texture_buffer_object
GL_ARB_vertex_array_object

You also need the new version of nvemulate.
NOTE: I had to switch to an administrator account to enable GL 3.0; it doesn't save the setting from a user account.
This added the following to WGL_ARB_extensions_string:
WGL_ARB_create_context
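
For anyone else trying the beta, creating a 3.0 context through the new entry point looks roughly like this (a sketch only; it assumes a legacy context is already current on the DC so wglGetProcAddress can return the function, and that you have a wglext.h with the ARB tokens):

    #include <windows.h>
    #include <GL/gl.h>
    #include "wglext.h"   /* WGL_CONTEXT_* tokens and the function pointer type */

    HGLRC create_gl3_context(HDC hdc)
    {
        PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
            (PFNWGLCREATECONTEXTATTRIBSARBPROC)
                wglGetProcAddress("wglCreateContextAttribsARB");
        if (!wglCreateContextAttribsARB)
            return NULL;                  /* driver doesn't expose the extension */

        const int attribs[] = {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 0,
            0                             /* attribute list is zero-terminated */
        };
        return wglCreateContextAttribsARB(hdc, NULL, attribs);
    }

The returned context is made current with wglMakeCurrent as usual; if the driver can't honour the requested version the call returns NULL.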

Korval
08-14-2008, 12:53 AM
What do you want him to do? Tell all the members how to vote?

Yes. If he is actually in charge to some degree, if he has actual power, then he can dictate that they pass a 3.1 spec that removes the deprecated features by date X. And if they do not, then maybe he can fire them or impose sanctions on the ARB or something.


If he said anything other than "hopes" you'd have immediately called B.S.

No. The only screwup with regard to OpenGL he's presided over is the GL "3.0" one. The ARB has been screwing GL over for years now. If he's saying that they made a mistake and will move post-haste to correct it, then I would be willing to entertain the possibility that he was telling the truth.

But only to the degree that he had power over the ARB. If all he is is a figurehead, then his words would be of no value. Then again, if all he is is a figurehead, then his hopes have no meaning anyway.


feedback loop=rendering to a texture you read from (so having an active texture bound as a render target).

I'm fairly certain the spec stated that such things had undefined results if they were from the same texture level. And yes, I mean the "3.0" spec.

[edit] From the spec:


Special precautions need to be taken to avoid attaching a texture image to the currently bound framebuffer while the texture object is currently bound and enabled for texturing. Doing so could lead to the creation of a feedback loop between the writing of pixels by the GL's rendering operations and the simultaneous reading of those same pixels when used as texels in the currently bound texture. In this scenario, the framebuffer will be considered framebuffer complete (see section 4.4.4), but the values of fragments rendered while in this state will be undefined. The values of texture samples may be undefined as well, as described at the end of the Scale Factor and Level of Detail subsection of section 3.9.7.

Mars_999
08-14-2008, 12:56 AM
Ok, please clear this up: are VBOs still here, or did they get renamed to vertex array objects? Because VBOs were a part of GL 1.5.

scratt
08-14-2008, 01:01 AM
I have a question..

I am working on a long term project. OpenGL based.
There is a fair amount of legacy code, and an awful lot that will rely on 'deprecated' features. Alongside that is a fair amount of 'cutting edge' OpenGL, which is heavy in shaders and Buffer Objects etc..

Launch for this title is going to be on hardware released 4Q this year, and beyond.

What would people here suggest I do in terms of preparation?

martinsm
08-14-2008, 01:03 AM
As far as I understood, a VAO is for setting lots of VertexAttribPointers in one call (BindVertexArray). VBOs are still there.
Similarly to display lists, you "record" VertexAttribPointers and later just bind the VAO to automatically set the previously stored vertex attribute pointers.

Korval
08-14-2008, 01:05 AM
Ok, please clear this up, are VBO still here or did they get renamed to vertex array object? Because VBO's were apart of GL1.5

VAOs do not replace VBOs. VAOs are state objects, representing all of the glVertex*Pointer calls that an application would use. So you can make a bunch of glVertex*Pointer calls, then store them in a VAO, and simply bind that VAO instead of making a half-dozen glVertex*Pointer calls and incurring their associated overhead.
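
In sketch form (the buffer names and attribute indices here are just an illustration):

    /* Record the attribute setup once... */
    static GLuint record_vao(GLuint positionVBO, GLuint normalVBO)
    {
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glEnableVertexAttribArray(0);

        glBindBuffer(GL_ARRAY_BUFFER, normalVBO);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glEnableVertexAttribArray(1);

        glBindVertexArray(0);
        return vao;
    }

    /* ...then replay it with a single bind at draw time:
         glBindVertexArray(vao);
         glDrawArrays(GL_TRIANGLES, 0, vertexCount);   */

Note that the GL_ARRAY_BUFFER binding itself isn't part of the VAO's state; what gets captured is the buffer referenced by each glVertexAttribPointer call, plus the enables and the element array binding.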


What would people here suggest I do in terms of preparation?

Rid yourself of the legacy cruft. Or rid yourself of OpenGL entirely. Either one is good.

The only legacy stuff I've got is the glMatrix stuff (for shaders), and that's because I'm too lazy to make them real uniforms.

LogicalError
08-14-2008, 01:15 AM
So, any news? The BOF was yesterday, but I can't find any news about what was said anywhere...

scratt
08-14-2008, 01:20 AM
Rid yourself of the legacy cruft. Or rid yourself of OpenGL entirely. Either one is good.

Getting rid of OpenGL is not really an option. :)

I obviously need to read the spec in more detail - my apologies but I have not had time to do so yet - I was picking up some quite worrying stuff on a couple of mailing lists..

Most specifically, that pretty much all the functionality of glLoadMatrix, attribute stacks et al. and the fixed function pipeline is going. Cool, but this does kind of shoot a lot of quick prototyping in the foot.

Am I understanding this appendix in the spec. correctly?

Mars_999
08-14-2008, 01:24 AM
So were these VAO extensions under 2.1?

dorbie
08-14-2008, 01:38 AM
The other option is to fork OGL. It is open source software, after all.

No it's not.

The only people who can credibly fork OpenGL (and it'd have to be called something else) are a coalition of NVIDIA and AMD (ATI) and perhaps Intel. But we don't know if these are the primary culprits; it's certainly been excruciating getting them to agree on just about any major feature in the past, whether the differences were mostly cosmetic or profound.

D3D only advances because Microsoft dictates the interface and feature set. Every generation or two, one or other of the big vendors gets frustrated with the feature set & feels the pain. The ARB has had similar issues (e.g. glslang being forced on NVIDIA over Cg).

In reality a fork would propose a new API with maybe 2 or 3 voting companies on a possibly larger steering committee, and it would probably mire down without binding, credible technical arbitration (not a bunch of pipsqueaks out-voting the 800lb gorilla). It's not just about the API, it's about the drivers; it's MOSTLY about the drivers, and in this case it's about cross platform / cross vendor drivers.

To me it is amazing that you have these hardware companies with their APIs and now features effectively controlled by one operating system vendor that also ties that API exclusively to one OS. But they've made this bed for themselves over the years through their inability to work together and free themselves. I could speculate that there's a lack of understanding at the highest levels in each of these companies about the critical strategic importance of controlling your own API, or at least not being beholden to an entity like Microsoft for it; the Vista DX10 fiasco underscores this IMHO. It's probably way too late though, Microsoft is the devil they know, and the jury is in: long term, a cross vendor graphics API benefits from dictatorship once the dictator smartens up. During this process the hardware guys rotate through winning and losing the feature cycle war.

dorbie
08-14-2008, 01:53 AM
Rid yourself of the legacy cruft. Or rid yourself of OpenGL entirely. Either one is good.

Getting rid of OpenGL is not really an option. :)

I obviously need to read the spec in more detail - my apologies but I have not had time to do so yet - I was picking up some quite worrying stuff on a couple of mailing lists..

Most specifically that pretty much all the functionality of glLoadMatrix, attribute stacks et. al. and the fixed function pipeline is going. Cool, but this does kind of shoot a lot of quick prototyping in the foot.

Am I understanding this appendix in the spec. correctly?

Yes, except that it may or may not go depending on the whim of the vendor; fully compatible "deprecated" stuff might be stinking up the API for years to come, especially if they perceive it as an advantage to leave it lying around.

After this fuss hopefully it will be gone at 3.1.

To draw a single triangle you'll have to set up generic attribute arrays, name them in a vertex shader, write a vertex and fragment shader with linked registers passing between them, and voila, first light. To move meaningfully you'll need your own matrix library. In practice there will be sample code and shaders plentifully available that show a basic xform pipeline for prototyping, and matrix utility libs will also abound.
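
As a rough sketch of that "first light" shader pair (the names a_position, u_mvp and fragColour are made up for illustration):

    /* Minimal GLSL 1.30 pair: one generic position attribute, one
       application-supplied matrix uniform, one colour output. */
    static const char *vs_src =
        "#version 130\n"
        "in vec3 a_position;\n"
        "uniform mat4 u_mvp;         // filled from your own matrix library\n"
        "void main() {\n"
        "    gl_Position = u_mvp * vec4(a_position, 1.0);\n"
        "}\n";

    static const char *fs_src =
        "#version 130\n"
        "out vec4 fragColour;\n"
        "void main() {\n"
        "    fragColour = vec4(1.0); // flat white: first light\n"
        "}\n";

    /* Compile and link as usual, then per frame (triangleVAO and mvp are
       assumed to exist, and a_position bound to attribute 0):
         glUseProgram(prog);
         glUniformMatrix4fv(glGetUniformLocation(prog, "u_mvp"), 1, GL_FALSE, mvp);
         glBindVertexArray(triangleVAO);
         glDrawArrays(GL_TRIANGLES, 0, 3);   */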

So yes, it will be trickier for novices, considerably so, but it will be purer and more generically programmable. The concept of the fixed function pipeline or any of its named attributes won't have special status, although some operations will still exist, like clipping and depth test.

P.S. in some senses there will be less to learn, for example you'll learn about coding shaders, rather than 'special' state information to control a fixed function pipeline. So I think you'll need to be slightly more proficient at generic graphics to start, but there will be less to know and learn about the actual API & associated calls & tokens. Writing shaders is a rich area in itself, but you don't have to start complex.

H. Guijt
08-14-2008, 01:57 AM
NVIDIA OpenGL 3.0 Beta driver now available at:
http://developer.nvidia.com/object/opengl_3_driver.html

NVIDIADisplayWin2K(177_89)Int.exe Installed driver version 6.14.11.7789 successfully on my 8800GTX.


Question: will there be OpenGL3 support for older cards?

If there will not be any:

1. Is this for marketing or technical reasons?

2. How do the parties involved think this will help OpenGL3 adoption?

Chris Lux
08-14-2008, 02:02 AM
Question: will there be OpenGL3 support for older cards?

If there will not be any:

1. Is this for marketing or technical reasons?

2. How do the parties involved think this will help OpenGL3 adoption?

no, no support for <G80 class hardware. the GL3 spec clearly requires hardware features only present in current generation hardware.

NeARAZ
08-14-2008, 02:03 AM
When used in preprocessor macro, (a*b) would produce correct result, whereas (b*a) would produce zero. A was a texture lookup, b was a result of calling a function that does some computation. This was on OS X 10.4.x, incorrect behavior was on PPC machines (Intel was correct).
Depending on what "b" is, that may be perfectly correct behavior. IE, calling "b" may put the graphics card in one of those states that makes derivatives unavailable.
No dynamic branching was used there, no infinities, NaNs or other stuff. One was sampling a regular RGBA 8 bit/channel texture, the other was computing something (don't remember what, some lighting calculation - dot products and such).


Also, what makes you say that this has anything to do with the front-end? The fact that it was in a macro has nothing to do with what the backend compiler does with it being one way vs. another.
Because if I manually pasted in the preprocessor defines into actual code, all was working. It amazes me as well how a preprocessor could be that broken, but it was.


The specification doesn't guarantee that multiplies by 1.0 will be optimized away.
Sure. But I'd expect any decent compiler to optimize them away. This can be done safely in the basic optimizer (or is there something I'm missing?).

Anyway, all that I was saying is that I'd like the GLSL preprocessor, lexer, parser and basic optimizer to be separate from the driver, in a separate open library or tool. That would lift off some complexity from GLSL, and would make it a bit more stable and a bit faster to load.

It's just my wish, and in fact it's quite off-topic in the context of GL3.0 (this was never promised... unlike the cleanup of GL itself, which GL3.0 totally failed to deliver).

dorbie
08-14-2008, 02:16 AM
Question: will there be OpenGL3 support for older cards?

If there will not be any:

1. Is this for marketing or technical reasons?

2. How do the parties involved think this will help OpenGL3 adoption?


No.

1) For very sound technical reasons, OpenGL 3 raises the features bar especially in shaders and mandates support for new features so you don't have to worry about it being supported. You should no more expect to run OpenGL 3 on an old card than you can expect to run DX10.

2) It'll help it just great because this is *exactly* what OpenGL 3 is about, new features with guaranteed support and the prospect of a cleaner API. If you want to run in hardware on those old cards write OpenGL 2.0 code.

arekkusu
08-14-2008, 02:57 AM
So were these VAO extensions under 2.1?

Yes (http://www.opengl.org/registry/specs/APPLE/vertex_array_object.txt). It has been shipping on the Mac since 2002.

H. Guijt
08-14-2008, 03:32 AM
No.

1) For very sound technical reasons, OpenGL 3 raises the features bar especially in shaders and mandates support for new features so you don't have to worry about it being supported. You should no more expect to run OpenGL 3 on an old card than you can expect to run DX10.

2) It'll help it just great because this is *exactly* what OpenGL 3 is about, new features with guaranteed support and the prospect of a cleaner API. If you want to run in hardware on those old cards write OpenGL 2.0 code.

Ok, thank you for this confirmation.

For a developer it will only make sense to start using GL3, then, once the number of GL3-capable cards is sufficiently high. Given that DX10 adoption is also pathetic so far, this is clearly not intended for the short term. It seems to be a rather risky strategy to develop an API that will only become attractive in a matter of years - GL2 driver development is likely to come to a halt almost immediately (why invest in an old version of an API, after all), but GL3 software production is not likely to start for a few years.

Let me express some disappointment, then, that GL3 neither runs on all GL2-capable cards (which I thought was a design goal), nor adopts all features of DX10-capable cards - it rather seems to be the worst of both worlds.

Chris Lux
08-14-2008, 03:37 AM
For a developer it will only make sense to start using GL3, then, once the number of GL3-capable cards is sufficiently high. Given that DX10 adoption is also pathetic so far, this is clearly not intended for the short term.
D3D10 capable hardware is quite common. It has been on the market for around 2 years now, with hardware available in all ranges from low cost to high end.

Only Vista is not widespread, which is the only reason why D3D10 is not used more widely.


Let me express some disappointment, then, that GL3 neither runs on all GL2-capable cards (which I thought was a design goal), nor adopts all features of DX10-capable cards - it rather seems to be the worst of both worlds.
GL3 is not the new API it was supposed to be, so only new features were added. So you can code for DX9 level hardware using GL2 without losing anything.

elFarto
08-14-2008, 03:54 AM
From here (http://www.opengl.org/registry/specs/ARB/wgl_create_context.txt):


Version 10, 2008/04/08 - Changed "lite" to "preview" (still open for discussion however), and changed version "2.2" to "3.0" following the marketing TSG recommendation.
Looks like some people had some sense after all.

Regards
elFarto

Hampel
08-14-2008, 04:39 AM
Are there any infos about the Siggraph OpenGL 3 BOF available yet?

bobvodka
08-14-2008, 04:46 AM
From here (http://www.opengl.org/registry/specs/ARB/wgl_create_context.txt):


Version 10, 2008/04/08 - Changed "lite" to "preview" (still open for discussion however), and changed version "2.2" to "3.0" following the marketing TSG recommendation.


Well, I lol'd.
Worst. Recommendation. Ever.

LogicalError
08-14-2008, 05:00 AM
Are there any infos about the Siggraph OpenGL 3 BOF available yet?

not afaik. I asked the same question, got no answer.

CrazyButcher
08-14-2008, 05:07 AM
only vista not widespread which is the only reason why D3D10 is not used more widely.

what about gaming consoles not on par with DX10 technically? (though they have the benefit of more direct access).

typically in entertainment you would build for the masses. I am not sure how reliable the "Steam" stats are, but even there the GL 2.0 cards dominate by far. Imo it's not just the OS.

I hoped GL 2.x generation cards would benefit from all this as well; yes, a few extensions (at least for the NV driver) will ship on GL 2.x too. But still, until the time GL 3.x cards really dominate, we will have to hope for GL 2.x drivers getting better (read stability/speed)... which is less likely, as mentioned, when GL 3.x drivers also need to be taken care of.

why not "deprecate" and "direct_state_access" first, which would have the largest userbase, before adding new stuff?

Chris Lux
08-14-2008, 05:10 AM
I hoped gl2.x generation cards would benefit from all this as well, yes a few extensions (at least for nvi driver) will ship on gl 2.x, too. But still until the time gl3.x cards really dominate, we will have to hope for gl2.x drivers getting better.... which is less likely as mentioned when gl3.x drivers also need to taken care of.

why not "deprecate" and "direct_state_access" first, which would have the largest userbase, before adding new stuff?
i was hoping for the same thing, i only stated the current GL3 state, where it is nothing more than a bunch of extensions gone core.

with the longs peak effort i could have lived with GL3 being just DX9 level, but now this is a different story.

NeARAZ
08-14-2008, 05:18 AM
For a developer it will only make sense to start using GL3, then, once the number of GL3-capable cards is sufficiently high. Given that DX10 adoption is also pathetic so far, this is clearly not intended for the short term.
D3D10 capable hardware is quite common. it is on the market for around 2 years now with hardware available in all ranges from low cost to high end.
Depends on what your target market is. DX10 capable cards are high in numbers, but can be low in market share.

For something like the more casual/small games market, based on our data, around 5.7% of players have a DX10 capable card. About half of them have Vista (so could actually use DX10). Targeting that as your baseline would result in a much smaller market. Of course, in larger/AAA games, the hardware picture is different (see Steam stats).

(for the record, about 30% have SM3.0+ capable card, about 70% have SM2.0+ capable card)

Mark Kilgard
08-14-2008, 05:24 AM
http://www.opengl.org/registry/specs/EXT/direct_state_access.txt

interesting how AMD is missing from contributors
The presence or absence of any particular hardware vendor is less notable than the fact that numerous engineers at ISVs were ready and willing to contribute feedback and comments on the specification. It's rather atypical for OpenGL extension specifications to have a high degree of review and feedback from ISVs. I particularly appreciate Daniel Koch's detailed feedback.

- Mark

Chris Lux
08-14-2008, 05:33 AM
Depends on what your target market is. DX10 capable cards are high in numbers, but can be low in market share.

...

(for the record, about 30% have SM3.0+ capable card, about 70% have SM2.0+ capable card)

as i said: with the current thing that GL3 ended up being, it is not a big deal to just use GL2.x, because GL3 does nothing else than give you new features. features which you cannot use if your baseline is DX9 hardware. it is the same as using and targeting D3D9 or D3D10, so you lose nothing by using GL2.x. we could have gained something with longs peak, but this did not happen.

one big win is you can use D3D10 hardware features on XP (or other systems) and market it like that...

PkK
08-14-2008, 06:58 AM
What happened to Longs Peak?
I want to ask you to take a deep breath, let this all sink in a bit, and then open up the OpenGL 3.0 and GLSL 1.30 specifications we just posted that have all new stuff clearly marked. Hopefully you'll agree with me that there's quite a lot of new stuff to be excited about.

http://www.opengl.org/registry/doc/glspec30.20080811.withchanges.pdf
http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.30.08.withchanges.pdf


However these specifications differ from the ones without highlights. I just looked at the first page of the GLSL spec and noticed that it has an additional line (the Intel copyright) in magenta that isn't in the normal spec.
Are there further, more important differences?

Philipp

Timothy Farrar
08-14-2008, 09:54 AM
Seems like a lot of emo is getting in the way of an objective look at the GL API situation.

Concerning GL3's missing object model and missing DX10 constant buffers, the end result of this functionality for the developer was batch performance optimization and cleaner code. One thing DX10 completely fumbled on was support for threading, so that threads could use the driver to build command buffers (the internal structure the driver builds to issue commands to the GPU) in parallel without sync point issues (see the DX11 changes). I'm guessing that the immutable object model alone might at best give an under-25% improvement in "draw call" bound code, whereas actual support for threading could yield somewhere around a 50%-75% improvement just for having one extra thread.

So now if the ARB needs more time to get the object model right, including proper threading support, so be it, let them have it! DX10 still hasn't done so. In the meantime you still have working code, which really is still competitive in performance, and you won't have to re-engineer your engine multiple times due to huge mistakes in API design. Estimate the cost and time of re-programming and re-engineering your engine for the DX9->DX10 conversion (constant buffers + object model) and then again for the DX10->DX11 conversion (threading + etc).

Constant buffers are really tied into the object model, and using them really does require re-thinking your engine design to work around the lack of fine grained updates of constants. So it's not unexpected that we wouldn't see constant buffers until the object model is finished.

As for the other core DX10 functionality of geometry shaders, texture buffer objects, and instancing not getting into the core GL3 spec, these are available as GL3 ARB extensions. I think it is rather clear that the reason these are extensions is to enable GL3 drivers to get built faster. So really I'm guessing the issue isn't the ARB but rather the vendors, who already have this functionality built into their DX10 drivers and don't see the cost to benefit ratio of placing/porting it into the GL drivers.

So if you really want GL to fail, just keep bitching, don't be objective or constructive, and just keep giving the vendors another reason not to want to bother to put any time into GL drivers.

Rob Barris
08-14-2008, 10:09 AM
As was discussed at the BoF, the #1 reason that geom shader, texture buffer objects, and instanced rendering did not get integrated into the core for 3.0 was schedule.

Re buffers holding uniforms, this is high on the priority list, but the new design hasn't completely converged yet. When it does I would anticipate going through the same cycle of spec, implement ext, prove out and benchmark, then integrate into core.

(ISV hat) improving the speed with which we can update big batches of GLSL uniforms is very important to us.

Mars_999
08-14-2008, 10:11 AM
For a developer it will only make sense to start using GL3, then, once the number of GL3-capable cards is sufficiently high. Given that DX10 adoption is also pathetic so far, this is clearly not intended for the short term.
D3D10 capable hardware is quite common. it is on the market for around 2 years now with hardware available in all ranges from low cost to high end.
Depends on what your target market is. DX10 capable cards are high in numbers, but can be low in market share.

For something like a more casual/small games market, based on our data, around 5.7% of players have DX10 capable card. About half of them have Vista (so could actually use DX10). Targeting that as your baseline would result in much smaller market. Of course, in larger/AAA games, the hardware picture is different (see Steam stats).

(for the record, about 30% have SM3.0+ capable card, about 70% have SM2.0+ capable card)

Actually, I just saw the numbers over at Tom's, and they said Nvidia GF8 and above market sales are 80 million plus for them, and this doesn't include ATI's DX10 GPUs. I'd say that is a decent market to sell to; hell, if I could sell 1/10 of that I would be happy, and I couldn't care less about the few hundred million DX9 cards, as long as I could code with the newest features and move progress along, vs. keeping it stale.

Timothy Farrar
08-14-2008, 10:18 AM
As was discussed at the BoF, the #1 reason that geom shader, texture buffer objects, and instanced rendering did not get integrated into the core for 3.0 was schedule.


For the few of us who couldn't make Siggraph this year, is there a place where we can download (now or after siggraph is over) any BoF notes, so we can get up to speed?

Rob Barris
08-14-2008, 10:26 AM
There were about 250 people there, and at least one attendee was camcordering for most of the time.

It looks like the gears are turning to convert the slides shown into PDF's and have them posted but I do not have an ETA on it.

In the meantime any reasonably bounded questions on the content presented should not be hard to answer.

bobvodka
08-14-2008, 11:21 AM
Estimate the cost and time of re-programming and re-engineering your engine for the DX9->DX10 conversion (constant buffers + object model) and then again for the DX10->DX11 conversion (threading + etc).


While the initial DX9->DX10 changeover is harsh (much like Longs Peak was going to be) if you have an established code base, DX10->DX11 is going to be nowhere near as bad. If you aren't using a threaded engine already you gain nothing from the threads, but lose nothing either, and if you are using a threaded engine you'll want to rip out the old code and replace it, which, if you've designed it well, probably won't take that long.

Looking at everything else... well, there are no massive changes; granted, tessellation and compute shaders bring in two new pipeline stages, but those would have to be added anyway. The reason for this is that DX11 is a superset of DX10; other features can be added as you need them (dynamically linking shaders looks VERY useful and, unless I've missed something, makes GLSL look even worse than it is).

Edit:
As for giving them more time... well, let me pick myself up off the floor from laughing. It's not like we pressured them for this release as it was; all that was wanted for the last 10 months was an update on what was going on. If, 8 months ago, someone had said 'hey, the object model isn't going to happen yet because we have issues, but we'll improve OpenGL 2.1 to 2.2' there would have been some unhappy people, but at least we'd have been kept updated.

Maybe emotion is getting in the way, but frankly when you've been told something is coming, you've given feedback on it, and then you have it pulled out from under you and your input effectively dismissed, unless you are dead inside I think you've a right to be pissed.

So, yes, for OpenGL2.2 it's a good API, but for 3.0, well it's a step forward, in much the same way that a drunk falling into a gutter has taken a step forward.

Timothy Farrar
08-14-2008, 12:06 PM
In the meantime any reasonably bounded questions on the content presented should not be hard to answer.

Any vendor ETAs released as to expected GL3 driver support besides NVidia?

dorbie
08-14-2008, 12:17 PM
No.

1) For very sound technical reasons, OpenGL 3 raises the features bar especially in shaders and mandates support for new features so you don't have to worry about it being supported. You should no more expect to run OpenGL 3 on an old card than you can expect to run DX10.

2) It'll help it just great because this is *exactly* what OpenGL 3 is about, new features with guaranteed support and the prospect of a cleaner API. If you want to run in hardware on those old cards write OpenGL 2.0 code.

Ok, thank you for this confirmation.

For a developer it will only make sense to start using GL3, then, once the number of GL3-capable cards is sufficiently high. Given that DX10 adoption is also pathetic so far, this is clearly not intended for the short term. It seems to be a rather risky strategy to develop an API that will only become attractive in a matter of years - GL2 driver development is likely come to a halt almost immediately (why invest in an old version of an API, after all), but GL3 software production is not likely to start for a few years.

Let me express some disappointment, then, that GL3 neither runs on all GL2-capable cards (which I thought was a design goal), nor adopts all features of DX10-capable cards - it rather seems to be the worst of both worlds.




You have to start somewhere and you have to be forward looking.

The features promoted to core in GL2 have been held back to maintain compatibility; doing the same with GL3, to the extent of maintaining compatibility with GL2, would completely undermine one of its primary goals. Sorry, you're missing a key point of having GL3. Now that doesn't mean you cannot look at hardware and shoot for decent support, but it shouldn't be the biggest tent you can erect.

GL2 driver development will not be dead; there's inevitably a period of GL2->GL3 transition. But given what GL3 as presented actually is, that statement is ridiculous: GL2 pretty much (unfortunately) persists within GL3, and can be used with or without the new features.

zed
08-14-2008, 12:49 PM
The reason for this is because DX11 is a superset of DX10
so you're implying MS should call d3d11, d3d10.2?
the differences from d3d10->d3d11 are certainly less than gl2.1->gl3.0.
it's only a name :)

personally I wasn't interested in gl3.0 + in fact would have preferred them to release ogl2.0es + delay gl3.0 a year or two, for the reasons I've previously mentioned.
Thus I'm not at all disappointed with the current situation.

OTOH the lack of communication by the ARB has been terrible, though I'd like to thank Rob Barris for actually talking over the past couple of months. Perhaps he or some other person could post a message once a month on this board, giving us a rough idea of what the latest developments have been. The newsletter obviously didn't work; something less rigid like a message post (progress update sep 2008) would perhaps give better results.
It would certainly improve the ARB's image, and as ppl will tell you, image is everything (see the above OpenGL naming '2.2/3.0' issues).

Rob Barris
08-14-2008, 01:36 PM
In the meantime any reasonably bounded questions on the content presented should not be hard to answer.

Any vendor ETAs released as to expected GL3 driver support besides NVidia?

A representative from AMD indicated that they are targeting Q1 2009 for a complete 3.0 release, with some number of betas expected between now and then. It was also pointed out that they are planning to implement the GL3 extension pack (geom shader, texbo, instancing).

knackered
08-14-2008, 01:49 PM
what does 'indicated' mean, in that context?

Chris Lux
08-14-2008, 01:55 PM
can someone who was at the BOF session please summarize some points. mainly i am interested in the Q&A session, the answers to the questions rob barris got from the other thread, and some insight into where it is going from here. i mean, how is communication planned in the future, for example will the pipeline newsletter be picked up again etc.?

please let us know something and lift the unnecessary cone of silence.

Rob Barris
08-14-2008, 02:01 PM
As simply as I can put it, we wanted to have a very high level of confidence this time around that the process was complete and implementations could move forward before announcing the specifics.

Releases > Roadmaps.

There are some IP rules that limit what we can and can't share during the Khronos process - this reduces total transparency of course - but that doesn't stop us from having some focused discussions about refinements that people want.

Chris Lux
08-14-2008, 02:05 PM
i found the newsletter (despite its now obsolete content) really good. especially when it really appeared every 3 months.

maybe this way the process to GL 3.1 can be made more open, as far as you all can go. i think the feedback of the community on some directions can help, now that the future direction is defined (deprecation model, EXT_direct_state_access etc.).

Korval
08-14-2008, 02:27 PM
but that doesn't stop us from having some focused discussions about refinements that people want.

No, it just stops you from providing those refinements.

You already know what we want. We've told you time and again what it is we want. Having to constantly repeat ourselves, only to be ignored in the end, is ultimately very silly.

knackered
08-14-2008, 04:00 PM
<columbo>
err, just one more thing....
</columbo>
why didn't the ARB just decide to write a full 2.1 emulator on top of the new object model? It would only have to be written once, by one organisation, and you could plug it into every IHV's "GL3" drivers.
get nvidia to provide their display list compilation code and you're good to go.
So why wasn't this approach considered, rob/barthold/mark?

None of this makes sense, I'm not happy with the explanation...."1000's of lines of GL code" - how could a renderer be written with 1000's of lines of GL code??!

Chris Lux
08-14-2008, 04:21 PM
why didn't the ARB just decide to write a full 2.1 emulator on top of the new object model? It would only have to be written once, by one organisation, and you could plug it into every IHV's "GL3" drivers.
get nvidia to provide their display list compilation code and you're good to go.
So why wasn't this approach considered, rob/barthold/mark?
as far as i can remember it was considered, but we got no answer why it was not enough. i can only speculate that no one wanted to be the one to code it.

knackered
08-14-2008, 04:40 PM
I'd have coded it, and maybe charged a nominal paypal fee for my trouble. I've already done a GL-to-d3d8 wrapper in the dim and distant past, and it was EASY - and there'd be no handedness/origin/transpose bollocks to contend with this time.

...edit...
Having said that, I'd be going from a state machine to an object model this time, so maybe I've understated the difficulties somewhat. That's the advantage of being an anonymous coward shouting from the sidelines.
Maybe pay the gremedy people to do it, it'd be right up their alley.

knackered
08-14-2008, 05:06 PM
so let me get this straight:
1/ some unnamed companies objected to the new object model because they'd have to change 10's of thousands of lines of code.
2/ These same companies want their software to work on past/current/future generations of hardware unchanged.
3/ But they also want to be able to take advantage of the latest hardware features without changing old code.

Now to me there's a clear divide between the first two requirements and the third.

No company in my experience would have had a problem with a split in API's, so long as the old API were maintained for a reasonable length of time.

That leads me to the conclusion that the IHV's did a cost/gain analysis of writing an implementation of a new API across the LAST 4 hardware generations PLUS maintaining the old API over the NEXT 4 hardware generations - and came to the conclusion that it wasn't worth the trouble. In other words it would be cheaper just to increment OpenGL in the usual way, by promoting existing extensions to core.
By jove I think I've got it. The IHV's wouldn't stump up the money to commit to all this dev work, and thought they'd indirectly blame the CAD companies.

Can't say I blame them, it would have cost an awful lot of cash/wonga/spondoolics/moula/money. If only they didn't have to support 2.1 at all, things would probably be different. The emulator idea would have been a good solution - all 3 IHV's club together, clean break, job's a good 'un.

Rick Yorgason
08-14-2008, 05:30 PM
so youre implying MS should call d3d11, d3d10.2?
the differences from d3d10->d3d11 are certainly less than gl2.1->gl3.0.
its only a name :)
Version numbers are expected to be internally consistent. For DirectX, we expect new major features (like GPGPU support and tessellation support) to get a new major version number. Minor version numbers are used for bug fixes and minor features.

For OpenGL, we're used to the minor version numbers representing more extensions, and we expect major version numbers to represent a fundamental change in the way you use the API (like introducing shaders).

What we're getting in OpenGL 3.0 is some new extensions and the ability to turn off all the deprecated 1.x features. It definitely would have been a better marketing decision to release this as "OpenGL 2.2" (maybe "OpenGL 2.2 Strict" if you're turning off the deprecated 1.x features) and assure us that OpenGL 3.0 -- the OpenGL 3.0 we were promised -- is still in the works.

Don't get me wrong, people would still be angry. The fact is, we should have had these extensions a year and a half ago, but we didn't complain because we were promised that if we were patient we'd get a brand spanking new API! However, people wouldn't be as angry if it didn't appear that the ARB has simply wasted two years and given up.

Now the ARB promises that the next release will come within 12 months, and this will probably just be more DX10 extensions. But with OpenCL and Larrabee on the horizon, I wouldn't be surprised if we start seeing some new graphics APIs coming out of the open source community in a little over 12 months. In the meantime, maybe I'll start testing my DirectX code under Wine.

dorbie
08-14-2008, 05:38 PM
so let me get this straight:
1/ some unnamed companies objected to the new object model because they'd have to change 10's of thousands of lines of code.
2/ These same companies want their software to work on past/current/future generations of hardware unchanged.
3/ But they also want to be able to take advantage of the latest hardware features without changing old code.

Now to me there's a clear divide between the first two requirements and the third.

No company in my experience would have had a problem with a split in API's, so long as the old API were maintained for a reasonable length of time.

That leads me to the conclusion that the IHV's did a cost/gain analysis of writing an implementation of a new API across the LAST 4 hardware generations PLUS maintaining the old API over the NEXT 4 hardware generations - and came to the conclusion that it wasn't worth the trouble. In other words it would be cheaper just to increment OpenGL in the usual way, by promoting existing extensions to core.
By jove I think I've got it. The IHV's wouldn't stump up the money to commit to all this dev work, and thought they'd indirectly blame the CAD companies.

Can't say I blame them, it would have cost an awful lot of cash/wonga/spondoolics/moula/money. If only they didn't have to support 2.1 at all, things would probably be different. The emulator idea would have been a good solution - all 3 IHV's club together, clean break, job's a good 'un.

You're right about the "CAD" companies.

However, I don't think that could possibly have been the conclusion of such an analysis if the right question were asked. Particularly considering that you don't have to go back 4 generations, your driver work ports on top of the new 3 going forward, and the interactions between legacy and new features are potentially a nightmare... AND 3.0 as it stands supports all of the above anyway, so there's no work eliminated.

This may have been a cynical power play, or just what they could desperately pull together with Siggraph looming, but enough Kremlinology. There's hope that 3.1 could be better and take out the trash.

Korval
08-14-2008, 05:39 PM
But with OpenCL and Larrabee on the horizon

*sigh*.

You can't write graphics APIs on this stuff!

Open source developers are not going to beat Intel's performance on Larrabee.

Rick Yorgason
08-14-2008, 05:45 PM
so let me get this straight:
1/ some unnamed companies objected to the new object model because they'd have to change 10's of thousands of lines of code.
2/ These same companies want their software to work on past/current/future generations of hardware unchanged.
3/ But they also want to be able to take advantage of the latest hardware features without changing old code.
Here's the divine comedy of the whole situation: in an attempt to clean up the API, they're all but promising that older features are going to start disappearing! So everybody is going to have to rewrite their code anyway!

Congratulations, ARB! You gave us all of the detriments of a new API with none of the benefits!

Rick Yorgason
08-14-2008, 05:53 PM
You can't write graphics APIs on this stuff!
You can, but as you mentioned, it won't be as efficient. Still, it means people can experiment with new APIs (which will still be far more efficient than regular CPU code) and hardware support could come later.

Besides, non-Windows programmers only have two other choices: use OpenGL's state-based API, or use DirectX under Wine. When it comes to performance, they don't have any acceptable solutions!

dorbie
08-14-2008, 06:05 PM
You can't write graphics APIs on this stuff!
You can, but as you mentioned, it won't be as efficient. Still, it means people can experiment with new APIs (which will still be far more efficient than regular CPU code) and hardware support could come later.

Besides, non-Windows programmers only have two other choices: use OpenGL's state-based API, or use DirectX under Wine. When it comes to performance, they don't have any acceptable solutions!

What the heck are you talking about?

knackered
08-14-2008, 06:10 PM
Besides, non-Windows programmers only have two other choices: use OpenGL's state-based API, or use DirectX under Wine. When it comes to performance, they don't have any acceptable solutions!
well now hold on there for one goddam minute. OpenGL is still way faster than d3d9 on high batch counts. At least NVOpenGL.
She may be fat 'n ugly, but she can sure shift her weight when she wants.

Rob Barris
08-14-2008, 06:10 PM
so let me get this straight:
1/ some unnamed companies objected to the new object model because they'd have to change 10's of thousands of lines of code.
2/ These same companies want their software to work on past/current/future generations of hardware unchanged.
3/ But they also want to be able to take advantage of the latest hardware features without changing old code.
Here's the divine comedy of the whole situation: in an attempt to clean up the API, they're all but promising that older features are going to start disappearing! So everybody is going to have to rewrite their code anyway!

Congratulations, ARB! You gave us all of the detriments of a new API with none of the benefits!

So, if a vendor ships a driver that can create and support 3.0, 3.1 and 3.2 contexts, which developers are impacted ?

The specification doesn't mandate which revision the IHV's support. Nor does it mandate when developers must move to a newer version.

3.0 is the cross over point: it adds many new features into core, but no legacy API's have been touched for this release. Which applications are impacted by it?

knackered
08-14-2008, 06:15 PM
so when's the object model going to be introduced? 3.3? never?
seems to me if ISV's have gone to the trouble of updating their code to your strict profile, they're going to be none too happy about having to revisit the same code to port it to the object model.
probably best to be honest with us at this stage.

Korval
08-14-2008, 06:15 PM
So, if a vendor ships a driver that can create and support 3.0, 3.1 and 3.2 contexts, which developers are impacted ?

All of them, because the IHVs will have to either:

1: Support three separate codebases, which leads to more driver bugs and inability to take optimization opportunities.

2: Support 2 codebases (3.0, and 3.1+), which causes the same problems.

3: Support 1 codebase, which would be unable to take advantage of any optimization opportunities offered by better, cleaner APIs.

Having 3 separate contexts in a single driver does nothing for anyone.


seems to me if ISV's have gone to the trouble of updating their code to your strict profile, they're going to be none too happy about having to revisit the same code to port it to the object model.

As part of GL "3.0", you have to actually ask the context for a specific version now. So it is possible for a developer to ship a 3.0 context for legacy applications (who can get at new stuff through core extensions) and a 3.x context which removes the cruft.

We're never going to see the new object model, btw. Just give up on that one. The best we can hope for are general cleanup like direct_state_access and such.

Leadwerks
08-14-2008, 08:12 PM
A representative from AMD indicated that they are targeting Q1 2009 for a complete 3.0 release, with some number of betas expected between now and then. It was also pointed out that they are planning to implement the GL3 extension pack (geom shader, texbo, instancing)
That sounds like a statement crafted to make you think one thing, while avoiding responsibility and maintaining plausible deniability.

Brolingstanz
08-14-2008, 08:24 PM
Hey, he's talking about a graphics API, not area 51.

Good to know that particular extension pack is en route from ATI.

Mars_999
08-14-2008, 09:03 PM
Yes, ATI drivers that are up to date for OpenGL - I can't wait, so my code can run on other computers!!!

pudman
08-14-2008, 09:10 PM
which developers are impacted ?
All of them, because the IHVs will have to either:

I believe Rob was asking which non-IHV developers would be impacted. Obviously IHV driver developers will have an entertaining time supporting all those contexts.

Let's imagine a scenario. I have a great tool with 1000s of lines of GL code evolved over a decade. Lots of fixed-function stuff, a bunch of newer fancier things. GL3.0 comes out. What do I do?

I definitely want the marketing bullet of "Requires OpenGL3.0!" It makes me look cooler than the competition. Bonus, I now have to only tweak my usage of some things I was using as extensions in 2.1 that are now core.

Is my app faster? Cleaner? Well, if AMD puts out a 3.0 driver I can tell my customers they can now buy ATI hardware on their Linux machines and still run my stuff. But I can't actually tell my customers that for at least another 6 months, so no gain there.

So the point was...?

Korval
08-14-2008, 09:38 PM
I believe Rob was asking which non-IHV developers would be impacted. Obviously IHV driver developers will have an entertaining time supporting all those contexts.

Yes, and things that cause driver developers to have an "entertaining time" always fall on our plate.

PaladinOfKaos
08-14-2008, 10:06 PM
Yes, ATI drivers that up to date for OpenGL I can't wait, so my code can run on other computers!!!

I know the feeling. I've been playing with the SM4 extensions with a GeForce 8600 for a while now. Gonna be nice to be able to try ATI hardware, especially since I'd rather buy my next new card from them now (what with the open-source RadeonHD driver and all)

Rob Barris
08-14-2008, 10:13 PM
which developers are impacted ?
All of them, because the IHVs will have to either:

I believe Rob was asking which non-IHV developers would be impacted. Obviously IHV driver developers will have an entertaining time supporting all those contexts.

Let's imagine a scenario. I have a great tool with 1000s of lines of GL code evolved over a decade. Lots of fixed-function stuff, a bunch of newer fancier things. GL3.0 comes out. What do I do?

I definitely want the marketing bullet of "Requires OpenGL3.0!" It makes me look cooler than the competition. Bonus, I now have to only tweak my usage of some things I was using as extensions in 2.1 that are now core.

Is my app faster? Cleaner? Well, if AMD puts out a 3.0 driver I can tell my customers they can now buy ATI hardware on their Linux machines and still run my stuff. But I can't actually tell my customers that for at least another 6 months, so no gain there.

So the point was...?

Any app needs a platform to run on. It would be premature to ship an app requiring 3.0 if you don't feel there's critical mass out there for it (i.e. if an IHV hasn't got drivers ready for you yet).

Of course the decision and scheduling of shipping a product is not the same thing as developing a product.

But I have to ask, if you have concerns with respect to an IHV producing driver software quickly enough for your needs - do you believe that situation would be made better or worse by that IHV starting from a blank slate ?

HenriH
08-14-2008, 11:51 PM
In the meantime any reasonably bounded questions on the content presented should not be hard to answer.

Was there any information regarding the previously planned object model being implemented in some future version of OpenGL, or should we expect the proposed EXT_direct_state_access extension to be promoted to the core?

If you ask me, I would much prefer the original object model and would hope to see that be the future of OpenGL.

edit: Another question; can we hope to see an equivalent GLX extension for creating OpenGL 3.0 contexts, and Linux drivers, someday?
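
For concreteness, a rough sketch of what GLX-side 3.0 context creation could look like, assuming the eventual GLX extension mirrors the WGL_ARB_create_context mechanism that shipped alongside the 3.0 spec. The GLX token values and the glXCreateContextAttribsARB name below are extrapolated from the WGL extension, not taken from any published GLX spec:

#include <GL/glx.h>

/* Assumed token values, copied from WGL_ARB_create_context; the real
 * GLX spec may differ. */
#ifndef GLX_CONTEXT_MAJOR_VERSION_ARB
#define GLX_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define GLX_CONTEXT_MINOR_VERSION_ARB 0x2092
#endif

typedef GLXContext (*PFNGLXCREATECONTEXTATTRIBSARBPROC)
    (Display *dpy, GLXFBConfig config, GLXContext share,
     Bool direct, const int *attribs);

static GLXContext create_gl3_context(Display *dpy, GLXFBConfig fbconfig)
{
    const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 0,
        None
    };
    PFNGLXCREATECONTEXTATTRIBSARBPROC createContextAttribs =
        (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddress(
            (const GLubyte *)"glXCreateContextAttribsARB");

    if (!createContextAttribs)
        return 0; /* driver does not expose the entry point yet */
    return createContextAttribs(dpy, fbconfig, 0, True, attribs);
}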

Korval
08-15-2008, 12:02 AM
if you have concerns with respect to an IHV producing driver software quickly enough for your needs - do you believe that situation would be made better or worse by that IHV starting from a blank slate ?

Define "blank slate".

Beneath OpenGL is the underlying interface that the client-side GL implementation uses to talk to the actual driver. The driver and some of the low-level marshalling stuff? That wouldn't change with Longs Peak. At that level, buffer objects are pointers, texture objects are references to certain state, and so forth.

Longs Peak would not have made an impact on the lowest level of code. What it would have done is gotten rid of the many, many layers of client-side code that have built up over the years.

And Intel didn't have much of a slate to begin with, so them starting from scratch wouldn't hurt anybody.

There's also the simple fact that Longs Peak was a simpler API. The sheer quantity of screwups possible with some of the client-side decisions is substantial (how objects are handled being the big one: even if the GL generates all object names, you still need to map them, orphaning objects is still a pain, etc.). A simpler API is an easier-to-implement and easier-to-maintain API.
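
To make the object-name point concrete, here is a toy illustration (not any vendor's actual driver code) of the extra indirection a GL 2.x-style implementation carries on every bind, versus a Longs Peak-style handle that simply is the internal object:

#include <stddef.h>
#include <stdint.h>

typedef struct TextureObj { uint32_t width, height; /* ...state... */ } TextureObj;

/* Toy name table: real implementations use hash tables or paged arrays. */
typedef struct { TextureObj **objs; size_t capacity; } NameTable;

static TextureObj *lookup_texture(const NameTable *t, uint32_t name)
{
    if (name == 0 || name >= t->capacity)
        return NULL;        /* invalid name: the GL_INVALID_OPERATION path */
    return t->objs[name];   /* one extra indirection on every bind */
}

/* Longs Peak-style sketch: the handle already is the object,
 * no per-call table walk needed. (Hypothetical type name.) */
typedef struct TextureObj *GLtextureHandle;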

pudman
08-15-2008, 06:22 AM
Of course the decision and scheduling of shipping a product is not the same thing as developing a product.

I guess I'm still trying to understand the impact of this GL3.0. Drivers are actually a secondary concern when discussing migration of existing 2.1 (using DX10 features) apps to 3.0, because only nvidia has provided those.

So there are two things:
1) What benefit is there for me to port to 3.0? I see *potential* support on non-nvidia platforms, and also the benefit of knowing now whether my code will be forward-compatible (which I actually don't care about, as I expect to tweak my app anyway for each new GL release to obtain maximum performance).
2) Once again, why was the new object model held back from 3.0? I'd guess that most developers agree with #1, so if I didn't want to significantly rewrite my app I could stick with 2.1. It's not like I'm missing any new features. I can even use the DSA extension on my 2.1 code.

I would love to hear about reasons for upgrading. Real reasons.

Rob Barris
08-15-2008, 09:48 AM
So we have a model now where a driver can present more than just one GL version. The number of versions presented is up to the IHV.

It's being done this way to leave choice in the developer's hands.

If your app is fine on 2.1 and the IHVs you run on have pledged to keep 2.1 available, there is no issue at hand that I can see.

If you want to use some new features but not move to GL 3.0's hardware requirement level, have a look at the GL 2.x extension set released this week and talk to your IHVs about their plans for support.

If you can enumerate a few of the improvements available in GL 3.0 that your app could use, then that can figure into your decision as to which path it makes sense to take. If you can't come up with any, then I think it just comes down to how long you and your IHVs expect 2.1 to be supported. The result of that evaluation will tend to change over time.
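
As a minimal sketch of that evaluation, an app can ask the context what it actually got at startup and branch from there. GL_MAJOR_VERSION and GL_MINOR_VERSION are new integer queries in 3.0, so on an older context the code below falls back to parsing the GL_VERSION string:

#include <stdio.h>
#include <GL/gl.h>

#ifndef GL_MAJOR_VERSION
#define GL_MAJOR_VERSION 0x821B
#define GL_MINOR_VERSION 0x821C
#endif

/* Call with a context current; reports 0.0 if the version can't be read. */
static void get_gl_version(int *major, int *minor)
{
    GLint maj = 0, min = 0;
    while (glGetError() != GL_NO_ERROR)   /* clear any stale error flags */
        ;
    glGetIntegerv(GL_MAJOR_VERSION, &maj);
    glGetIntegerv(GL_MINOR_VERSION, &min);
    if (glGetError() != GL_NO_ERROR) {    /* pre-3.0 context: parse the string */
        const char *s = (const char *)glGetString(GL_VERSION);
        int a = 0, b = 0;
        if (s && sscanf(s, "%d.%d", &a, &b) == 2) { maj = a; min = b; }
        else { maj = 0; min = 0; }
    }
    *major = (int)maj;
    *minor = (int)min;
}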

Yfrwlf
08-15-2008, 11:03 AM
For the last time, OpenGL is not open source! The term "Open" in this case referring to the fact that it is an open standard defined by a standards body (rather than one entity, ala Direct3D or IrisGL).

Oh, this whole time I was deceived, should have read the license. ^^

This sounds like a great excuse for an open source graphics library and API, doesn't it? This is one of the many reasons why you don't want software being controlled by a single entity, so that it can't suddenly come crumbling down.

If all the communities of developers out there got together to develop a graphics library that had all the features they wanted and was modular enough that they didn't have to wait so long for improvements to come about, that would be great. Of course it'd need all the goodies: cross-platform, very modular and easy to program for, and completely open. If a group came together and figured out the best system it could to support and satisfy graphics programmers, game developers, and so on, that could be the answer and the ultimate end result of all this exasperation over OpenGL failing them.

bobvodka
08-15-2008, 11:09 AM
Great idea... now, how did you plan to program the hardware?

Brolingstanz
08-15-2008, 11:14 AM
Free market capitalism is the best path to prosperity..........

Yfrwlf
08-15-2008, 11:32 AM
Great idea... now, how did you plan to program the hardware?

Good question, it's too bad hardware doesn't use a single method for communication which is then handled by either D3D or OpenGL, but I'm sure big differences in the way things are processed, etc, prevent this. Perhaps you could use some kind of wrapper or some way for the new graphics library to handle OpenGL and D3D calls? Wine already does a pretty good job dealing with D3D calls I think and is improving all the time.

How about you simply use Wine or D3D for games only using D3D, OpenGL for games only using OpenGL, and start on a graphics library that addresses the specific limitations of the other graphics libraries? Let's say you make a new library called Actually Open Graphics Library, and it had some great features that were lacking in, say, OpenGL. Since this library would be cross-platform, it could be used along with OpenGL on systems that weren't Microsoft. Since it was completely open source, and because it provided new functionality and features, why shouldn't graphics card makers want to support it if it gains popularity and is seen to be important? Same way anything else works, really. Either implementing OpenGL calls or creating completely new, re-designed calls could then be projects that would come after that, since using two different libraries would be a little strenuous I'm sure, so eventually you could just get rid of OpenGL, if that's the direction the project took and what developers wanted.

I don't know, do you have ideas as to what the best way to go about it would be? OpenOffice is completely open source, and it, along with the ODF format, is gaining more and more traction and use all the time; it's being used across many industries and governments because the need is there, so I'm quite sure it's not impossible to do for a graphics library. I'm sure graphics card companies would be interested in helping and getting involved if there was a big enough push and OpenGL was seen as being way too limiting, like some developers here apparently feel it is.

Korval
08-15-2008, 11:39 AM
it's too bad hardware doesn't use a single method for communication which is then handled by either D3D or OpenGL

D3D and OpenGL are the method for communicating with the hardware. That's what they're for. If there were another "single method for communication," we'd be using that.


I don't know, do you have ideas as to what the best way to go about it would be?

There isn't one; that's the problem. The ARB's made of Fail, and there's nothing that can be done about it.

Yfrwlf
08-15-2008, 12:05 PM
D3D and OpenGL are the method for communicating with the hardware. That's what they're for. If there were another "single method for communication," we'd be using that.

OK, the only point I'm trying to make is what's needed to use a system which is free from a single entity to solve these problems that you're struggling against OpenGL with, that's all. If the OpenGL or D3D APIs provide that, so be it, but obviously in D3D's case you'd need to create a different library since even if you can use the API and not be tied to Microsoft, their library is closed and tied to Microsoft OSes.



I don't know, do you have ideas as to what the best way to go about it would be?

There isn't one; that's the problem. The ARB's made of Fail, and there's nothing that can be done about it.

So Khronos and Microsoft are the only two graphics API/library gods in existence, and there can be no third completely open source solution which isn't at the mercy of a single entity? I beg to differ. There's always a way. While it may be an uphill struggle, where there's a need there's a way.

Korval
08-15-2008, 12:13 PM
there can be no third completely open source solution which isn't at the mercy of a single entity?

Yes, that's pretty much where we are.

It stems from the fact that IHVs have complete control over the interface. You can't write an alternative because you don't have a way to access the hardware that isn't already OpenGL or D3D.

Brolingstanz
08-15-2008, 12:20 PM
While it may be an uphill struggle, where there's a need there's a way.

I think the saying goes "Where there's a will there's a way." And I think you'll find that most folks are big on way but short on will.

On a personal note, I'm perfectly content with the way GL3 turned out; it's all set to move forward now.

PaladinOfKaos
08-15-2008, 12:35 PM
AMD has been releasing hardware specs, and the Intel Linux driver is open-source (although no actual spec has been released). Thus if people were interested, they could make a new API. It would be even easier to layer it on Gallium, once a few more backends for that are completed.

And that's no problem on Linux. But on Windows there's a catch - driver signing! Vista x64 won't allow unsigned drivers, and Windows 7 won't allow unsigned drivers period. So there'd be no way to port it to Windows and keep it open. I'd also expect that with the DRM embedded in the kernel, you'd have to sign a contract saying you won't release any source code, since that could break the DRM.

bobvodka
08-15-2008, 01:01 PM
Also, NV haven't been putting out specs.

No NV, No Windows = mediocrity and fail.

skynet
08-15-2008, 01:03 PM
I think the saying goes "Where there's a will there's a way." And I think you'll find that most folks are big on way but short on will.
This is how it might work:
1. Become a hardware vendor
2. Develop, produce and sell a REALLY fast, good, new gfx card
3. Deliver the card with working OpenGL and DirectX drivers, if possible for all major OS'es
4. Now your new card can show its muscles with existing software.
5. Gain a comfortable market share of lets say....80%
6. Develop a shiny new API that all developers ever dreamed of.
7. Somehow force more and more software to be developed for or ported to your new API. Maybe by slowly abandoning the old APIs once you are market leader. This might take several generations of new hardware.

Of course, all along the way you have to fight existing market forces, competitors etc... I don't think you can start at 6) ;-)

Eddy Luten
08-15-2008, 01:25 PM
Hypothetically; Since SGI still owns the copyrights to OpenGL, could they intervene with Khronos and assign a new functioning body?

@skynet, as far as alternative APIs go: Glide.. fail. Fahrenheit.. fail. OpenGL++.. fail. If you're stuck with a non-Windows platform, you're really stuck unless Gallium3D makes it big through some incredible miracle and Open Source APIs make it.

dletozeun
08-15-2008, 01:39 PM
Or takeover NVidia and ATI at the same time, and you can jump to 6) right now. :p

Yfrwlf
08-15-2008, 01:40 PM
I think the saying goes "Where there's a will there's a way." And I think you'll find that most folks are big on way but short on will.
This is how it might work:
1. Become a hardware vendor
2. Develop, produce and sell a REALLY fast, good, new gfx card
3. Deliver the card with working OpenGL and DirectX drivers, if possible for all major OS'es
4. Now your new card can show its muscles with existing software.
5. Gain a comfortable market share of lets say....80%
6. Develop a shiny new API that all developers ever dreamed of.
7. Somehow force more and more software to be developed for or ported to your new API. Maybe by slowly abandoning the old APIs once you are market leader. This might take several generations of new hardware.

Of course, all along the way you have to fight existing market forces, competitors etc... I don't think you can start at 6) ;-)

Open source shouldn't be about forcing; open source should be about choice. Graphics APIs allow for more choice by providing a common communication standard that any program can use, or in other words they provide modularity. I don't agree with your summary of market forces. If developers hated the existing graphics APIs and started promoting a new one and jumping onto that, it would be in the graphics card manufacturers' best interest to support what is popular.

Open source operates on the principle of natural selection. The things which are needed and helpful in some way will become the dominant players and will become popular, more so than things which are controlled, and especially controlled by one entity, because those are much more susceptible to politics.


Also, NV haven't been putting out specs.

No NV, No Windows = mediocrity and fail.

NV would have to jump on eventually, or be fated to only offer products for Windows while AMD would pwn the Linux, Mac, and lots of other platform markets.


AMD has been releasing hardware specs, and the Intel Linux driver is open-source (although no actual spec has been released). Thus if people were interested, they could make a new API. It would be even easier to layer it on Gallium, once a few more backends for that are completed.

And that's no problem on Linux. But on Windows there's a catch - driver signing! Vista x64 won't allow unsigned drivers, and Windows 7 won't allow unsigned drivers period. So there'd be no way to port it to Windows and keep it open. I'd also expect that with the DRM embedded in the kernel, you'd have to sign a contract saying you won't release any source code, since that could break the DRM.

Well I'd imagine you could have a module or whatnot that was closed off that installed the DRM crap, and they'd be free to do that if they wanted to (really retarded though of course. Oooh oh no, I can more easily take a video capture of your game now. Big freaking whoop.) Anyway, if Microsoft wants to find ways to disallow software on their OSes, that's fine; the stuff that's more open will just take over, since it will have all the other platforms and all the benefits of openness on its side versus them, and I can tell you who will eventually win there.

But yeah, if OpenGL is too closed and not advancing fast enough, and the same goes for D3D, then the time sounds ripe to me to start a third one and somehow try to complement the existing ones or something, whatever it takes to become noticed and used as quickly as possible to get the creative juices heading in the right direction.

Unless of course these things get addressed by Khronos, which they may be...eventually...for those that want to wait and hope. ^^


I'm perfectly content with the way GL3 turned out; it's all set to move forward now.

I'm glad you're cozy with it, but if it doesn't meet the needs of some developers, I hope the push to provide for those needs succeeds somehow, whether it's Khronos answering those needs or another completely separate solution. Everyone should wish that. Software should never have to stay limiting for long, or be limiting to begin with.

Brolingstanz
08-15-2008, 01:43 PM
The funny thing is that once you jump through all those hoops you probably end up with ... OpenGL.

If some independent *commercial* entity manages to create an interface product that somehow *induces* the various parties to abandon their respective ships, I think we'll all be a bit surprised.

You just can't please everyone. I think there comes a time when you need to be grateful for what you have and thank your lucky stars that things aren't any worse than they are.

Yfrwlf
08-15-2008, 02:14 PM
AMD has been releasing hardware specs, and the Intel Linux driver is open-source (although not actual spec has been released). Thus if people were interested, they could make a new API. It would be even easier to layer it on Gallium, once a few more backends for that are completed.

I'd imagine that of course a lot of work would still be needed. Maybe there's some API that has died which could fill the spot, and this situation is a reason to bring it back and modify and work on it, instead of starting from scratch. I'm just really surprised that an open source solution hasn't already been successful, but maybe this event could be a catalyst to help make it so. I would think there would be many parties interested in seeing a fully open source solution, especially of course if it had more features than the competitors. It'd certainly be capable of doing so; it's pretty hard for any single entity to compete against the whole world.

Korval
08-15-2008, 02:34 PM
I'm just really surprised that an open source solution hasn't already been successful, but maybe this event could be a catalyst to help make it so.

What part of, "there's no way to make it work" don't you understand? Open source is not some magical salve that you can rub on any problem to cure it; it has requirements to work, and one of those requirements is actually able to do the thing that the code it's trying to replace does. Without being able to write certified Windows drivers, you can't make Windows drivers. And without being able to make Windows drivers, you can't access graphics hardware without D3D or OpenGL.

PaladinOfKaos
08-15-2008, 02:51 PM
Korval is, unfortunately, very right. Even with a closed blob that talks to Windows, you couldn't do it. Microsoft only certifies drivers if they fit within a device class that they have. (And you can't blame the MPAA for that bit.) A new graphics API is certainly not on that list. Although with the dominance of DirectX on that platform, there may be an anti-trust opening if they block a new type of graphics driver by refusing to sign it, but of course IANAL.

bobvodka
08-15-2008, 04:06 PM
Also, NV haven't been putting out specs.

No NV, No Windows = mediocrity and fail.

NV would have to jump on eventually, or be fated to only offer products for Windows while AMD would pwn the Linux, Mac, and lots of other platform markets.


And now I'm laughing...

Fated for the product which has the majority share of the market? Oh no! How ever will they cope! Linux and OSX don't even match Vista, the most FUD'd to death OS EVER, in market share.

That's the reality of it.

Yfrwlf
08-15-2008, 05:48 PM
What part of, "there's no way to make it work" don't you understand? Open source is not some magical salve that you can rub on any problem to cure it; it has requirements to work, and one of those requirements is actually able to do the thing that the code it's trying to replace does. Without being able to write certified Windows drivers, you can't make Windows drivers. And without being able to make Windows drivers, you can't access graphics hardware without D3D or OpenGL.

What part of Linux, BSD, Solaris, Mac OS X, and any other non-Microsoft OS don't you understand? </[censored]> Gah, that hurt to say that.

To get back to respectful logical arguments, if Microsoft isn't willing to certify a graphics driver for an open graphics standard which becomes very popular but which competes with D3D, then that's fine; they can use their closed system and everyone else can use the open one. I'm definitely not tied to Bill's apron strings, so I'm not worrying. What do you think the whole real point of their certification process is? It's for control, for things like this, to disallow competition. So, when that gets in the way, it's time to move to other systems.

If you don't want to take that jump, that's fine, for you then there may not be a solution until Khronos gives it to you. For others, if they took that jump, or worked on it on the side, they could certainly help create one. Also, if that solution did become big, there's no way MS would not sign, at the very least, a driver from NV or AMD that supported it. Just depends how far they are willing to go to squeeze the industry I guess. If developers weren't scared to jump off their ship, that would certainly be one factor to drive them to certify it.

Korval
08-15-2008, 06:03 PM
What part of Linux, BSD, Solaris, Mac OS X, and any other non-Microsoft OS don't you understand?

They don't matter. Oh sure, MacOS X matters to Rob, because he works at Blizzard who are big-time Apple fans, and they want to make a version of their games for the Mac. And Linux matters to Id, because Carmack is a big-time Linux fan, and they want to make a version of their games for Linux.

The entire rest of the PC game development community (outside of indies) doesn't care about Linux or MacOS X. Those platforms represent 0% of their sales, and thus they would not be the slightest bit concerned about being cut out of that market. As Bob pointed out, Linux and OSX aren't even up to Vista levels of marketshare. Or, put another way, game developers care more about Vista than Linux or MacOS X.

And what of other applications that aren't games? Oh, they care. But thanks to their archaic codebases, they wouldn't even accept Longs Peak, a fundamental rewrite of the OpenGL API. If they won't accept that, they're certainly going to ignore whatever Linux or MacOS X specific stuff there is in favor of just moving along with OpenGL.

Oh sure, you'll get a few converts. But not enough to matter.

Oh, and MacOSX isn't going to work either, because Apple, who is 10x more controlling than Microsoft, isn't going to let you write drivers for them either. At least with Microsoft, you'd have a chance at applying for the driver developer license and whatever other hoops they make you jump through. Apple only grudgingly lets other people develop drivers for their products, and even then, Apple controls the OpenGL implementation much the same way that Microsoft controls the D3D implementation.

Zengar
08-15-2008, 06:05 PM
Why should anyone want to use an "open solution"? Apple provides extremely good OpenGL support on OSX for example. And I hope you understand that developing a 3d driver is a non-trivial task, starting from the fact that IHVs except Intel still haven't opened the interface to their cards.

A possibility I can see is to develop a particularly convenient API that will be implemented as GL/D3D wrapper at first. If this API proves successful, native drivers will emerge in some time.

P.S. It is not that developers are scared to jump off MS's ship. It is that MS OSes are installed on 80% of the computers out there. Every gamer uses an MS OS.

Yfrwlf
08-15-2008, 06:42 PM
The entire rest of the PC game development community (outside of indies) doesn't care about Linux or MacOS X. Those platforms represent 0% of their sales, and thus they would not be the slightest bit concerned about being cut out of that market. As Bob pointed out, Linux and OSX aren't even up to Vista levels of marketshare. Or, put another way, game developers care more about Vista than Linux or MacOS X.

Maybe that's why they represent 0% of their sales, because they don't care about them and haven't made any titles for them, haha. It's pretty hard to sell to other platforms if you don't make software for them. The market is geared towards Windows, yes, I get it, of course it is. But, it's a catch-22, and if developers started making more programs for Linux and Mac, they would start doing better. Obviously Linux or one of the open platforms is the ideal solution since it's not controlled at all as you pointed out, and free. The market share is changing, and is definitely not 0%, and reasons like this whole thing are good reasons to move to open systems, but of course you do what you want.

"We can't switch, Microsoft is the biggest, how could we switch to a competitor." What a horrible attitude. If everyone started switching, they wouldn't BE the biggest. What do you think made them work on improving IE? Users switching. So if this gives some developers out there more reason to do so, great, I hope they switch. Again, no one should have to be restricted in any way, neither users nor developers, when it comes to software, the only limits of which should be imagination.

RenderBuffer
08-15-2008, 06:48 PM
While OpenGL 3.0 wasn't everything I was hoping for, I'm glad some progress was made, and they're not going to throw out all of their work. I'm looking forward to future progress, and I hope that we see quick advancements.

As far as creating a new graphics standard, it's not entirely out of the question. On Larrabee it should be possible, and with future CUDA/CAL/OpenCL advancements, I can imagine it; You won't need to worry about signed drivers. One of the great things about this is that there are a number of papers from conferences like Siggraph which list algorithms that would need modified hardware to implement efficiently. In the future, with fully programmable hardware, the hardware won't need to change and we'll see these things being hardware accelerated for the first time.

That said, I don't think we need a new graphics standard *right now*. A few updates to OpenGL would be nice, but I'll take what we've got.

Thanks to the ARB for their hard work.

Andrew.

Yfrwlf
08-15-2008, 07:07 PM
Why should anyone want to use an "open solution"?

Because of the subject of this thread among other reasons, plus the fact that it's open. Open means that communities have more say and that no one entity can destroy it because it's not their creation to destroy. It also means contributions won't be susceptible to being lost forever if that happens. Basically it's just safer for everyone in general. For Khronos to leave OpenGL closed is foolish, as it means they have to maintain it, improve it, and do everything, while if it was open others could help.


A possibility I can see is to develop a particularly convenient API that will be implemented as GL/D3D wrapper at first. If this API proves successful, native drivers will emerge in some time.

That's... what I already said. =P That might indeed work, I don't know; or just have the missing and wanted calls go into a third library, so you can use D3D or OGL for the main part but an extra library for the wanted extra features?


P.S. It is not that developers are scared to jump off MS's ship. It is that MS OSes are installed on 80% of the computers out there. Every gamer uses an MS OS.

Not me, and I know many others don't either, but you're right that the majority does, of course, whether it's Windows or the Xbox. More people switching, though, will mean more games for me and others on my platform too, so I hope they switch so they can help my OS break out of the catch-22 even further. The thing is, since it's completely free and open, there's so much going for it that it's pretty inevitable. The only thing that's truly saving MS IMO is the fact that it comes pre-installed on most computers, so I really hope that continues to change, and wish it were illegal not to offer a no-OS or other-OS option at sales time, but that's another topic. At least there are several PCs on the market now which offer Linux.

Sure, running Microsoft games through Wine is cool and all and will help with the problem, but what I really want is native games eventually running on open solutions, like the one we're discussing for example.

*shrug* Well, the comment about Gallium3D helping out an open graphics library solution and open API was a neat idea, so I hope something along those lines comes about if OpenGL is going to be restricted, but I hope that all this complaining will yield something constructive like great additions to it in the next revision. It's still too bad, though, that a truly open solution hasn't found its way into the light, as ultimately that would be a better solution than OpenGL.

HenriH
08-15-2008, 07:54 PM
A possibility I can see is to develop a particularly convenient API that will be implemented as GL/D3D wrapper at first. If this API proves successful, native drivers will emerge in some time.

That's... what I already said. =P That might indeed work, I don't know; or just have the missing and wanted calls go into a third library, so you can use D3D or OGL for the main part but an extra library for the wanted extra features?

There have been numerous attempts to do this in the past (Fahrenheit, OpenGL++, Panard Vision... to mention a few) and they generally have not been very successful. The main problem here is the fact that this would introduce yet another API for developers to learn, without any certainty as to whether or not the new API would be a failure just like the others.

I think OpenGL is the way to go, but there is a lot of work to be done for it...

Yfrwlf
08-16-2008, 01:22 AM
There have been numerous attempts to do this in the past (Fahrenheit, OpenGL++, Panard Vision... to mention a few) and they generally have not been very successful. The main problem here is the fact that this would introduce yet another API for developers to learn, without any certainty as to whether or not the new API would be a failure just like the others.

I think OpenGL is the way to go, but there is a lot of work to be done for it...

OpenGL is the way to go if it offers what developers and end users want, sure, agreed, but if functionality is missing, maybe a new library could give developers this without replacing the entire system, as a simple complement to OpenGL. I'm assuming open community OpenGL extensions cannot be created to allow for this, so if you could make some third-party add-on library to fulfill that need, perhaps that's one solution.

Basically, I can see alternative libraries having a hard time with adoption if they try to do the same thing that OpenGL does, but in a less elegant way and cause everyone to relearn everything with no definite improvements, sure. But, if there's a need here that OGL isn't providing for, that's a definite area in which another library could really shine and provide something useful, is the hope.

When life gives you lemons, make lemonade, basically, I was just throwing out some suggestions as to possible ways to do so.

Ilian Dinev
08-16-2008, 02:43 AM
Let's say Nouveau (the reverse-engineered nVidia driver) succeeds and ATi opens up full specs for their cards. Everyone has that new library and drivers for current GPUs. Then a new GPU comes out from nV/ATi/I, with either a slightly different interface, or a GPU that works and is programmed in a completely different way. That GPU is in stores, you like how much faster it is and you buy it. Just to find your computer (with whatever OS) crashing on you. If N/A/I don't give specs to driver developers, it can take years for you to start using that video card! If N/A/I do give specs, it can take months to boot the card. If N/A/I give specs, prototypes and devkits early enough... then they will always have to be completely honest with the developers. And from there, the end users will know every fault of a GPU or its IHV before it comes out.
And that last thing is unacceptable to IHVs. So they won't be at ease giving out info constantly. The only remedy would be to support that new 3D interface/library themselves, closed-source. So now they have 3 libs to support, each being nice with the others, and working nicely with multitasking and multicore. Ouch! Thus, OpenGL extensions are preferable for them.

HenriH
08-16-2008, 08:22 AM
OpenGL is the way to go if it offers what developers and end users want, sure, agreed, but if functionality is missing, maybe a new library could give developers this without replacing the entire system, as a simple complement to OpenGL. I'm assuming open community OpenGL extensions cannot be created to allow for this, so if you could make some third-party add-on library to fulfill that need, perhaps that's one solution.

Basically, I can see alternative libraries having a hard time with adoption if they try to do the same thing that OpenGL does, but in a less elegant way and cause everyone to relearn everything with no definite improvements, sure. But, if there's a need here that OGL isn't providing for, that's a definite area in which another library could really shine and provide something useful, is the hope.

When life gives you lemons, make lemonade, basically, I was just throwing out some suggestions as to possible ways to do so.

When you are talking about a library on top of OpenGL, this library would of course not be hardware accelerated. IHVs would have to build driver support for this library of yours in order for it to be. And this is unlikely to happen.

While your idealism is nothing but admirable, there are already numerous third-party libraries out there to "extend" the basic functionality of OpenGL, and many of them are in fact open source. So you are not talking about anything new. Without hardware support from IHVs, it would just be a yet another OpenGL utility library.

Edit: Check something from the Internet called "Panard Vision". This is something which you have been talking about. It has/had a low-level rendering API that layers on top of OpenGL or Direct3D and a high-level game engine. There used to be a web portal (www.panardvision.com) but it seems not to be functional anymore. As I see this matter, something like this is redundant.

Yfrwlf
08-16-2008, 10:16 AM
Thus, OpenGL extensions are preferable for them.

You're saying that if AMD was completely closed, they wouldn't want to support yet another library. I agree, no one wants to support 500 libraries, and I wouldn't want them to, but if library #3 supported a feature that was liked, used, and seen as very important, they might need to until OpenGL adopted a solution to replace it, or until this third library replaced OpenGL.

A completely open source graphics library catching on and becoming used isn't impossible. Everyone deals with open source programs every day. Khronos is coming out with OpenCL among other things, so a community coming out with their own library to take care of their own needs is definitely not impossible.


When you are talking about a library on top of OpenGL, this library would of course not be hardware accelerated. IHVs would have to build driver support for this library of yours in order for it to be. And this is unlikely to happen.

Well it wouldn't be mine, it would have to have wide adoption and be important enough to have some open source driver adopt it before finally being adopted by the closed drivers as well, would be my guess.


While your idealism is nothing but admirable, there are already numerous third-party libraries out there to "extend" the basic functionality of OpenGL, and many of them are in fact open source. So you are not talking about anything new. Without hardware support from IHVs, it would just be a yet another OpenGL utility library.

Alright, cool, so if OpenGL is really lacking then I hope one of the existing libraries or something new fills in the holes and that the open and closed source drivers acknowledge it and that new hardware also supports it if need be, either permanently or until OpenGL comes up with a satisfactory solution for it. All it takes is word spread of its importance and its adoption before support will come. Of course AMD and NV will be interested if it starts getting used and is seen as the current solution.

I'm pretty sure I'm aware of all the challenges, I just don't think it's impossible for them to be overcome like others do on this board. Perhaps what is needed for adoption of a "third" library is a slip up of OGL, so that if you create something that does offer what OGL cannot, it will really shine and actually become adopted. I sure hope Khronos wouldn't sit back and let themselves be outdone, but if no one is too optimistic that they will implement those changes this century, why sit back and wait? I'm sure Microsoft isn't going to, because they don't own/control 90% of the GL pie, which seems to be around where they like being.

HenriH
08-16-2008, 10:56 AM
Alright, cool, so if OpenGL is really lacking then I hope one of the existing libraries or something new fills in the holes and that the open and closed source drivers acknowledge it and that new hardware also supports it if need be, either permanently or until OpenGL comes up with a satisfactory solution for it.

I think you are missing the point here.

There are utility libraries which extend OpenGL beyond its basic functionality, such as GLU. Game engines and frameworks usually implement some sort of wrappers too, to make the use of OpenGL more convenient for the rest of the development. AMD or NVidia will obviously not implement any sort of drivers for these systems.

A new utility library could be implemented that would contain the functionality that you say people consider OpenGL to be lacking. What would this be?

What OpenGL is lacking, in my opinion, is a clean and streamlined API. As I see it, the complexities of the API make it hard for IHVs to implement drivers. The planned Object Model would have improved the situation a great deal, but it was not to be. You could model this new utility library that you talk about after the proposed Object Model and layer it on top of OpenGL 3.0 calls, but it would not really matter, as it would not lift the burden of driver implementation from IHVs a bit.

And it would also be in vain to expect IHVs to implement hardware drivers for this utility library, as they are working hard on the next version of OpenGL.


All it takes is word spread of its importance and its adoption before support will come.

This is daydreaming, I would say. I don't see an OpenGL utility library gathering that wide an adoption at all.


I'm pretty sure I'm aware of all the challenges, I just don't think it's impossible for them to be overcome like others do on this board. Perhaps what is needed for adoption of a "third" library is a slip up of OGL, so that if you create something that does offer what OGL cannot, it will really shine and actually become adopted. I sure hope Khronos wouldn't sit back and let themselves be outdone, but if no one is too optimistic that they will implement those changes this century, why sit back and wait?

I can't help but admire your idealism and youthful enthusiasm, but let's still be realistic and keep our feet on the ground. I recommend you check out that Panard Vision and read up on Fahrenheit et al.

Carl Jokl
08-16-2008, 12:00 PM
Would it be possible for the deprecated features in OpenGL 3.0 to be migrated from core OpenGL to the GLU library? If these features have been earmarked for removal because they are unlikely to be implemented in hardware, but the CAD developers don't want to lose them because they rely on them, I wonder if these functions could not be moved to the GLU utility library and implemented in software. In practice, if many of these features were never implemented in hardware in the first place, it would just be making it official that these functions are only ever going to be software. The CAD developers would have to do a little alteration to their code, but at least the rug would not be pulled out from under them. Conversely, it might be easier to remove the deprecated functionality more quickly if there is a cushioning measure such as this put in place.

The other idea, which I think may well have been mentioned already but I will mention again if not, is to have two levels of OpenGL implementation: an OpenGL Core implementation and an OpenGL Extended implementation. The OpenGL Core implementation would cover the core graphics functionality which should be available, and be concise enough to be implemented on regular desktop computer graphics cards. The Extended implementation would contain the Core implementation and add features which higher-end graphics workstations and render stations need. These are features more likely to be implemented by high-grade workstation graphics cards. The driver writers for the desktop then have a simplified API to implement, hopefully simple enough to be implementable in hardware. The special needs of the workstation graphics users are still catered for with a set of extra features which may be implementable on high-end workstation graphics cards.

Programmatically, there could be a mechanism put in place to check whether the OpenGL platform on which you are developing is just the Core version or the Extended version. The core API is still the same for both. Hopefully these core features could be refined and improved at a faster rate than the extended features, as the gaming industry, being fast paced, would be developing only against OpenGL Core, while the more slow-paced CAD industry would develop against the Extended API. This would have the practical implication that CAD might no longer run on regular desktop hardware and would require workstation-grade graphics cards (many entry-level workstation graphics cards are still in the price range of higher-end gaming cards). I don't think this requirement is necessarily so limiting, given that the industries which use CAD may well be used to paying Autodesk through the nose for their software, and the cost of a workstation graphics card may pale in comparison to that.

HenriH
08-16-2008, 12:26 PM
GLU should definitely be updated. Stuff like matrix stacks and fixed functionality implemented as vertex and fragment shaders would be good candidates.
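
A rough sketch of the kind of helper being suggested, with made-up names rather than a proposed GLU API: a small client-side matrix stack the application owns and feeds to a shader uniform, instead of relying on the deprecated server-side stack:

#include <string.h>

#define MSTACK_DEPTH 32

typedef struct { float m[MSTACK_DEPTH][16]; int top; } MatrixStack;

static void mstack_init(MatrixStack *s)
{
    static const float identity[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    s->top = 0;
    memcpy(s->m[0], identity, sizeof(identity));
}

static int mstack_push(MatrixStack *s)        /* duplicates the top entry */
{
    if (s->top + 1 >= MSTACK_DEPTH) return 0; /* stack overflow */
    memcpy(s->m[s->top + 1], s->m[s->top], sizeof(s->m[0]));
    s->top++;
    return 1;
}

static int mstack_pop(MatrixStack *s)
{
    if (s->top == 0) return 0;                /* stack underflow */
    s->top--;
    return 1;
}

/* The current top would then be fed to a shader with something like:
 *   glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, stack.m[stack.top]);  */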

V-man
08-16-2008, 05:57 PM
So Khronos and Microsoft are the only two graphics API/library gods in existence, and there can be no third completely open source solution which isn't at the mercy of a single entity? I beg to differ. There's always a way. While it may be an uphill struggle, where there's a need there's a way.

What are you talking about? Who wants a 3rd in this dual game of Khronos and Microsoft?
I sure don't.
Why do you mention open source?
I don't need an open source driver.
The community here at opengl.org are mostly software developers and are not likely to spend time understanding the details and interactions of the OS and hardware. I should say multiple OSes and multiple GPUs.

dorbie
08-16-2008, 11:35 PM
I'm pretty sure I'm aware of all the challenges,

I respectfully but strenuously disagree.

dorbie
08-17-2008, 12:18 AM
Well software matrix operations and a stack would be useful, but shaders? It wouldn't be anywhere near worth the overhead and what hardware resources are you gonna burn for your stack, and how are you going to efficiently store your results back in uniforms for other programs? Threads have to be compiled and at least scheduled on the GPU (the instructions are cached) and matrix operations don't scale like vertex transformation, you have a few vector operations at a time and they're inherently serialized. There's not a lot of work to be parallelized. I think generic register (uniform) stacks would be genuinely useful for this but are probably impractical with scale, but it depends on architecture.

However there's definitely a class of usage that benefits from canonical transformation matrices, offloading that from the core graphics pipeline at best complicates graphics utilities that need to interoperate in meaningful ways. Even if the pipeline uses software.

Matrix libs abound, and probably belong in software (which also has its own vector instructions, let's remember) for most usage these days. I think it's a loss for students, code interoperability and convenience, but that's all. I personally have an aversion to using GLU; it always feels like it's the dumbed-down red-headed stepchild (my aversion started with gluLookAt, which is overused :-)), but maybe I'd use it for a matrix lib; on the other hand, I have matrix code already, as does just about every other graphics hacker. It's not heavily optimized and I haven't given it too much thought; I may have to now.

It's worth losing this for the cleanliness and generality of the API. Go look at the OpenGL ES 2.0 API and glslang. After your initial loss you realize that this is closer to how things would have been done if you built this in a cleanroom without the legacy. Attributes and uniforms have no special status except the function given to them by your shaders. Now you could always have (mostly) overloaded these in the past (or even ignored it), but now it's in keeping with a normal programming language. When you see that, you wouldn't want anyone stinking up the namespace with legacy tokens, provided the usage can be optimized for in the compiler. Of course now you can get some fool handing you a shader with variables called a1, a2, etc. (as a friend mentioned the other day), but bad coders have always been around; now they have an invitation to obfuscate shader code too.
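
For reference, the sort of matrix code dorbie means really is short. A minimal CPU-side lookAt that mirrors the documented gluLookAt behaviour (this is not the GLU source) fits in a few dozen lines and produces a column-major matrix ready for glUniformMatrix4fv:

#include <math.h>

static void normalize3(float v[3])
{
    float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    if (len > 0.0f) { v[0] /= len; v[1] /= len; v[2] /= len; }
}

static void cross3(const float a[3], const float b[3], float out[3])
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

static float dot3(const float a[3], const float b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

/* Builds a column-major view matrix from eye/center/up. */
static void look_at(const float eye[3], const float center[3],
                    const float up[3], float m[16])
{
    float f[3] = { center[0]-eye[0], center[1]-eye[1], center[2]-eye[2] };
    float s[3], u[3];
    normalize3(f);
    cross3(f, up, s);  normalize3(s);   /* side = forward x up */
    cross3(s, f, u);                    /* corrected up */

    m[0] = s[0]; m[4] = s[1]; m[8]  = s[2]; m[12] = -dot3(s, eye);
    m[1] = u[0]; m[5] = u[1]; m[9]  = u[2]; m[13] = -dot3(u, eye);
    m[2] =-f[0]; m[6] =-f[1]; m[10] =-f[2]; m[14] =  dot3(f, eye);
    m[3] = 0;    m[7] = 0;    m[11] = 0;    m[15] = 1;
}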

PaladinOfKaos
08-18-2008, 09:16 AM
The best solution is, of course, to have the IHVs contribute to the open-source drivers. Intel already does this with their audio, networking, and graphics drivers (although I believe the GFX drivers are released obfuscated).

If every IHV contributed an open-source backend to, say, Gallium (and I know I mention Gallium a lot, but DRI sucks, and Gallium can theoretically be run on any platform), then any new API could be created and layered on top of Gallium (which is what Gallium is designed to allow).

And for those who don't know: Gallium runs a lot like DirectX. There's a kernel-mode driver that exposes a uniform API to the runtime. That API could (assuming you can get the drivers signed by MS and APL, which I think we all agree is unlikely or impossible) be created on any platform. Unlike DX, however, that mid-level API is known, so multiple runtimes can take advantage of it. OpenGL, OpenVG, even a native Linux DirectX implementation. All you need is the runtime and an X extension to support the context creation.

PkK
08-18-2008, 12:28 PM
The best solution is, of course, to have the IHVs contribute to the open-source drivers. Intel already does this with their audio, networking, and graphics drivers (although I believe the GFX drivers are released obfuscated).


Download the latest Mesa tarball. Look at src/mesa/drivers/dri/i965. It doesn't look obfuscated. It even has some comments explaining the code.

Philipp

barthold
08-18-2008, 04:38 PM
The slides from the OpenGL BOF are now available:

http://www.khronos.org/library/detail/2008_siggraph_opengl_bof_slides/

Barthold

Brianj
08-18-2008, 08:28 PM
Is there any word on OpenGL 3 tutorials from the ARB? Cause not everyone can plunk down $3000 for a five day course.

dor00
08-18-2008, 09:48 PM
Is there any word on OpenGL 3 tutorials from the ARB? Cause not everyone can plunk down $3000 for a five day course.

Well, one thing I really expected from OpenGL 3 was a real SDK to come with it. Sadly, we still don't have that. Maybe that's why the DX grass is greener for some people at the moment :(

RazielNZ
08-19-2008, 12:26 AM
When I picked up the 3.0 spec, I thought, "excellent, two years have gone into making a shiny new API"... having read it, it seems all we have is even more extensions on top of OpenGL 2. Is this crap really OpenGL 3.0? This is just unacceptable, we waited two whole years, and what? We're supposed to keep using the same API we've been using the last 15 years along with all of its problems, no proper SDK, and not complain?

My expectations were to have a new API started from a blank piece of paper - a chance for OpenGL to really redesign itself into a modern API. I also thought it would finally be accompanied by a proper SDK with headers and libraries that just worked so long as you are not using extensions, instead, we MUST use extensions just to access even the core OpenGL 3 functionality. I figured the new SDK would also have a cross-platform way to get the extensions, so at the very least, if I did choose to step into the realm of extensions, at least I could do it in a cross-platform manner that is officially supported by the SDK itself. The SDK I envisioned also had proper (up-to-date) documentation for programmers, not cryptic specs for IHV's and ancient reference manuals for version 1.3. It would also have a wealth of tutorials and sample code - I know it is really difficult to understand (sarcasm) but look at Direct3D and just about every other API's SDK on this planet made in the last 10 years for an example of what I'm talking about.

I also expected fast turn-around from our new shiny API and the Khronos Group; none of this "2 years and we still give you absolutely nothing" crap... I was wrong to have high expectations of the Khronos Group: this spec is one huge disaster, and you did not help OpenGL at all. For all I care, OpenGL can cease to exist now; I'm sick and tired of putting up with it. Simply put, OpenGL is one huge mess, and it has been for the last 7 years now. We don't need even more extensions; we need a proper support group that will go back to its roots, fix it, and modernize it into a good, modern API that is easy to use and accessible to everyone.

Two years have just been wasted, OpenGL is even more ancient now, Direct3D is looking like a pretty attractive, modern API. To try and combat this, I propose that we do the following:
1) Rename OpenGL 3.0's spec to OpenGL 2.3 - this sticks to the idea of a 3 release, but clarifies that it is merely a small extension of OpenGL 2.
2) Break away from the Khronos Group and get rid of anyone who had anything to do with the OpenGL 2.3 spec, thus making sure they don't have any input into the direction of the true OpenGL 3.0 spec... I'm sorry, but you guys clearly do not know what the software developers who actually have to use your APIs want. You also failed to understand how important it is to make a modern OpenGL API we can all use for the next few years without feeling like we're stuck in the 80s.
3) remove the deprecations from OpenGL 2.3 and just leave all the old functionality where it is - I'm sorry, I can see where you were trying to go with this, but you fail... OpenGL 2.x should just be 'that thing' video card manufacturers support for legacy applications. Ideally, a group would be set up to allow older applications to take advantage of OpenGL 3's features via extensions, very much like OpenGL 2 today.
4) Finally, a proper SDK needs to be developed, starting with good user documentation, new libraries and new headers for OpenGL 3.0 that does not require the use of extensions for any core functionality, this SDK also has to be maintained, there is no final spec release made before the SDK is updated as well...

Well, this is my view on the only way to fix this mess. If this doesn't happen, I will simply drop all support for OpenGL, and I expect that everyone else will probably do the same. It wouldn't surprise me if this specification release as it is causes the end of OpenGL (gee, thanks khronos group).

Things have to be corrected. I have just summed up in 4 points what the monkeys in the Khronos Group couldn't even decide with two whole years on their hands. Well, this is what needs to be done for OpenGL to become a modern API; as it is, I refuse to accept OpenGL 3.
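
For readers who haven't done it, the "extensions just to access core functionality" complaint refers to boilerplate like the sketch below: every post-1.1 entry point on Windows (and post-1.3 elsewhere), core or not, has to be fetched by name through the platform's get-proc-address hook while a context is current. glBindVertexArray is used as the example here, and the PFN typedef is assumed to come from an up-to-date glext.h:

#ifdef _WIN32
#  include <windows.h>
#endif
#include <GL/gl.h>
#include <GL/glext.h>
#ifndef _WIN32
#  include <GL/glx.h>
#endif

#ifdef _WIN32
#  define GET_GL_PROC(name) wglGetProcAddress(name)
#else
#  define GET_GL_PROC(name) glXGetProcAddress((const GLubyte *)(name))
#endif

static PFNGLBINDVERTEXARRAYPROC pglBindVertexArray;

/* Must be called with a GL context current. Returns 0 on failure. */
static int load_gl3_entry_points(void)
{
    pglBindVertexArray =
        (PFNGLBINDVERTEXARRAYPROC)GET_GL_PROC("glBindVertexArray");
    /* ...and so on for every other post-1.1 function the app touches... */
    return pglBindVertexArray != NULL;
}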

dor00
08-19-2008, 12:58 AM
As I said in another topic, I too expected a completely new API + SDK (a real SDK; I am tired of "community links").

Khronos MUST seriously stop a bit and reevaluate themselves.

Korval
08-19-2008, 01:21 AM
Considering that no real SDK was promised (certainly not after providing the fake one), there was no rational reason to expect a new, real SDK. They certainly never promised one.

Longs Peak was a promise: a declared intent that they were making progress on. They retracted that, and cut contact for 9 months because they knew how we'd react. By all means, take them to task for that. But they never promised a real SDK, so their not delivering is meaningless in that sense.


I expect that everyone else will probably do the same.

What "everybody else"? Most developers who could have abandoned OpenGL did so years ago. GL is the only thing that Linux and MacOSX have for hardware 3D access. And it's the only choice for cross-platform 3D development.

Nobody uses OpenGL because they want to. They use it because they have no other choice.

RazielNZ
08-19-2008, 01:42 AM
there was no rational reason to expect a new, real SDK. They certainly never promised one.
And this is partly why OpenGL 3.0 fails. Every other API has an SDK; if not that, they AT LEAST have proper documentation. They don't require people to use hacks to access newer functionality. I consider the whole extension mechanism to be one giant hack, and I don't care what anyone else thinks; it's ugly as hell whichever way you look at it.


Longs Peak was a promise: a declared intent that they were making progress on. They retracted that, and cut contact for 9 months because they knew how we'd react.
Don't you mean a lie? If they knew how we'd react they probably should not have released the spec and just given up and stepped down from the group before this disaster.


Nobody uses OpenGL because they want to. They use it because they have no other choice.
Don't you think there is something wrong here??? It should be an API people want to use... I really can't believe that people are happy with things staying like this, OpenGL really could have been a great API, oh well, guess it's over though.

If the ARB wants to try and save OpenGL, I suggest you follow through with my suggestions, 3.0 becomes 2.3, and a group who is actually serious about the future of OpenGL should draft up a true 3.0 spec for the community to comment on. It is not too late to withdraw the 3.0 spec.

dor00
08-19-2008, 01:53 AM
What "everybody else"? Most developers who could have abandoned OpenGL did so years ago. GL is the only thing that Linux and MacOSX have for hardware 3D access. And it's the only choice for cross-platform 3D development.

Nobody uses OpenGL because they want to. They use it because they have no other choice.

You are right; check out this evil idea: "DirectX is better supported and has a real SDK and good development tools, let's stop the Linux/Mac support." Or "Let's migrate to Windows."

I wonder why Linux developers and real supporters are so quiet; maybe they don't realize the impact of gaming on Linux's future.

JoeDoe
08-19-2008, 02:25 AM
Yes, a new API might have helped Linux become a far more popular OS, because games with an OpenGL 3.0 renderer could have run well on this OS. As it stands, the whole Linux community receives nothing with this GL "3.0" release.

RenderBuffer
08-19-2008, 02:47 AM
I wonder why Linux developers and real supporters are so quiet; maybe they don't realize the impact of gaming on Linux's future.

I realize that the members of the ARB are working hard and trying to do what they see as best for the community, and I appreciate their hard work. Of course I'm disappointed; I expect that many members of the ARB are too. It's not helpful to get angry with them, and I think a number of people feel the same way.


It is not too late to withdraw the 3.0 spec.

The naming convention is somewhat arbitrary, and as far as I'm concerned this could be called OpenGL 1.7 or OpenGL 1700. The next revision may not be called "OpenGL" at all. "Fixing" the revision number will not alleviate misgivings, which are about the API itself.

skynet
08-19-2008, 03:44 AM
None of the slides contains even a single word on the fate of LP. No "what went right, what went wrong", no excuses... All the presentations sound like "Everything went as planned, we are happy with the outcome!". :-/

RazielNZ
08-19-2008, 04:59 AM
The naming convention is somewhat arbitrary, and as far as I'm concerned this could be called OpenGL 1.7 or OpenGL 1700. The next revision may not be called "OpenGL" at all. "Fixing" the revision number will not alleviate misgivings, which are about the API itself.

This is not true at all... People were expecting the 3.0 API release to fix things and make OpenGL a modern, competitive API... instead, it just proved that the OpenGL ARB does not have any intentions of moving the API into the future and shows that our ARB is now overrun by morons with legacy applications that think that avoiding a proper update to the spec will allow them to keep telling their customers that their software is state of the art (because it uses the latest OpenGL)... well... sorry, but now the entire API is ancient and stuck in a phase where it can't move forward.

So it is in fact a major thing. Do you think any 3.x release will suddenly modernize the API? Or give us a proper SDK? The answer is no... If this is the final 3.0 spec, we're basically screwed, and so is OpenGL, because it means the API won't modernize for at least the next 5 years, until the 4.x stream.

By renaming the spec to 2.3, it will essentially tone down its importance, allowing for a proper 3.0 spec to be released by people who are actually serious about the future of the API.

Yes... thanks to the Khronos group we have just lost two years; catching up with Direct3D at this point would be near impossible. However, perhaps it is not too late to save OpenGL. But in order to do this, we need to assemble a group who are serious about bringing OpenGL into the future. This group would be tasked with coming up with a spec good enough for all of the 3.x releases, which could easily span the next 5 to 10 years, so it really needs to be done right the first time.

As a result, it would be advisable to regularly release draft specifications to the public for commenting as well.

tsuraan
08-19-2008, 06:10 AM
I wonder why Linux developers and real supporters are so quiet; maybe they don't realize the impact of gaming on Linux's future.

As a Linux user (and developer), I can say that there's probably no outcry from us because we really don't expect much. We're pleasantly surprised when non-open hardware drivers don't cause daily kernel panics, and absolutely overjoyed when somebody actually publishes a real commercial game for us to play (was Neverwinter Nights the last non-FPS for Linux?). We certainly don't expect any standards bodies to be going out of their way to help us out, and I'm guessing that every Linux user who plays video games either owns a Windows machine for that purpose, or a console. Linux gaming has been dead for much longer than OpenGL 3.

cignox1
08-19-2008, 06:48 AM
I wonder why Linux developers and real supporters are so quiet; maybe they don't realize the impact of gaming on Linux's future.

As a Linux user (and developer), I can say that there's probably no outcry from us because we really don't expect much. We're pleasantly surprised when non-open hardware drivers don't cause daily kernel panics, and absolutely overjoyed when somebody actually publishes a real commercial game for us to play (was Neverwinter Nights the last non-FPS for Linux?).


I suppose then that this could have been a big step toward Linux gaming: if OGL 3.0 was nice and powerful enough to move developers (at least some), then porting would have been far easier and perhaps even rewarding... Linux distributions should be interested in this, since there are so many people who won't switch to Linux just because of games...

tsuraan
08-19-2008, 08:08 AM
I suppose then that this could have been a big step toward Linux gaming: if OGL 3.0 was nice and powerful enough to move developers (at least some), then porting would have been far easier and perhaps even rewarding... Linux distributions should be interested in this, since there are so many people who won't switch to Linux just because of games...

I think if OpenGL 3 had been as nice an API as everybody says D3D10 is, plus having full support for modern video cards (geometry shaders, etc) under Windows XP, that would probably have been enough to get more people using it. Linux users would possibly benefit from that, but there's the rest of the Windows infrastructure that many developers seem to like. Of course, Wine tends to run OpenGL-based games much better than D3D games, so just having games done in OpenGL instead of D3D would be a good step for Linux.

I don't know who the ARB listened to, but it doesn't look like they were listening to the game developers. I doubt that they would have had much interest in the opinions of the Linux distributions either, given their current track record.

Mars_999
08-19-2008, 08:50 AM
Sigh, people, stop crying about GL3 in these threads. It's over, move on, deal with it. Yes, they heard the outcry from us. Maybe they will do something about it, maybe they won't. Your choice is simple: use it or move to DX10.

It's like a break-up where one of the partners still thinks they're going out!!

Auto
08-19-2008, 09:58 AM
Well I'm happy - mainly because of EXT_framebuffer_blit and its relationship to multi-sampling and FBOs, which, if I'm not mistaken, allows a custom resolve order in OpenGL. This is something I've been waiting on for ages, and it looks great.

Generally all draw calls for my code are wrapped anyway so I'm not really fussed about graphics APIs. Working with consoles cuts through any sort of API syntax wishlist altogether really, so nowadays I'm more interested in features than how nice an API is.

In fact I would prefer it if HW vendors could get together and provide a universal push-buffer format API than be messing about with all this high level stuff, though that may well just be an intermediate option anyway.

Having attended Siggraph this year, if Intel does actually pull the Larrabee project off with competitive performance when the HW ships, I think we may well see the real-time graphics world change quite considerably in the next few years.

I'm saying that in the context of this 'if': If performance is good, and other HW vendors feel the need to compete in terms of programmability, then it's reasonable to assume that both GL and DX will eventually become history as more people get access to the hardware and write their own custom renderers.

How long this takes is another thing, but I think Intel have their eyes in the right direction and appear to be moving quite swiftly.

But anyway - digressions aside - thanks for the extension, very useful.

Leadwerks
08-19-2008, 11:10 AM
It's like a break-up where one of the partners still thinks they're going out!!
That is the best analogy I have ever heard. Khronos just told us they're not that into us. They'll still have sex with us, but they don't want to take us out on any more dates.

We're either going to move in the direction of adding cross-platform support for Mac and possible Linux, or focus on DX11 and maybe whatever new console MS comes out with next. If OpenGL 3.1 turns out well, we might go cross-platform, but Khronos has consistently proven to be an absolute failure with everything they do. I have never criticized Collada on this forum because of Khronos, but that is another absolute joke from the same failures. I do not expect them to suddenly start doing things right after so much failure.

There is also nothing that makes me think AMD will ever have working 3.0 drivers. All the new extensions, plus support for all the old garbage?! They don't even have working 2.1 drivers. AMD has been pretty good about fixing bugs in my experience with them, but I think the 2.1/3.0 spec is too complicated to expect working drivers, and NVidia's support is an exception. Working drivers for 3.1 (as it is described now) would be much more likely.

So the situation is exactly the same as before: Only NVidia fully supports OpenGL, AMD does halfway, and Intel's drivers are a joke. Khronos managed to spend two years doing absolutely nothing, and then put out a spec where their main feature is a list of promises. It's brilliant, in a way. If you told me I had to write a spec that changes nothing, but it had to look like progress was being made, I wouldn't have come up with that.

All Khronos does is take other people's ideas, put their logo on it, screw it up, and call it an "industry standard". Since they get paid by Sony instead of selling a product, the market forces that would have put them out of business long ago do not apply. So I think we will continue to see astounding incompetence and OpenGL 3.1, as it is described now, will never come to be. They get paid no matter what, so they'll just keep spending Sony's money and doing no work until they get cut off from their source of funding.

I think the most accurate assessment of this news is that OpenGL is no longer being developed.

Chris Lux
08-19-2008, 01:44 PM
Well I'm happy - mainly because of EXT_framebuffer_blit and its relationship to multi-sampling and FBOs, which, if I'm not mistaken, allows a custom resolve order in OpenGL. This is something I've been waiting on for ages, and it looks great.
no, custom resolves are not supported currently. A custom resolve means that you can access all subsamples in a shader and do the downsampling yourself, for example after tone mapping.

ebray99
08-19-2008, 01:52 PM
For starters, I disagree that OpenGL3 is a failure; I don't think it's flawless by any means, but I don't think it's a failure. It actually accomplishes several of the goals it set out to do, but in a less aggressive manner. One of the things that was mentioned at the Siggraph BOF was the mandatory support of DX10 competitive functionality for a true OpenGL 3.0 implementation. Also, the direct_state_access extension is required too. That said, the following goals are accomplished...

1.) Faster object creation and state changes promised by the new object model.
- vertex_array_objects provide an atomic, optimized path for setting up vertex-buffer/vertex-array state (see the sketch after this list).
- direct_state_access allows for fast creation and modification of texture objects, buffer objects, framebuffer objects, and other resource types.

2.) DirectX 10 competitive functionality.
- for someone to have an OpenGL 3.0 implementation, geometry shaders must be supported.
- render to vertex array must also be supported.
- texture arrays must be supported.
- plus more.

3.) Versioning.
- the new context creation mechanism allows for solid versioning. It simply limits the scope of an OpenGL specification for defining (and implementing) interactions with older functionality. For instance, it makes very little sense to have interactions defined between geometry shaders and fixed function, since fixed function is an extremely limited form of defining a shader. OpenGL 3.0 is compatible all of the way back to 1.0, but the next version will most likely be compatible back to 2.1, or something similar.
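As a rough illustration of the VAO point in item 1 above (not taken from the spec), here is what that atomic setup path can look like in practice; the helper name and the single-attribute layout are illustrative, and a GL 3.0 context with the new entry points loaded (via glext.h or an extension loader) is assumed.

#include <GL/gl.h>
/* GL 3.0 entry points (glGenVertexArrays etc.) assumed available via glext.h or a loader. */

/* Record the vertex-array state for a position-only buffer once, then reuse it. */
GLuint make_position_vao(GLuint vbo)
{
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (const void *)0);
    glEnableVertexAttribArray(0);

    glBindVertexArray(0);
    return vao;
}

/* Per draw call, all of the above collapses to:
     glBindVertexArray(vao);
     glDrawArrays(GL_TRIANGLES, 0, count);  */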

Failures:
The biggest failure of the ARB was failing to limit the backwards compatibility of 3.0. However, with the new revisioning system, I think that the next version will limit the scope just fine. I also got the impression that current IHVs simply didn't want to break backwards compatibility in 3.0. I think we'll see 3.1 as functionally identical to 3.0, but without most of the backwards compatibility.

Also, the object model didn't make it in, but I think they still managed to solve most of the performance issues that the object model was meant to solve. That said, I wouldn't see this as a total failure, or even a failure at all.

The biggest issue, and I think the one that people are most upset about, is the lack of commitment to the original goal. People planned for a new object model, bought into it, and then got something completely different. In the end, though, the API that we were presented with is totally workable, and solves most of the issues the original design was supposed to solve. Although, from the sound of it, I guess my desires from an API might be different than other people's.

Also, at the BOF, ATI/AMD stated that they would be releasing OpenGL 3.0 drivers by Q1 2009. As for the quality of them, I'm not sure, but they made a public announcement with respect to their driver plans (which is historically very rare for them). That said, I would guess that ATI/AMD is fully behind OpenGL 3.0 and is very willing to throw resources at it.

Kevin B

Timothy Farrar
08-19-2008, 01:53 PM
First off, how many of you that are bitching about GL, actually program in DX also?

Second, how many actually tried profiling the performance difference between a properly programmed GL version of the same engine and a properly programmed DX9 or DX10 version on good drivers (ie the newest ones from NVidia)?

Got numbers?

If you don't have specific documented performance problems then my guess is that you just must be complaining that the current GL API is too hard to use compared to DX? Which is really stupid because with either you can get great performance, and both GL and DX interface should be something like a tiny fraction of your code base.

Now as regards to GL3. Have you even fully read the spec and understand the usefulness of what is now in GL3?

Do you realize that most game developers are still DX9 level (because of consoles, and because of Vista requirement of DX10), and that compared to DX9, GL3 offers a tremendous set of advantages (no contest really).

Besides, do you even know what GL3 is missing compared to DX10? Given that ATI has announced that it intends to support the GL3 extension pack, what is left?

Answer, primarily just two things: constant buffers and some state objects (GL3 has some of the more important ones already: VAOs and FBOs).

So really, GL3 is awesome!

There are just too many things the ARB got right this time. Check out MapBufferRange(), a much better interface than what you get with DX9 or DX10. Check out the changes which have made it into the framebuffer interface (clean support for mixed formats, mixed bit depths, etc). Etc, etc.

So if you want to keep complaining, do yourself a favor and try DX and post some numbers to backup what you are saying, otherwise how can you expect anyone to take you seriously?
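For anyone who hasn't tried the MapBufferRange() path praised above, here is a minimal sketch of it; the helper name and the buffer argument are illustrative, a GL 3.0 context is assumed, and the entry points are assumed to be loaded (e.g. via an extension loader on Windows). Error handling is omitted.

#include <GL/gl.h>
#include <string.h>
/* glMapBufferRange and friends come from the GL 3.0 / ARB_map_buffer_range headers. */

void update_vertices(GLuint vbo, GLintptr offset, GLsizeiptr size, const void *src)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* Map only the range we intend to overwrite; INVALIDATE_RANGE tells the
       driver it may discard the old contents instead of synchronizing. */
    void *dst = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
    if (dst) {
        memcpy(dst, src, (size_t)size);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}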

Korval
08-19-2008, 02:24 PM
- direct_state_access allows for fast creation and modification of texture objects, buffer objects, framebuffer objects, and other resource types.

Direct state access is not only not core 3.0 functionality, it is not even written against the core 3.0 spec, so it doesn't wrap everything in 3.0.


- for someone to have an OpenGL 3.0 implementation, geometry shaders must be supported.

You realize that they aren't actually supported, right? Not in the 3.0 core.


First off, how many of you that are bitching about GL, actually program in DX also?

Wait, what? What does that have to do with anything? People don't have the right to complain about what they have regardless of whether they use the alternative?


Now as regards to GL3. Have you even fully read the spec and understand the usefulness of what is now in GL3?

Yes. I also read about what Longs Peak was going to be. I prefer Longs Peak. By far.


and that compared to DX9, GL3 offers a tremendous set of advantages (no contest really).

A decent API not being one of them, of course.

dor00
08-19-2008, 02:46 PM
First off, how many of you that are bitching about GL, actually program in DX also?

Second, how many actually tried profiling the performance difference between a properly programmed GL version of the same engine and a properly programmed DX9 or DX10 version on good drivers (ie the newest ones from NVidia)?

Got numbers?

If you don't have specific documented performance problems then my guess is that you just must be complaining that the current GL API is too hard to use compared to DX? Which is really stupid because with either you can get great performance, and both GL and DX interface should be something like a tiny fraction of your code base.

Now as regards to GL3. Have you even fully read the spec and understand the usefulness of what is now in GL3?

Do you realize that most game developers are still DX9 level (because of consoles, and because of Vista requirement of DX10), and that compared to DX9, GL3 offers a tremendous set of advantages (no contest really).

Besides, do you even know what GL3 is missing compared to DX10? Given that ATI has announced that it intends to support the GL3 extension pack, what is left?

Answer, primarily just two things: constant buffers and some state objects (GL3 has some of the more important ones already: VAOs and FBOs).

So really, GL3 is awesome!

There are just too many things the ARB got right this time. Check out MapBufferRange(), a much better interface than what you get with DX9 or DX10. Check out the changes which have made it into the framebuffer interface (clean support for mixed formats, mixed bit depths, etc). Etc, etc.

So if you want to keep complaining, do yourself a favor and try DX and post some numbers to backup what you are saying, otherwise how can you expect anyone to take you seriously?


You SIR miss something.

Before reading scroll back and read what i said about Linux/OpenGL.

Now, picture the new users. I am NOT talking about people who already know OpenGL very well; I am talking about new users who want to make games/3D stuff. They look at OpenGL 3 and they get:
- an old API
- no documentation (oh wait, some books for $50)
- no technical support (please don't say community forums)
- no SDK
- an uncertain future for drivers/versions
(and the list can continue)

They don't even care about a 1-2% performance boost or the ultra-high-tech extensions. Got the point now???

P.S. Yeah, I do D3D also. At least in D3D there is only one way to load a texture (e.g.), and it works on all cards. The same goes for almost all of the D3D core: it works.

ebray99
08-19-2008, 03:06 PM
Direct state access is not only not core 3.0 functionality, it is not even written against the core 3.0 spec, so it doesn't wrap everything in 3.0.
While you're correct, this must be supported in order to have an OpenGL 3.0 compliant implementation. How that works, I'm not exactly sure, but this was stated by the ARB at the BOF.


You realize that they aren't actually supported, right? Not in the 3.0 core.
The same goes for this as well. It must be supported in order to call yourself 3.0 compliant.

So while I'm not sure how something can be required for 3.0 compliance yet not be in the specification, I'm guessing this is just an initial implementation for community review. Can someone from the ARB explain this?

These were things discussed at the BOF. Whether or not these are mentioned in the OpenGL 3.0 specification PDF, I'm not sure.

Kevin B

Korval
08-19-2008, 03:08 PM
this was stated by the ARB at the BOF.

Let me get this straight.

The OpenGL Architectural Review Board, at the BoF, said that in order to have a 3.0 implementation of OpenGL, you must implement an extension that was written against OpenGL 2.1.

I'm sorry, no. That's total BS. Unless you can provide a link to a website or something, I'm calling shenanigans on that.

Same goes for geometry shaders, though at least that is an extension written against GL 3.0.


Whether or not these are mentioned in the OpenGL 3.0 specification PDF

They aren't. If it isn't in the GL 3.0 specification, it isn't in 3.0 core. That's what the specification is! It defines what is and is not core.

Timothy Farrar
08-19-2008, 03:14 PM
The GL spec, GLSL spec, and extension specs are rather complete API documentation by themselves. Otherwise you're about one Google search away from a tremendous amount of free, easily accessible GL documentation and examples. Apple, for one, has a lot of extremely good GL documentation, such as http://developer.apple.com/graphicsimaging/opengl/optimizingdata.html .

Driver bugs are not the responsibility of the ARB. It is the vendor's responsibility. You got issues with a driver, bring it up with the vendor.

dor00
08-19-2008, 03:17 PM
Driver bugs are not the responsibility of the ARB. It is the vendor's responsibility. You got issues with a driver, bring it up with the vendor.



Have fun asking ATI to fix the drivers problems.

Korval
08-19-2008, 03:24 PM
Driver bugs are not the responsibility of the ARB.

Actually, they are, partially. If OpenGL weren't needlessly complicated, there would be fewer driver bugs.

ebray99
08-19-2008, 03:24 PM
I'm sorry, no. That's total BS. Unless you can provide a link to a website or something, I'm calling shenanigans on that.

Same goes for geometry shaders, though at least that is an extension written against GL 3.0.

Does anyone who was at the BOF care to either back me up or correct me on this?


While I have not found documentation stating these extensions are required, I have found documentation stating that NVidia and AMD will both support them here:
http://www.khronos.org/developers/librar...BOF%20Aug08.pdf (http://www.khronos.org/developers/library/2008_siggraph_bof_opengl/Vendor%20Announcements%20SIGGRAPH%20BOF%20Aug08.pdf)


Kevin B

Korval
08-19-2008, 03:55 PM
While I have not found documentation stating these extensions are required, I have found documentation stating that NVidia and AMD will both support them here:

No, ATi said that they would be supporting the OpenGL 3.0 extension pack. Precisely what this means is unclear. It could simply be the "core extensions" that GL 3.0 brings out. That is, extensions to 2.1 that allow 2.1 applications to use certain core features of 3.0 (like VAO, etc). Another interpretation is that they would be supporting all of the ARB extensions revealed with 3.0 (things like geometry shaders and instanced rendering). But direct state access is not one of those.

Basically, it's up to interpretation as to what ATi intends to support. But I wouldn't get my hopes up for more than just the core extensions. ATi is pretty strongly against implementing anything more than the bare minimum necessary for OpenGL. And that includes ARB extensions.

Furthermore, there's a big difference between a few implementers who say that they're going to support something and saying that GL 3.0 support requires that something.

ebray99
08-19-2008, 04:08 PM
In the context of the BOF, the "extension pack" referred to is all of the extensions listed in the slides, plus a couple more. I understand your skepticism, but certainly hope that you're wrong with respect to your statements, otherwise I'd think I was intentionally misled.

It would be really nice for someone else who was at the BOF or even someone on the ARB to come in and either correct me or verify what I'm saying. =)

Kevin B

Korval
08-19-2008, 04:20 PM
In the context of the BOF, the "extension pack" referred to is all of the extensions listed in the slides, plus a couple more. I understand your skepticism, but certainly hope that you're wrong with respect to your statements, otherwise I'd think I was intentionally misled.

"plus a couple more?" I could understand the assumption that the extensions on the slides would see widespread implementation, but where do you get the "couple more" from?

As for being intentionally misled, isn't that just par for the course with the ARB, who've been doing it for the last 9 months?

dorbie
08-19-2008, 04:29 PM
As for being intentionally misled, isn't that just par for the course with the ARB, who've been doing it for the last 9 months?

Dude, get over yourself.

ebray99
08-19-2008, 04:30 PM
"plus a couple more?" I could understand the assumption that the extensions on the slides would see widespread implementation, but where do you get the "couple more" from?

The couple more I'm referring to were simply mentioned during the BOF presentation and not on the slides.


As for being intentionally misled, isn't that just par for the course with the ARB, who've been doing it for the last 9 months?

I personally don't feel like I was misled at all during the whole GL3 thing. I think they failed to communicate effectively with the community, and I think most of this backlash is a result of their prolonged silence. Basically, they didn't mislead anyone; they just didn't lead anyone anywhere. That said, I don't think the API itself is a failure in any way. Their PR on the other hand could use a little work.

Kevin B

Brolingstanz
08-19-2008, 04:39 PM
And as was pointed out in the slides, the main obstacle to GL's evolution in recent years has been the difficulty in layering new functionality on top of an aging API, a difficulty which has now been put to pasture, thanks to the new deprecation model.

Onward HO! Mush! Muuuush! *indistinct barking and whipping*

skynet
08-19-2008, 05:00 PM
Lifecycle of a feature:
1. become an extension
2. become core feature
3. become deprecated, but still be in core
4. removed from core, but maybe existing further as extension (the same as in 1.?)
5. finally die out

The individual steps are not bound to a particular version of the API. Now imagine N features with overlapping, 'phase-shifted' lifecycles... it must be a nightmare to manage.


the main obstacle to GL's evolution in recent years has been the difficulty in layering new functionality on top of an aging API,

Not being a native speaker, I'm a bit unsure how to interpret "put to pasture". But I really want to know why NV and ATI didn't revolt more against the current API and say "enough! we'd better start over with a clean sheet of paper."

Revolution through 3volution!
(one might think, it took the last 8 months to invent that motto)

Brolingstanz
08-19-2008, 05:14 PM
[A race horse is "put to pasture" when it can no longer compete (cf. retired).]

Leadwerks
08-19-2008, 08:08 PM
Deprecated = Khronos intends to do something about it, someday. We already knew what features were "deprecated". Saying you intend to do something is not the same as doing it.

In any case, the entire problem all along has been AMD's and Intel's drivers. I am very skeptical about their ability to produce working drivers, since OpenGL "3" does not make that task any simpler.

Ilian Dinev
08-19-2008, 09:32 PM
First off, how many of you that are bitching about GL, actually program in DX also?

Second, how many actually tried profiling the performance difference between a properly programmed GL version of the same engine and a properly programmed DX9 or DX10 version on good drivers (ie the newest ones from NVidia)?

A commercial app of mine runs faster and smoother under OpenGL. Some research on instancing also fared better. I just love the separate FIFOs and ARB shaders (compiled via Cg).

Auto
08-20-2008, 02:24 AM
no, custom resolves are not supported currently. A custom resolve means that you can access all subsamples in a shader and do the downsampling yourself, for example after tone mapping.

That's slightly confusing as the EXT_framebuffer_multisample
spec states:

"...the application explicitly controls when the
resolve operation is performed. The resolve operation is affected
by calling BlitFramebufferEXT (provided by the EXT_framebuffer_blit
extension) where the source is a multisample application-created
framebuffer object and the destination is a single-sample
framebuffer object (either application-created or window-system
provided)...
"

Is this not the same? It would seem this is the kind of exposure I'm looking for...

Xmas
08-20-2008, 02:27 AM
Is this not the same?
No, because you only control when the resolve operation is performed, not how.

bobvodka
08-20-2008, 02:28 AM
no, that just forces the resolve to a non-AA'd texture/render target like a normal texture.

What Chris Lux is talking about is being able to access the AA'd texture directly and the subsamples of it.

Auto
08-20-2008, 02:38 AM
Ah... so is it that you can't actually read an msaa fbo as a texture then?

[Edit]

Having just done some RTFM-ing of the extension spec:

"(3) Is ReadPixels (or CopyPixels or CopyTexImage) permitted when
bound to a multisample framebuffer object?

RESOLVED, no

Resolved by consensus, prior to May 9, 2005

No, those operations will produce INVALID_OPERATION. To read
the contents of a multisample framebuffer, it must first be
"downsampled" into a non-multisample destination, then read
from there. For downsample, see EXT_framebuffer_blit.
"

Hence I presume this answers my question...

So to roll your own resolve with this method (that is, hack), would it be possible to do a framebuffer blit to a non-msaa rendertarget with nearest filtering, but say, with the target fbo resolution the same size as the msaa target (ie: 4x) to minimize the downsampling effect. Then do your post, and your own shader resolve after? May be a bit dodgy but possibly the closest thing to a custom resolve...

Don't Disturb
08-20-2008, 04:36 AM
No. When you resolve a multisampled framebuffer the size arguments to BlitFramebufferEXT are ignored.
But even if that wasn't the case you wouldn't get the individual sample colours.
OT anyway

bertgp
08-20-2008, 06:36 AM
But I really want to know why NV and ATI didn't revolt more against the current API and say "enough! we'd better start over with a clean sheet of paper."

The way I understand it is that the deprecation model states that new features do not need to specify interactions with deprecated features. Let's say for example some kind of state object extension (which would be core later) is created; it will not need to handle interactions with all the deprecated features. This does simplify things quite a bit. Having read a few extension specifications, I can attest that much of their complexity is caused by interactions with the immediate mode.

Auto
08-20-2008, 07:57 AM
No. When you resolve a multisampled framebuffer the size arguments to BlitFramebufferEXT are ignored.
But even if that wasn't the case you wouldn't get the individual sample colours.
OT anyway

Right ok, well that's torn it then hasn't it. Oh well thanks for the replies anyway.

Michael Gold
08-20-2008, 09:28 AM
No. When you resolve a multisampled framebuffer the size arguments to BlitFramebufferEXT are ignored.
But even if that wasn't the case you wouldn't get the individual sample colours.
OT anyway

This is incorrect. The size and position of the blit rectangle are definitely used. The restriction is that the source and dest sizes must match, i.e. you cannot downsample and rescale in a single operation. You can, however, downsample a subrect.

But you are right, BlitFramebuffer gives no access to individual samples. This restriction exists, in part, because the sample locations are undefined.
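For anyone following this sub-thread, here is a minimal sketch of the explicit resolve being described, using the EXT_framebuffer_blit entry points; the names msFbo, resolveFbo, width and height are illustrative, both framebuffers are assumed to be complete and the same size, and the EXT entry points are assumed to be loaded.

#include <GL/gl.h>
/* EXT_framebuffer_object / EXT_framebuffer_blit tokens and entry points assumed loaded. */

void resolve_msaa(GLuint msFbo, GLuint resolveFbo, GLint width, GLint height)
{
    /* Read from the multisample FBO, draw into the single-sample one. */
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msFbo);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFbo);

    /* Source and destination rectangles must match in size for a resolve;
       the driver averages the samples, so individual samples are never visible. */
    glBlitFramebufferEXT(0, 0, width, height,
                         0, 0, width, height,
                         GL_COLOR_BUFFER_BIT, GL_NEAREST);

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}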

pjmlp
08-20-2008, 11:43 AM
Lifecycle of a feature:
But I really want to know why NV and ATI didn't revolt more against the current API and say "enough! we'd better start over with a clean sheet of paper."


Maybe because nowadays they seem to be more interested in DirectX anyway.

Go to any of their developer sites and compare the amount of information they provide for DirectX developers with what they provide for OpenGL ones.

I might be unfair in saying this, but that is how it looks.

Brolingstanz
08-20-2008, 01:28 PM
I think the deprecation model was the only sustainable way to evolve GL indefinitely. I see it as a way to amortize the pain of a complete rewrite over the course of some unspecified/indeterminate period of time.

Besides, a complete rewrite every few years or so would probably have been the same dog with different fleas.

CrazyButcher
08-20-2008, 01:38 PM
They are interested in their consumers (end users/ISVs). If the majority of them use DX for games, that's just fair. If the other majority uses OpenGL for serious applications and is "fine" with the state of the tools (buying gDEBugger... whatever), that's just the way things run.

Many seem to take this on an almost religious level, as if they were led wrong on purpose (yeah, LP, yadda). But the situation with the SDK and "tool support" has been the same in OGL for years, and it's a free world; on Windows at least you have the choice.

As for the other OSes, links have been presented before showing that Apple does have some OpenGL development info. And most Linux developers are more the hacker, web-search kind of guys anyway, not spoiled by company-driven SDKs ;)

PaladinOfKaos
08-20-2008, 03:28 PM
And most Linux developers are more the hacker, web-search kind of guys anyway, not spoiled by company-driven SDKs ;)

Yeah. We don't need no stinkin' SDKs </mobster voice>

Back in reality... Does the SDK really matter that much at the professional level? Even if college students started defecting in mass numbers to OGL, do you think the managers at EA and all the other big game companies would decide "Oh, we should switch to OGL"? I certainly don't. At best, an SDK will entice indy developers and hobbyists, who may (and I have no statistics to back this up, it just makes sense to me) be targeting OGL already so they can get Linux and OSX support almost for free - for an indy developer, every possible eyeball counts.

I do think a free (or at least cheap) OpenGL debugger would be totally awesome. I doubt it will happen without a community push, though (and between work and several personal projects, I really don't have time to pick up the mantle of making an OGL debugger). There is a free shader designer (not ShaderDesigner... I can't think of its name right now) that's cross-platform. Really, what's missing is a debugger.

dor00
08-20-2008, 03:48 PM

Back in reality... Does the SDK really matter that much at the professional level? Even if college students started defecting in mass numbers to OGL, do you think the managers at EA and all the other big game companies would decide "Oh, we should switch to OGL"? I certainly don't. At best, an SDK will entice indy developers and hobbyists, who may (and I have no statistics to back this up, it just makes sense to me) be targeting OGL already so they can get Linux and OSX support almost for free - for an indy developer, every possible eyeball counts.


Your logic fail.

Korval
08-20-2008, 03:52 PM
Your logic fail.

Oh no, don't try to rebut or refut his statements or anything. That might lead to an actual dialog or something, and we certainly can't have that.

PaladinOfKaos
08-20-2008, 04:04 PM
Alright, I probably didn't say that very eloquently. Lets' try it again...

Big companies like EA don't want to port their engines to OpenGL. They don't care how nice the tools get, or how many new developers know it. To them, it's not worth the time or effort until OSX and/or Linux have greater market share (OSX is up around 5%-8%; I couldn't find anything that looked like a reasonable number for Linux, probably 3%-5%. I have no idea what sort of numbers EA wants before they think it's worthwhile).

Taking that point, a fancy SDK for OpenGL would attract developers who don't already have a vested interest in DirectX. That would be indy developers and hobbyists, and any development companies that choose to re-write their engine from scratch rather than keep updating it.

Indy developers know full well they can't compete with big-name publishers with massive budgets on Windows. So to them, targeting OSX and Linux gives a market boost, especially since Linux and OSX gamers love it when they actually get attention from someone, and tend to watch the 'net for any sign of new games for their beloved platforms.

Since Indy developers already have an incentive to target OpenGL, does the SDK really matter to them? Maybe. Might be the tipping point, might be icing on the cake.

So, the only people that an SDK would be targeting are students and hobbyists. Not much incentive on NVIDIA's or ATI's part to make it.


Arguably, a DX SDK is the same. Only really targets students and hobbyists. But there's one difference: Since big-name companies use DX, a nice DX SDK is a way of saying "Look at what the new version can do! Look at the effects you can do on our hardware!". Now the developers might know that those effects aren't hardware-specific, but management will get all dreamy-eyed and strike a marketing deal with NVIDIA or ATI, to optimize for their hardware.


I probably still rambled a bit more than I needed to, but even though it's longer now I think (hope) it's clearer.


EDIT: And that should be "Your logic fails"

bobvodka
08-20-2008, 05:41 PM
The SDK matters BECAUSE of the students and newbies who want to learn. If people stop coming down that channel then the requirement for supporting OpenGL also dries up, the more it dries up the less it matters and so it all falls apart.

For the past year or two I've been advising people against learning 3D via OpenGL, because the DX SDK and supporting materials are that much better AND free, which matters to people. OpenGL, on the other hand, has a couple of expensive books and a loose collection of webpages (including NeHe *shudders*).

Windows is by far the biggest PC platform, and by extension probably has the largest number of hobby developers developing for it. If they have no reason to learn OpenGL then they might not bother with the effort to learn it, and thus won't port their games to OS X and Linux, and thus gaming on those platforms decreases as well.

Granted, I'm not saying this will happen tomorrow, but with XNA getting a decent following and its link to DX, Windows and 360 dev is likely to be a focus for many; game devs are dreamers, and the dream is that if you can only get 1% of that XB360 install base pie then you are looking at some nice $$$.

So, yes, an SDK matters... hell, if they didn't matter, why would MS, AMD and NV put the time, effort and money into them for DX?

Korval
08-20-2008, 06:01 PM
I see a lot of people asking, begging for an SDK. But none of you are willing to lift a finger to do anything about it.

No coherent tutorials. No centralized repository of demos and sample programs. Nada.

Unlike improving OpenGL itself, these are not things it would take the ARB to do. In fact, considering the ARB's general ineptitude, it would probably be better for all involved if they didn't do it.

The ARB has shown a clear unwillingness to make anything that a reasonable person would call an SDK. So, are you willing to do it?

I might be willing to join an effort to do so, but I have certain... idiosyncrasies that would make that complicated. Mainly that, because I'm forced to work with the worst code on earth as part of my daily job, I'm very particular about the code I work with for my hobby. For one, I will not program without C++ and Boost.

However, I could help out in terms of documentation and other writing-based tasks for OpenGL.

PaladinOfKaos
08-20-2008, 07:31 PM
Korval: I have no problem with C++ and Boost (that's my preferred combo) for the SDK examples, although that might add unnecessary complexity (and punting people off to the Boost docs defeats the whole purpose of a unified SDK). Since all I got was naysayers in my thread on the subject, I didn't really think there would be any chance of a community-led SDK effort. You're a pretty big name around here; maybe you can pull some more people over to that way of thinking if you get on board with the idea.


I don't want my post earlier to be taken as an argument _against_ having an SDK. I was trying to look at it from the perspective of the IHVs that make the SDKs for their hardware. I've long given up on an ARB-led SDK effort. I just now realized I didn't make that even remotely clear.

Brolingstanz
08-20-2008, 08:37 PM
A fully fledged, vendor funded and maintained SDK is probably asking for too much.

A community funded SDK already exists, only it's not available in one convenient grab.

If a motivated group wants to put the effort into coalescing all the material out there into a single tidy download, the least I can do is download it.

dor00
08-20-2008, 10:23 PM
Why don't you people want to understand the reasons why we need an SDK??

Why are you treating newcomers like that?

Out of the few developers using OpenGL, you want to have even fewer???

Why has Microsoft understood that, and the potential of newcomers, and polished its SDK over and over??

An SDK comes with an API version. And it must be written by the people who made the API. They know better than everybody else what the API can do. Community links DON'T count here. Yes, it is nice to have them. But the standard comes from the SDK.

OpenGL 3 is only on paper right now; probably that is how Khronos/ARB think an API should be. They don't even care about it, in my opinion.

When I look at the DX SDK, they have given users everything they need to start using the API.

All of you look at OpenGL 3 very subjectively. Everybody has different expectations of OpenGL 3, according to their knowledge level. But the reality is... year 2008, version 3, 2 new papers, and that's all. Have fun expecting games/3D stuff from newcomers, as if the 3D world is reserved only for some people; let's kill off the rest.

Mars_999
08-21-2008, 12:19 AM
This SDK idea is great, but first things first: you will need to use a platform-independent framework, so anyone who wants to help code tutorials can get this empty skeleton framework and dump their code into it, minimizing the time it takes to code something and keeping things somewhat standardized...

or

we can all agree to use GLUT for tutorials and use BOOST for file handling? I am also for C++. Myself I don't use BOOST other than the filesystem portion.

And could OpenGL.org dedicate a section to tutorials??? And allow either moderators or someone to look over the code and post these tutorials on that site?

dor00
08-21-2008, 12:31 AM
If the SDK is not cross-platform, we can stop talking here.

GLUT is so old and so deprecated.

As for Boost, I don't think it is a good idea to use it.

An SDK must use minimal external libraries, or none. Yes, you can use some for, let's say, importing tools.

But as I said, the SDK must be made by the API creators. That way, they can also test the API and their ideas, and they should know better than everyone else how to explain the API.

Korval
08-21-2008, 12:31 AM
Why you people don't want to understand the reasons why we need a SDK??

Nobody's saying that we don't need one. But the simple fact is we aren't going to get one! Not unless someone who is not part of the ARB writes it.


As for BOOST, i dont think is a good idea to use it.

The only reason I brought up Boost (and it's not in ALL CAPS) is because I refuse to code as a hobby without it. And that's not even my most significant idiosyncrasy with regard to coding on my own time.

I don't think it's a good idea to force users to make a huge download of Boost plus the installation time to use the GL SDK. But I won't contribute to the code without boost::shared_ptr, boost::bind, etc. I simply have better things to do with my personal time than code in a style that I don't like. Which was my point: I won't be contributing code to an SDK for that reason.


the SDK must be made by the API creators. In that way, they can test the API and their ideas also, and they must know better that everyone else how to explain the API.

Well, they're not going to make one, so it's best to let go of that dream and move on. If you feel OpenGL needs an SDK to survive, the ARB isn't going to get one done, so it's obvious that someone else must.

dor00
08-21-2008, 12:49 AM
Well, they're not going to make one, so it's best to let go of that dream and move on. If you feel OpenGL needs an SDK to survive, the ARB isn't going to get one done, so it's obvious that someone else must.

I am trying to understand what the ARB's difficulty is (or was) in making a better OpenGL.

Why is it updated so slowly? There are so many "why"s...

dor00
08-21-2008, 12:53 AM
Isnt "FUN" how Khronos group is working?

Participation Type Cost (USD)
Promoter Membership $25,000 annually
Contributor Membership $7,500 annually
Academic Membership $1,000 annually

http://www.khronos.org/members/join/

So, basically, if you have $25k you can be a Promoter.

"Promoter Members act as the "Board of Directors" to set the direction of the Group, with final specification ratification voting rights and the right to designate a Director to the Board. There are limited openings at this level of membership and applications must be approved by the Khronos Board of Directors. Companies are generally requested to participate for 6-12 months at the contributor level first."

Don't Disturb
08-21-2008, 03:10 AM
This is incorrect. The size and position of the blit rectangle are definitely used.
In theory you're right; in practice (NVidia drivers 175.16 and 177.89) you're wrong. Try it. Then go and shout at your driver writers ;)

Rick Yorgason
08-21-2008, 05:08 AM
we can all agree to use GLUT for tutorials and use BOOST for file handling? I am also for C++. Myself I don't use BOOST other than the filesystem portion.
GLUT (or FreeGLUT) isn't too bad, since it was made specifically for official OpenGL docs. Although it would be nice if the SDK also had some OS-specific articles on how to set up your window, since that's how OpenGL is actually used in the wild. OS-specific ways of doing input and image file parsing and such aren't really necessary, since none of that is tightly bound to OpenGL.

Boost is much more intrusive, although an OpenGL SDK that uses Boost is better than nothing. After all, Boost is a damn good set of libraries. C++ TR1 also supports much of what you may be using Boost for, so if you're using one of the latest versions of GCC or Visual Studio 2008 [Express] SP1, then you may not even need to download Boost. (Unfortunately, this doesn't include the file management stuff -- that's coming in TR2 -- but I'm not sure that would really be useful in an OpenGL SDK anyway.)

dor00
08-21-2008, 05:54 AM
I don't agree about Boost.

You can use it for whatever you want, but not for the SDK.

PaladinOfKaos
08-21-2008, 06:55 AM
We can argue about the usefulness of Boost until the cows come home. I personally think boost::filesystem::path makes its inclusion worthwhile in every project. And we can always copy the boost headers we use.

Before we even think about finalizing language and toolset, we should finalize exactly what we're writing. We can do this on the forum, or on IRC, MSN, or some other more real-time chat. But we need a list of needed tutorials and examples, so that if someone wants to contribute it's easy. Once we have that, we can argue about the boilerplate code for a while, and then actually get started.

bobvodka
08-21-2008, 06:58 AM
And this is why an SDK won't happen; within a few posts of the idea being floated (again), people are already split along code lines, never mind presentation style and content.

This is why it took a team at gd.net over 2 years to rewrite the first couple of tutorials; they got bogged down in code.

For my part, well, I wrote two articles on using the FBO extension over on gd.net last year. I had a third planned, but as I'm now only lurking around opengl.org for the post-GL3.0 aftermath, I doubt it will ever happen. Also, and this is another issue, I have a day job which takes up most of my time; when I get in, all I want to do during the week is veg out playing games, and at best I'll poke at some C# on the weekend for a while. I dare say I'm not the only one in this situation.

PaladinOfKaos
08-21-2008, 07:47 AM
I prefer to think that we can sort it out. It's just a matter of having someone to give direction, and a solid timetable for things like language decision. And if people keep arguing about it, I'll write the boilerplate myself and anyone who really cares about getting an SDK made will hopefully understand and still help.

@bobvodka: I think your first two tutorials were totally awesome. The best info on FBOs I managed to find.

pudman
08-21-2008, 09:44 AM
It's just a matter of having someone to give direction, and a solid timetable for things like language decision.

You forgot the need for people that will follow the direction and have the time and willingness to do so.

The benefit of having the ARB/IHVs do it is that someone would get *paid* to have the time and willingness.

Michael Gold
08-21-2008, 11:43 AM
This is incorrect. The size and position of the blit rectangle are definitely used.
In theory you're right; in practice (NVidia drivers 175.16 and 177.89) you're wrong. Try it. Then go and shout at your driver writers ;)
Confirmed that this bug exists in those drivers and is fixed for a future driver update.

PaladinOfKaos
08-21-2008, 11:46 AM
@Michael Gold: Any chance that fix will be in the first *nix releases with GL3, or is that classified information?

Mars_999
08-21-2008, 01:39 PM
Ah, now I see, the correlation between BobVodka and Phantom... ;) Nice to see you still visit after switching to DX10.

bobvodka
08-21-2008, 02:40 PM
As I said, still hanging around for the post-GL3.0 aftermath.. also I need something to read while at work :D

HenriH
08-21-2008, 03:46 PM
The OpenGL specification is written for the C language, and I think C should be used for the SDK tutorials too. C is also quite nice, while C++ can be complicated for newcomers. GLUT (as in FreeGLUT, which is maintained) is a good choice for the portability framework; another good one is SDL.

Boost is not a good idea IMO. You should keep the tutorials simple, clean and right to the point.

These are my thoughts...
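As a concrete (and entirely illustrative) example of the kind of C/FreeGLUT tutorial skeleton being proposed here, a minimal program might look like the following; the window title and callback name are placeholders, not part of any actual SDK.

#include <GL/glut.h>   /* FreeGLUT ships the same header */

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* tutorial-specific drawing would go here */
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Tutorial skeleton");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}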

Rick Yorgason
08-22-2008, 01:16 AM
It's true that, ideally, an SDK should impose the fewest prerequisites possible, which precludes Boost and even C++. But like I said, if that's the only way this project can get off the ground, then it's unquestionably better than nothing.

If the project manages to get off the ground, then it would be worth considering reediting the articles to remove the C++/Boost stuff.

If somebody can make a really good OpenGL SDK and post all the articles online, then they could probably make enough money off of advertising to justify spending a day a week on improving it.

Edit: I see there's been significant discussion about this over here (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=242950&fpart=1). Pretend I never responded to the splinter discussion in this thread >:)

PaladinOfKaos
08-22-2008, 07:51 AM
What was that? Did someone respond to the splinter discussion? I thought someone did, but now I'm not sure :P

For the record, if anyone else feels the urge to comment:
If it's constructive, please post it in the thread Rick just linked to. If it's not constructive, please refrain from posting it. We're trying to have a serious discussion about this SDK, not a flamewar. If it's to complain about how horrible the ARB "SDK" is (and I use those quotes with a certain malice), feel free to send a mass email to the ARB members, with special attention to the Ecosystem guys.

PkK
08-22-2008, 11:47 AM
Alright, I probably didn't say that very eloquently. Lets' try it again...

Big companies like EA don't want to port their engines to OpenGL.

Activision-Blizzard is bigger than EA. Activision-Blizzard is on the ARB (Khronos OpenGL contributor).

Philipp

Santi
08-22-2008, 03:47 PM
I have a question... when available, if I install a compliant OpenGL 3 driver on a computer, will OpenGL 2 apps still run??

To put it another way: can OpenGL 2 and OpenGL 3 be installed on the same computer?

DirectX allows the oldies to keep running... apps using older DirectX libs and classes still available from newer releases.

This is the biggest problem I see in all this mess.

Korval
08-22-2008, 04:35 PM
when available, if I install a compliant OpenGL 3 driver on a computer, will OpenGL 2 apps still run??

Yes.

In order to even get a GL 3.0-compliant rendering context, you must call a special extension function (from WGL_ARB_create_context / GLX_ARB_create_context). Otherwise, the context you get will be 2.1 (or lower).
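To illustrate the point, here is a hedged sketch of requesting a 3.0 context on Windows through that extension; hDC and the fetched function pointer are assumed to be set up by the application (the entry point is obtained with wglGetProcAddress), and error handling is omitted.

#include <windows.h>
#include <GL/gl.h>

/* Attribute tokens from the WGL_ARB_create_context spec. */
#define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB 0x2092

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int *);

HGLRC create_gl3_context(HDC hDC, PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB)
{
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        0   /* the attribute list is zero-terminated */
    };
    /* A plain wglCreateContext, with no attribute list, still hands back a 2.1-or-lower context. */
    return wglCreateContextAttribsARB(hDC, NULL, attribs);
}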

Michael Gold
08-22-2008, 08:42 PM
To put it another way: can OpenGL 2 and OpenGL 3 be installed on the same computer?

This is the biggest problem I see in all this mess.

Not only can 2 and 3 co-exist on the same computer... they can co-exist within a single application! If/when a future version breaks backward compatibility, legacy code can continue to run, and you can incrementally add new code which takes advantage of the new version. Switch between rendering pipelines with MakeCurrent. (This was the backward compatibility plan originally developed for Longs Peak, btw... and was approved by the same evil CAD vendors who are incorrectly credited with the ARB's change of direction.)

If this is your biggest concern with 3.0, then I guess it's not so bad after all.

ScottManDeath
08-22-2008, 09:20 PM
So if not the CAD guys, who are then the culprit(s)?

Michael Gold
08-22-2008, 09:52 PM
So if not the CAD guys, who are then the culprit(s)?
Why, the ARB of course. And that's all you're getting from me, unless you want to write me a really, really big check. ;)

But seriously, Barthold already explained what happened here (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Board=12&Number=243307&Searchpage=1&Main=47703&Words=barthold&topic=0&Search=true#Post243307).

Leadwerks
08-23-2008, 12:09 PM
Okay, but it still boils down to whether or not AMD will produce working OpenGL3 drivers. Until that happens, the situation is exactly the same as before; only NVidia fully supports OpenGL. If AMD can be counted on to do this, applications can start to be written now using NVidia's drivers, and they can just say "Requires OpenGL3 hardware/drivers" until AMD gets it together.

I have noticed ATI seems to have been putting more effort into their OpenGL drivers since shortly before the AMD acquisition. They have been good about processing bug reports, in my experience, but there is still a long way to go before their drivers are OpenGL3-compliant.

Rob Barris
08-23-2008, 02:56 PM
http://www.khronos.org/developers/librar...BOF%20Aug08.pdf (http://www.khronos.org/developers/library/2008_siggraph_bof_opengl/Vendor%20Announcements%20SIGGRAPH%20BOF%20Aug08.pdf)

AMD has committed to producing OpenGL 3.0 drivers including the GL3 extensions such as geometry shader and instanced rendering.

knackered
08-23-2008, 03:58 PM
Have they made any quality commitments?
I'm not being facetious, I really wonder if they're going to be serious about supporting GL3. Is this profile stuff going to mean better ATI drivers? Have they said that?

Jean-Francois Roy
08-23-2008, 04:51 PM
*Shameless plug*

I have posted a list of forward-compatible core OpenGL 3.0 entry points on my blog at http://www.devklog.net/2008/08/23/forward-compatible-opengl-3-entry-points/. I will be continuing to improve that list by adding function sub-groups and additional deprecation notices related to deprecated constants.

Rob Barris
08-23-2008, 05:12 PM
Have they made any quality commitments?
I'm not being facetious, I really wonder if they're going to be serious about supporting GL3. Is this profile stuff going to mean better ATI drivers? Have they said that?

It seems to me that these types of questions can't really be answered until you have a release in your hands to evaluate, since they are mostly subjective. I'm looking forward to seeing what AMD comes up with as well. Since they mentioned a series of beta releases between now and their Q1 2009 goal for completion of GL3, there will likely be a few opportunities to generate feedback and test cases as we go.

Re profiles and subsetting, no profiles are yet defined beyond the base profile. Note that vendors cannot make up profiles, only the Khronos working group can. So no, I don't see profiles having any effect on the first round of 3.0 implementations.

PaladinOfKaos
08-23-2008, 05:15 PM
Thanks, that'll be quite useful to a lot of people =)

EDIT: This is targeted at the non-deprecated GL3 entry-point list. Stupid cross-posting...

Leadwerks
08-23-2008, 05:41 PM
Well then, my seething rage is starting to lessen, a little bit. My personal experience dealing with AMD reinforces what this slide says. Their commitment to OpenGL now appears to be much better than their commitment to OpenGL one year ago, at least.

Apparently, Intel is "excited" about OpenGL 3.0, but that doesn't really mean anything. No matter; their built-in chips wouldn't be able to run our engine anyway. I think Intel's "future platforms" probably means their Larrabee chip.

If in q2 2009 I can revamp our renderer and get rid of most of our crazy fallbacks and hardware limitations, that would be a pretty good situation.

mbien
08-23-2008, 06:08 PM
is there a machine readable list of all deprecated functions and enums available? (like a simple text file or header files without the deprecated functionality?)

dor00
08-26-2008, 12:33 AM
Overall, after analyzing everything about OpenGL 3, all i can say is:

Thanks to peoples who make it possible. Yes, maybe is not perfect, or what some of us expected, but is here. I think ARB members did a great job. In the begining i was a bit angry/confused, but now i realize that having OpenGL 3 is better that nothing.

I want to apologize if i was a bit aggressive.

Eddy Luten
08-26-2008, 06:21 AM
Anybody read/link this yet? => http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf

Leadwerks
08-26-2008, 12:14 PM
Yeah, I am learning about the Larrabee stuff. There's supposed to be a dev kit out soon from Intel. I'm very interested in this. My gut says it won't work out, but I hope it does. I want to see DirectX and OpenGL get wiped out.

Y-tension
08-26-2008, 12:46 PM
I will add my 2 cents as well. Nothing that has not been voiced already, just had to reinforce the message myself as well. OpenGL3 is an utter failure. Plain and simple. There is no excuse. They could have opted to better support GL2.1 with extensions(All DX10 extensions are only available on NVIDIA chipsets). It would be the same thing, and everyone would be happy. Really, GL 3.0 is nothing new compared to what we already had. So, please ARB, it's not too late, apologize to people, start communicating again and give what you promised. All this 'fanfare for nothing' at the main site is only making people more disappointed at how underrated their (the people's, not the ARB's) intelligence is. I am but an amateur game programmer but it is plain simple that anyone serious about that will switch to Direct3D sooner rather than later. so, hurry ARB...you may still have some eggs unbroken in the end.

Oh, and one more thing that really unnerved me. I really appreciate, given the current state of the API, that people still work on it. If people could refrain from calling efforts like GL_EXT_direct_state_access a joke it would make things easier for people who are making the effort. I, for sure, despise it when my work is discarded as useless just like that... it just extinguishes any incentive.

pudman
08-26-2008, 01:42 PM
OpenGL3 is an utter failure.

If people could refrain from calling efforts like GL_EXT_direct_state_access a joke it would make things easier for people who are making the effort.

It's ok to criticize GL3.0 but not GL_EXT_direct_state_access? Weird.

Korval
08-26-2008, 01:47 PM
It's ok to criticize GL3.0 but not GL_EXT_direct_state_access? Weird.

That could be because DSA goes farther than GL 3.0 in providing what Longs Peak promised.

Rob Barris
08-26-2008, 03:36 PM
Longs Peak aimed to solve the "bind to edit" issue in OpenGL, by eliminating "edit" - and providing only immutable objects. If you can't edit objects, yep, no worries about bind to edit any more.

DSA aims to solve "bind to edit" by removing the need to bind an object to the state vector (context) before altering it.
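To make the contrast concrete, here is a minimal sketch (the texture name "tex" and the parameter are just placeholders; the direct entry point is the one from GL_EXT_direct_state_access):

/* GL 2.1 "bind to edit": the context's texture binding is disturbed */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* DSA: edit the named object directly, no bind required */
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);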

knackered
08-26-2008, 03:39 PM
It's nvidia's last-gasp attempt at forcing an object model on OpenGL. Like, OK, you won't let us have LP, so we're going to copy and paste some kind of object model from the existing API. It's appreciated, but it really highlighted what a joke 3.0 itself became. I feel sorry for nvidia.

Brolingstanz
08-26-2008, 03:40 PM
I am but an amateur game programmer, but it is plain to see that anyone serious will switch to Direct3D sooner rather than later. So hurry, ARB... you may still have some eggs unbroken in the end.

Seems pretty clear to me that anyone serious, or at least a bit curious, would have / should have / could have tested the API greenness on the other side some time ago. For the rest of the graphics development community, I'd say things turned out pretty darn good, as GL3 turned out pretty much the way they, the majority, wanted it to - to the possible chagrin of some, the minority, game coders.

Clearly the big question right now is when do we get stable drivers from the "big 3", and to a lesser extent how this deprecation/profile model will play out in practice. This first question is the only interesting one to me, since whether or not vendors want to drop functionality is entirely up to them... no skin off my nose either way.

bobvodka
08-26-2008, 04:26 PM
I wonder who this 'majority' is, because from my understanding this wasn't a mass 'no to 3.0' vote but a case of 'we aren't getting it done on time, let's release 2.2... oh, hi Mr. Khronos PR, what's that? Call it 3.0? OK...'.

Michael Gold
08-26-2008, 04:37 PM
Longs Peak aimed to solve the "bind to edit" issue in OpenGL, by eliminating "edit" - and providing only immutable objects. If you can't edit objects, yep, no worries about bind to edit any more.

Not quite true, Rob. State could not be edited - but data could. For example, if you want to read or write buffer data, GL2 requires that you bind the buffer first. LP did not.
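For example, a minimal sketch of the buffer case (buf, size and data are placeholders; the named entry point is the GL_EXT_direct_state_access spelling, shown only to illustrate the difference):

/* GL2: must bind the buffer before writing its data */
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);

/* DSA: write to the named buffer directly */
glNamedBufferSubDataEXT(buf, 0, size, data);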

Brolingstanz
08-26-2008, 04:41 PM
Perhaps they did yield to an irresistible sense of inevitability ;-)

Korval
08-26-2008, 04:46 PM
Longs Peak aimed to solve the "bind to edit" issue in OpenGL, by eliminating "edit" - and providing only immutable objects. If you can't edit objects, yep, no worries about bind to edit any more.

DSA aims to solve "bind to edit" by removing the need to bind an object to the state vector (context) before altering it.

Right. And GL 3.0 takes no steps towards the LP method in any way compared to 2.1. So like I said, DSA goes farther than 3.0 in achieving what LP was trying to do.

Rob Barris
08-26-2008, 06:29 PM
Just for clarity, we set a constraint for 3.0 that virtually all functionality being integrated into the core for the Aug 08 release already had to exist as a well-defined extension or implementation.

At the time that course was set, there was no DSA extension available. (Now there is, and so now we can have a discussion about whether it would make sense to use it, or a 3.x-savvy derivative of DSA, in the next release.)

This isn't to say that the constraint for inclusion will stay as high as it is indefinitely, but it was in effect for the 3.0 release in order to meet schedule commitments.

V-man
08-26-2008, 06:37 PM
and to a lesser extent how this deprecation/profile model will play out in practice. This first question is the only interesting one to me, since whether or not vendors want to drop functionality is entirely up to them... no skin off my nose either way.

It's up to the vendors to drop features? Nah, I don't think so. Khronos should decide. There are a bunch of things marked deprecated, so I guess in the future, if you create a 3.1 context, the deprecated stuff won't work.
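For concreteness, a minimal sketch of how a versioned context is requested through ARB_create_context (WGL flavour; the token values are from the WGL_ARB_create_context spec, and a valid HDC plus an already-current dummy context for wglGetProcAddress are assumed):

#include <windows.h>
#include <GL/gl.h>

#define WGL_CONTEXT_MAJOR_VERSION_ARB           0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB           0x2092
#define WGL_CONTEXT_FLAGS_ARB                   0x2094
#define WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB  0x0002

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int *);

/* Returns a forward-compatible 3.0 context, in which deprecated features
   are not available; returns NULL if the extension is missing. */
HGLRC create_gl3_context(HDC hdc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL;

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0
    };
    return wglCreateContextAttribsARB(hdc, NULL, attribs);
}

The same mechanism is presumably how a future 3.1 context would be requested; whether the deprecated entry points actually disappear there is up to the spec and the drivers.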

Brolingstanz
08-26-2008, 06:56 PM
Sure... Khronos as a body determines the deprecation policy, but it's ultimately up to the vendors to remove the code from their drivers... and as I understand it they can continue to expose deprecated/removed API by way of extensions.

I don't think the vendors will need a fire lit under them before they hot-foot it on down to the deprecation/removal office with a donation ;-) But they do have some options...

CatDog
08-27-2008, 01:45 AM
For the rest of the graphics development community, I'd say things turned out pretty darn good, as GL3 turned out pretty much the way they, the majority, wanted it to - to the possible chagrin of some, the minority, game coders.
Agreed.

I've been lying on a beach for two weeks now, with no internet at all. What a good decision! In the meantime, around 150 pages of (mostly) rant have been produced. Wow!

No object model? It looked like a nice idea, but obviously it didn't work out. Shit happens. I like the fine-tuned buffer mapping. And I like the deprecation model. The latter is the key to evolution of the API, instead of just bloating it like before. What I like the most is that it opens up the possibility to shift smoothly from 2.1 to 3.0 (and up). Backwards compatibility was always one of the main features of GL, especially compared to DX.

The only thing that really bugs me is: why on earth wasn't this done a year ago? I hope it's not too late. Now everything depends solely on the IHVs, the quality of their drivers and their will to push things forward. Things are prepared.

CatDog

Y-tension
08-27-2008, 04:22 AM
It's ok to criticize GL3.0 but not GL_EXT_direct_state_access? Weird.

That could be because DSA goes farther than GL 3.0 in providing what Longs Peak promised.

Thank you very much...

The only thing that bothers me is that DSA still uses object identifiers instead of the object pointers (if I remember this correctly) that were supposed to enter GL3 initially. I still don't see how the deprecation model will manage to introduce such drastic changes by itself... well, we'll see how this turns out.

CrazyButcher
08-27-2008, 06:06 AM
Y-tension, the object identifiers are sort of pointers now, since you must use glGen to get them; you cannot make up your own, like in the past (that is part of GL 3.0 already).
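A minimal sketch of that change (a buffer object is just the example here):

GLuint mine = 42;
glBindBuffer(GL_ARRAY_BUFFER, mine);   /* GL 2.x: binding a made-up name created the object */

GLuint buf;
glGenBuffers(1, &buf);                 /* GL 3.0: names must come from glGen* */
glBindBuffer(GL_ARRAY_BUFFER, buf);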

Y-tension
08-27-2008, 07:59 AM
Yep, I know; it's just that having actual pointers was supposed to be faster... OK, OK, I quit whining.

pudman
08-27-2008, 08:07 AM
No object model? It looked like a nice idea, but obviously it didn't work out.

Yes, but one should ask WHY it didn't "work out". For two years the ARB cranked on LP. Then they decided: we're not moving GL forward AT ALL; let's restart, slap extensions into the core and add a deprecation model. So in three years (assuming non-beta drivers from nvidia AND AMD by Q1 '09) GL as an API has barely progressed in terms of usability and efficiency.

The issues are that 3.0 did nothing for API bloat, did not simplify driver writing, and delivered on none of its promises.

Also, 3.0 is not strictly backwards-compatible. It requires DX10 level hardware. LP did not have this restriction.

I'd also like to emphasize that the deprecation model does nothing for API bloat in 3.0. There's a *possibility* that future versions will trim the deprecated features. 3.0 leaves it all in there, so the net effect is a LARGER API, given that extensions were added to it.

If someone was new to OpenGL as of July and got familiar with 2.1, then 3.0 would seem like an improvement. But with knowledge of the historical actions taken (and especially *not* taken) by the ARB as well as the promises, timescale of those promises, and the incredible Silence, you'd be pissed too.

No one has yet commented on it here so I'll just point out that the SIGGRAPH BoF did not address the Great Silence. (No surprise, guess it will become the Great Mystery.)

CatDog
08-27-2008, 10:16 AM
The issues are that 3.0 did nothing for API bloat, did not simplify driver writing, and delivered on none of its promises.
If you judge "API bloat" by the size of the header file, you're right. But that is of no interest, I think. What really matters is that you now can and should avoid deprecated features. These are only there to provide support for older software. So you've got it there, in black and white: DON'T USE THIS ANYMORE. That's what I wanted for ages.

Does this simplify driver writing? I don't know... but what I do know is that nVidia already came up with their first beta driver! This seems to be due to the fact that they could carry over their codebase in its entirety. Do you think that would have happened with Longs Peak as well?

And promises... what promises? They tried to build a new fancy API and failed. Reasons known (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Main=47703&Number=243307#Post243307). These are technical reasons! Maybe they are much more disappointed themselves than you think.


Also, 3.0 is not strictly backwards-compatible. It requires DX10 level hardware.
Surprise!? It uses DX10 features, so it needs DX10 hardware. An application written against GL2.1 won't run on GL1.5 hardware. But I can compile my old GL1.5 application using the GL3.0 API. So where is the difference? Maybe I'm missing something?

GL3 exposes DX10 features (in particular on XP!). That's good, isn't it?

Oh yes, I am pissed by the Great Silence. But this shouldn't influence the appraisal of what we've been offered now.

CatDog

bobvodka
08-27-2008, 12:20 PM
Considering what was on offer and what we got, yes, it should influence the appraisal. It's like a car salesman telling you he'll exchange your 4-door car for a supercar, going away for 10 months without returning phone calls, and finally turning up with your old car tuned up a bit and given a polish. Sure, it's nice to have the car back and the tune-up is all well and good, but where is the supercar?

As for NV releasing drivers, there is a good reason why; 99.9% of what is 'new' in OpenGL3.0 they had in extensions for ages now. So they didn't have to do much of anything.

AMD/ATI, on the other hand, have had poor GL support and have NOT had those functions in their code base for eons; afaik they don't have GL3.0 drivers, even in beta, yet. Why am I not shocked?

Korval
08-27-2008, 12:31 PM
And promises... what promises? They tried to build a new fancy API and failed.

Yes, and in failing to do what they promised, in every newsletter published thus far, they broke their promises.

At no time did the ARB ever suggest that LP was only a "possibility." They led the community to believe that it was a "come hell or high water" certainty. Even if it didn't assume the exact form that LP was shown to be, the ARB never once suggested that they would simply dump the idea of a new API and go back to square one.


GL3 exposes DX10 features (in particular on XP!). That's good, isn't it?

No, it is not. OpenGL has had that for years through extensions. That an extension is promoted to core is functionally meaningless; the functionality was already provided.

Even without GL 3.0, ATi would have provided implementations for those functions. And that is because they wouldn't want to hear millions of Starcraft II players asking why their newly purchased game pops up with a message saying, "Sorry, your graphics card doesn't support this game."


afaik they don't have GL3.0 drivers, even in beta, yet. Why am I not shocked?

Starcraft II isn't out yet. They'll have GL 3.0 drivers for its release. And that's the only reason for them having GL 3.0 drivers of any kind.

Gauss
08-27-2008, 01:28 PM
In my opinion, OpenGL 3.0 is by far not the big failure that many people here seem to believe it is.

A lot of functionality that up to now was only available through NVIDIA's G80 extensions is finally part of the core. It seems to me that many people here overlook the large advantage this has.
This means that all this functionality will definitely show up in AMD's drivers soon, and also in Intel's upcoming hardware that is able to support it. For marketing reasons alone they will have no choice but to support this functionality in OpenGL, in order to be able to put the "Supports OpenGL 3.0" phrase on their box.
If extensions are only available from one vendor, that is nice for prototyping purposes or for hobby programmers, but extensions only become valuable for a product once they are supported by at least AMD and NVIDIA.

A lot of functionality is deprecated, and they have specified a way to create a context in which this deprecated stuff cannot be used. If I really want to, I can ignore all the deprecated functionality and work with an API that is pretty lean and mean.

I would also have preferred it if they had come up with an OpenGL 3.0 that satisfied all the promises they made, but if I had to choose between what we have received now and a version that satisfies all the promises but is only available three years from now, I definitely think the ARB made the right decision.

However, there is one point where I believe the ARB made a big mistake. I think there should be - besides the full OpenGL 3.0 spec - also a forward-looking OpenGL 3.0 spec without all the deprecated stuff (a spec that does not contain the deprecated stuff at all, not just a spec where the deprecated functionality is written in another color). If the ARB had presented such a spec, I am pretty sure that many people here would be happier with what the ARB has presented.

CatDog
08-27-2008, 01:30 PM
As for NV releasing drivers, there is a good reason why; 99.9% of what is 'new' in OpenGL3.0 they had in extensions for ages now. So they didn't have to do much of anything.
Of course, that's what I said. But what is so bad about it? Sounds like you blame them for being clever and doing their homework.


No, it is not. OpenGL has had that for years through extensions. That an extension is promoted to core is functionally meaningless; the functionality was already provided.
What kind of logic is that? The whole point of an API is to define *the way* functionality is provided. Therefore promoting extensions to core is one of the most important actions, since only then does this functionality become part of the specification.

Now there is also an elegant method for removing deprecated functionality. I hope they make use of it.


Starcraft II isn't out yet. They'll have GL 3.0 drivers for its release. And that's the only reason for them having GL 3.0 drivers of any kind.
Maybe you're right. But do you really think ATI's attitude towards OpenGL would be more constructive, if they had to implement a new API *from scratch*, namely Longs Peak?

*edit*

not just a spec where the deprecated functionality is written in another color
I only saw the one that emphasized the *new* stuff! There's a document with the *deprecated* stuff in another color? Where?

CatDog

pudman
08-27-2008, 01:36 PM
What really matters is that you now can and should avoid deprecated features. These are only there to provide support for older software. So you've got it there, in black and white: DON'T USE THIS ANYMORE. That's what I wanted for ages.

I think a new API that didn't include those features would have given you exactly the same thing with the added benefit of not even including those features you shouldn't use.

I personally have been avoiding most of those deprecated features anyway. I didn't need it explicitly spelled out by the ARB to know that the FFP is going to go away someday.

I like bobvodka's car analogy. I think many of us could come up with some fun ones:

It's like the plumbing in my house. It's been there since the house was built, and the only toilets I can install are MegaForce Flushers because they are the only company that makes pipe adapters for my old plumbing system. I would really love to get rid of all that old piping (it's rusty, leaky, takes up a lot of space, etc.) and be able to install those CrossFlusher X2 toilets I've been reading all about. Luckily, for a while now my plumber has been telling me all about his plan for redoing my piping. It's awesome! All copper, no more adapter issues, and I don't even have to upgrade my city water connector!

Guess what my plumber did a few weeks ago? He came and installed more of the crap piping in my house, bypassing some of the old stuff. The advantage is that I no longer need MegaFlusher adapters. Oh, and there's a bunch of piping that's no longer going to leak anymore (because it's not used). My plumber said he'll remove that old pipe someday.

Still can't use CrossFlusher X2's because they haven't come out with their model that connects to this new system (at least they won't need an adapter).

But hey, at least I can still take a dump.

pudman
08-27-2008, 01:44 PM
What kind of logic is that? The whole point of an API is to define *the way* functionality is provided.

I believe you made the point of: One can use DX10 features in XP! To which Korval replied: Um, you already could do that.

Until AMD puts out a 3.0 driver, the situation remains the same on XP. 3.0 hasn't changed that. Also, AMD *could* have released extensions to do the same thing. 2.1 definitely didn't prevent that.

Everyone who "oohs" and "aahs" over the "new" features in 3.0 comes across as ignorant of what 2.1 (with extensions) provided. Sure, it's now "core", but from a coder's perspective that actually makes very little difference.


But do you really think ATI's attitude towards OpenGL would be more constructive, if they had to implement a new API *from scratch*, namely Longs Peak?

I think the general perception on these forums is that AMD is probably having to write a GL driver from scratch anyway. At least, if it were relatively easy to incorporate the 3.0 changes into their driver, then why did they never release DX10 feature extensions?

Korval
08-27-2008, 01:44 PM
Of course, that's what I said. But what is so bad about it? Sounds like you blame them for being clever and doing their homework.

He's undercutting your argument that nVidia having a beta GL 3.0 is meaningful in some way. Because nVidia's 2.1 implementation was 95% of 3.0 already, the presence of beta GL 3.0 drivers is not meaningful in terms of how GL 3.0 will impact the world.

In short, if you were using an nVidia implementation, you already could do everything that GL 3.0 allows. So you didn't get any improvement except for the meaningless "promotion to core."


Therefore promoting extensions to core is one of the most important actions, since only then does this functionality become part of the specification.

The purpose of extensions is to extend the functionality of an implementation. If an extension is widely supported, then it is de-facto core, even if it isn't core. The S3TC extension isn't core, but everybody supports it and everyone pretty much assumes that it is available.

In short, if ATi had supported the extensions promoted to GL 3.0 core, then 3.0 would have been almost entirely without purpose. And until they actually support 3.0, the release of the spec is entirely without meaning.


But do you really think ATI's attitude towards OpenGL would be more constructive, if they had to implement a new API *from scratch*, namely Longs Peak?

Well, it wouldn't have been less constructive. And it would have been more constructive for anyone wanting to write a good GL implementation (hello, Intel). Remember that? One of the purposes behind LP was to make the job of IHVs easier. GL 3.0 does nothing towards that goal.

Leadwerks
08-27-2008, 01:46 PM
You guys that are complaining a lot (like myself), just take an afternoon and play with the DX10 SDK examples. They're pretty good, and it does feel good to be a part of the "mainstream" or whatever you want to call it.

CatDog
08-27-2008, 02:09 PM
One of the purposes behind LP was to make the job of IHVs easier.
Obviously, GL3 made the job for at least one IHV easy. Maybe it is in fact nVidia's own new graphics API. The one knackered is watching out for. :)

*edit*

Until AMD puts out a 3.0 driver, the situation remains the same on XP. 3.0 hasn't changed that. Also, AMD *could* have released extensions to do the same thing. 2.1 definitely didn't prevent that.
Until now they could. Now they *must*. That's exactly one of the things GL3 has changed.

Peace, CatDog

tsuraan
08-27-2008, 02:44 PM
You guys that are complaining a lot (like myself), just take an afternoon and play with the DX10 SDK examples. They're pretty good, and it does feel good to be a part of the "mainstream" or whatever you want to call it.

I'm curious, is there a way to view the DX SDK's articles, tutorials, and examples online, without installing the DX10 SDK and Visual Studio? I've been curious for a while on what makes D3D9 (and 10) so much cleaner and nicer to use than OpenGL, but even the DX documentation seems to be packaged in a Windows-only installer, from what I've been able to find.

Korval
08-27-2008, 03:02 PM
Obviously, GL3 made the job for at least one IHV easy.

Um, so what? And that's precisely one, not "at least one."


Until now they could. Now they *must*. That's exactly one of the things GL3 has changed.

No, they *must* not. The only "must" they have in their future is Starcraft II. Which would have been coming with or without GL 3.0. So either way, ATi would have to support something more.

And so when GL 3.1 comes around, what kind of support can we expect from ATi? Exactly and only what it takes to support existing games. Anything more than that, and you get nothing from them. So if Blizzard were to decide not to do anything for GL 3.1 support in SC2, ATi will also not support GL 3.1.

obirsoy
08-27-2008, 03:18 PM
You can start reading the tutorials here, http://msdn.microsoft.com/en-us/library/bb172485.aspx . But you have to download the SDK if you want to get the source code.
On my Win XP, I was able to extract "DXSDK_Jun08.exe" using 7z's file manager without installing the SDK. You might be able to do the same, whatever your operating system is.

knackered
08-27-2008, 04:07 PM
I've been curious for a while on what makes D3D9 (and 10) so much cleaner and nicer to use than OpenGL
It's not nicer, but it is cleaner and the drivers are solid for it. Personally I think D3D10 is pig-ugly to use, but there's exactly one way to go about each task - which makes it better than OpenGL for both the developer and the driver writers (obviously... there are virtually no bugs in any implementation, unlike this 10-tonne charm bracelet we're all dragging around with us).

tsuraan
08-27-2008, 04:20 PM
You can start reading the tutorials here, http://msdn.microsoft.com/en-us/library/bb172485.aspx . But you have to download the SDK if you want to get the source code.
On my Win XP, I was able to extract "DXSDK_Jun08.exe" using 7z's file manager without installing the SDK. You might be able to do the same, whatever your operating system is.


Wow, there are a crap-ton of files in the directx_aug2008_redist.exe file. I'll be up all night sorting through this stuff, but I haven't had any luck actually finding anything that's not a dll, inf, or xml file. Maybe the xml files are html... Anyhow, for any other *nix users out there, the p7zip package has a 7z program that can be used to extract MS CAB files (and the exe itself).

Gauss
08-27-2008, 04:23 PM
No, they *must* not. The only "must" they have in their future is Starcraft II. Which would have been coming with or without GL 3.0. So either way, ATi would have to support something more.

And so when GL 3.1 comes around, what kind of support can we expect from ATi? Exactly and only what it takes to support existing games. Anything more than that, and you get nothing from them. So if Blizzard were to decide not to do anything for GL 3.1 support in SC2, ATi will also not support GL 3.1.

This might be true for the gamer cards, but for their workstation brand I am pretty sure they always have to support the latest OpenGL core version, for marketing reasons alone. Of course this does not guarantee that the implementation will be error-free or performant, but at least it guarantees that an implementation will be part of the driver, so the feature can be checked off as "supported" by their sales people.

pudman
08-27-2008, 04:37 PM
The only "must" they have in their future is Starcraft II.

When did Blizzard say SC2 would be GL3.0? I would think that it would be too risky to develop for an API that won't even have any official drivers out from anyone for months, much less drivers from all major players.

Gauss
08-27-2008, 04:42 PM
I only saw the one that emphasized the *new* stuff! There's a document with the *deprecated* stuff in another color? Where?

I was referring to the document that was pointed out by Barthold.
You are right, it is emphasizing the new stuff and not the old stuff, so there isn't even a version around that I am aware of where the deprecated functionality is color-coded. Sorry if I was not precise here... :p

Timothy Farrar
08-27-2008, 04:46 PM
GL3 exposes DX10 features (in particular on XP!). That's good, isn't it?

No, it is not. OpenGL has had that for years through extensions. That an extension is promoted to core is functionally meaningless; the functionality was already provided.

I'd suggest re-reading the GL3 spec and prior extensions. GL3 does add some very important updates to DX10 (and some pre-DX10) functionality. For example, FBO support for binding framebuffers of different formats and bit depths. Or GL_ARB_instanced_arrays for frequency stream divider functionality. Or MapBufferRange() which was previously Mac only. Etc.
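To illustrate the mapping addition, a minimal sketch of MapBufferRange as it appears in the 3.0 spec (the buffer, offset, length and source pointer are placeholders; on Windows the entry point would still need to be fetched like any other post-1.1 function):

#include <string.h>

/* Overwrite a sub-range of a VBO, telling the GL it need not preserve
   the previous contents of that range (so no sync on the rest of the buffer). */
static void update_range(GLuint vbo, GLintptr offset, GLsizeiptr length, const void *src)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    void *dst = glMapBufferRange(GL_ARRAY_BUFFER, offset, length,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
    if (dst) {
        memcpy(dst, src, (size_t)length);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}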

Korval
08-27-2008, 04:50 PM
When did Blizzard say SC2 would be GL3.0?

When ATi announced GL 3.0 support.


I would think that it would be too risky to develop for an API that won't even have any official drivers out from anyone for months, much less drivers from all major players.

1: There are only two major players, and one of them (ATi) released a press release about a partnership with Blizzard to make sure Blizzard games run well on ATi hardware.

2: SC2 is not coming out this year. If it were, it'd be in beta by now.

3: "Supporting" GL 3.0 requires very little actual effort from ISVs. It's mostly just looking for extensions, and using ARB_create_context if it is available.

4: It's Blizzard. They can require that IHVs have GL 3.0 drivers by telling them that this is what they will be shipping SC2 with.


For example, FBO support for binding framebuffers of different formats and bit depths. Or GL_ARB_instanced_arrays for frequency stream divider functionality. Or MapBufferRange() which was previously Mac only. Etc.

1: That should have been in FBO since day 1 (or at least not making it an incompleteness, so implementations could do it themselves). It was literally the first thing people said after reading the EXT_FBO spec. So the ARB was just being stupid then.

2: Not in GL 3.0. It's an extension.

3: That kind of mapping was supposed to be in Longs Peak. There was an entire article devoted to it in one of the newsletters. So I'm certainly not going to praise the ARB for doing what they said they were going to, especially since they failed to do virtually everything else about LP that they said they were going to.