View Full Version : Rage - what a mess



H. Guijt
10-08-2011, 02:55 AM
So Rage, the new flagship title by ID, was released and found to be unbelievably buggy when played on PC. Both AMD and NVidia are scrambling to update their drivers so that the game becomes at least nominally playable.

While I have not yet seen any analysis of what went wrong, the fact that it can (and apparently must) be fixed on the driver level is surely bad news. Here is the first (and only?) major OpenGL title on the market, and it turns out to be hideously badly supported, with drivers that are apparently not tested at all. Clearly not good for AMD or NVidia (who promised OpenGL support but apparently cannot deliver), or for ID (who sees their flagship title and engine technology get some incredibly bad press) - but also bad news for OpenGL itself. Who in his right mind will do any kind of mass-market development in OpenGL now, knowing that for anything less high-profile than "the new ID title" you are NOT likely to see AMD and NVidia scramble to support you?

Which leads me to wonder: is there anybody else out there who is seriously pushing OpenGL (and not just sticking to the v1.0 profile!)? Are you running into the same problems with driver support apparently just not being there?

Also, what did ID use to develop this game? Do they have access to beta drivers that fix all their issues? How come such drivers are not available for general use now?

Questions, questions...

ZbuffeR
10-08-2011, 03:30 AM
My interpretation is that Rage was polished for the consoles.
For the PC the development may well be done on bleeding-edge-unreleased-made-for-id-computers drivers from both nvidia and ati, which should have been out before the game :)
The QA testing for PC is indeed more involved than for a console, but of course this is not an excuse to skip it.

Alfonse Reinheart
10-08-2011, 04:05 AM
While I have not yet seen any analysis of what went wrong

Wouldn't it be reasonable to wait to know what exactly went wrong before making any analysis or judgment?

As I understand, most of the performance issues around Rage center on "Megatexture", which is generally not a good idea in any environment where you don't have direct access to the hardware.

In any case, there have been plenty of major releases of games, OpenGL and non-OpenGL, that have prompted scrambles by driver writers to get new versions out. BF3 did it, as did CivV. Indeed, I recall that there was some driver issue around StarCraft II for NVIDIA last year or something. I don't see this as being anything more or less than those, and all of them were D3D games.

ID is also unfortunately in a market position where they're never wrong. If their game isn't working on a system, then it's clearly Microsoft/NVIDIA/AMD's fault, not ID's. So even if they're effectively abusing the API or doing non-standard (or just ill-advised) things, driver writers will conform their drivers to them. Not vice-versa, as with other game developers.

If anything, I'd say that the biggest problem is that it takes away from driver development time that would otherwise be spent on fixing real driver bugs. But there's not much that can be done about that.

glfreak
10-08-2011, 10:21 AM
Who in his right mind will do any kind of mass-market development in OpenGL now

Only expensive "legacy" software. Indeed, some giants are switching gears now and heading in the "right" direction.

So the ones who are to blame are Id, NVIDIA, AMD, and/or etc. etc.

OpenGL needs one thing and one thing only: something to blame. OpenGL simply needs a real SDK, not a collection of wrappers and image-loading libraries...

Create a downloadable SDK for each platform that communicates with a minimal unified driver architecture, and then we will have one thing to blame: the SDK. But guess what, if this ever happens then there will be no one to blame. :)

Implementing a bug-free OpenGL requires full dedication from a big developer, not a team working on it as a secondary project. ;)

Good luck!

kyle_
10-08-2011, 01:06 PM
Who in his right mind will do any kind of mass-market development in OpenGL now

Only expensive "legacy" software.

That, 'embedded' guys (sorta kinda) or those targeting OSX.

Jeff Russell
12-09-2011, 11:52 AM
OP speaks the truth. OpenGL drivers on Windows are just a mess. Certain vendors do better than others, but in my experience it's been a total crap shoot as soon as you try to use any even remotely new feature. The fact that Rage has had big problems did not surprise me in the slightest.

We've had an OpenGL app in deployment for just over a year now, an art tool. It is basically written against OpenGL 2 with a few common extensions. Nothing super fancy. We have constant problems, it's a moving target. We require all our users to be up to date with video drivers, which helps most of the time but every once in a while a vendor will push out an update that breaks our app. GLSL compilation has been one of the main problems. sRGB color space another. On and on it goes...

I like OpenGL, I want it to stay. I'm optimistic about its future. But it needs a reset. Come up with an OpenGL 5 spec or something that totally breaks compatibility with the old version. Just start over. Please. When I hear vendors say things like "oh, we shouldn't deprecate features, we can continue to support the old ones just fine..." I look at the state of OpenGL drivers today and ask "O RLY?".

Alfonse Reinheart
12-09-2011, 01:36 PM
A functioning conformance test suite would be far more useful to this end than a backwards-incompatible API. And supposedly the ARB is working on getting one for 3.3. I say "supposedly" because it's supposed to have already come out.

Jeff Russell
12-09-2011, 02:03 PM
Conformance tests would be great! I hadn't heard about that. It will be very hard to get good test coverage, but it's better than nothing. Hopefully one will come out for GL 4.x rather than 3.3, and it will stay up to date.

There are a lot of other reasons I'd like to see OpenGL start over, driver quality is only one of them. Let's do both! Driver quality is a serious problem, more than just a road bump. We're probably going to experiment with a direct3d 11 build for future revisions of our app, if for no other reason than maintainability.

Alfonse Reinheart
12-09-2011, 04:52 PM
OK, let's assume that the ARB came up with an "OpenGL 5.0" that was not backwards compatible with previous versions. And that it came out tomorrow.

Who would support it? No one. A specification is just a piece of paper; until it is implemented, the specification means nothing.

So you'll have to wait 6 months (minimum) for it to be implemented. And since that's just a first pass at implementing it, it will be incredibly slow. Also, it will be buggy.

Since it will more than likely include some form of high-level shading language, it will also include a compiler for said language. So that compiler will be buggy too. You could make the language GLSL, but then you haven't improved anything. Even if you only use a lower-level shading language, that's still a completely new compiler that they need to write code for. Code that will initially be buggy.

It will take a good year or two to work out the bugs. The big ones will be fixed right away, of course. But the little ones, the outside cases, like the ones you hit in OpenGL now, those are the ones that it will take time to ferret out. Sure, a simpler spec will be simpler to implement, but since the implementation must be written from "scratch" (not the low-level parts, but the interfaces to them. Where 99% of all OpenGL bugs are), this will introduce a lot of bugs.

So you're looking at two years minimum before this fixes anything. And even that assumes that the IHVs drop all pre-5.x GL development and devote all of their GL resources to 5.0. That would of course be incredibly stupid, since there are still many applications that use OpenGL out there.

So you're effectively telling IHVs to support both the old API and the new one. The same IHVs that you say are not supporting the old API very well. Do you honestly think that giving them more work will help improve their drivers?

Equally importantly, why would they do it? Zero code out there would support GL 5.0. So why would an IHV invest real programmer time to support it? There'd be no real point; nobody uses it. Nobody would use it until it's implemented. And since nobody implements it, nobody would use it.

See the problem? It would solve nothing.

ZbuffeR
12-10-2011, 01:37 AM
Back on RAGE itself: just started playing it on my two-year-old PC (GeForce GTX 275), absolutely no problem, 60fps framerate, way quicker than Fallout 3 for example.
The great part about PC is choice of components. The bad part about PC is the variety of hardware.

V-man
12-10-2011, 06:59 AM
I like OpenGL, I want it to stay. I'm optimistic about its future. But it needs a reset. Come up with an OpenGL 5 spec or something that totally breaks compatibility with the old version. Just start over. Please. When I hear vendors say things like "oh, we shouldn't deprecate features, we can continue to support the old ones just fine..." I look at the state of OpenGL drivers today and ask "O RLY?".

I'm assuming you are talking about nVidia.

That won't happen because nvidia doesn't want to break compatibility. It was a big stink back in 2008 when GL 3 was finally announced, it wasn't at all what we expected. It was just GL 2.2.

The guys at nvidia are very confident that they can provide high quality drivers with backwards compatibility. They also don't like GL 3 deprecating things that exist in their hw.

And apparently, there are some big software vendors who don't like deprecation either.

Jeff Russell
12-10-2011, 10:20 PM
You're quite right that it would take a while before something like "OpenGL 5" would be implemented from spec. And it would take a while longer yet before it was high quality software, relatively free of bugs. But at least we would get there. I would be happy to go through such a transition.

All the objections you've raised against an "OpenGL 5", direct3d has met and overcome.

First: GL 5 would be supported for the same reason that developers would use it: because it would be modern, lightweight, and better.

Second: It would not double the work of driver authors because old versions of OpenGL would no longer be under active development. New features for those versions would stop coming in and they could be frozen for the most part. D3D has done exactly this.

Third: As for shader languages like GLSL, they should be decoupled from the driver. ARB could create an intermediate byte code and supply a compiler front end for separate use by users. This allows for code obfuscation, offline compiling, *actually* unified syntax etc. as well. Again, like D3D.

Some developers have a great aversion to breaking backwards compatibility. I guess I don't. The world of software moves quickly and thrives on change. We'd be fine.

PS - As for Rage it does run pretty well now. I've been enjoying it :)

Alfonse Reinheart
12-11-2011, 01:33 AM
All the objections you've raised against an "OpenGL 5", direct3d has met and overcome.

Yes: because Microsoft implements half of it.

But also because D3D code gets tested 20x more often than OpenGL code. You can't find and repair bugs in a system that's never used; it has to be tested in order to be fixed.

Take that silly ARB_sampler_objects bug in ATI drivers. It took over nine months for them to even find out it existed. And it's taken them that much time to fix it properly. That's 18 months that a basic 3.3 feature has been non-functional.

Why? Because nobody used it! If there were 30 games released in the 5 months after 3.3 hit that used it, you can bet that they would have hit the bug. Those developers would have informed ATI of the bug and the exact circumstances of it. And the fact that these were actual professional game developers (instead of hobbyists on a forum) would mean that the problem would be urgent. Possibly to the level of issuing a driver out of cycle. But at the very least, it would be fixed in the next monthly release.
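For reference, the 3.3 feature in question separates sampling state from the texture object itself. A minimal sketch of its use (assuming a current 3.3 context; texture and shader setup elided, error checking omitted):

```c
/* Sketch: ARB_sampler_objects (core in OpenGL 3.3). A sampler object
 * carries filtering/wrap state independently of any texture. */
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

/* Binding it to texture unit 0 overrides the sampling state of
 * whatever texture is bound on that unit. */
glBindSampler(0, sampler);
```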

The problem with OpenGL is that its most recent iterations go unused. If it's not used, then it's not being tested. And if it's not tested, then it's not debugged. And if it's not debugged, then it's buggy.

Your GL 5 does nothing for this fundamental problem.


GL 5 would be supported for the same reason that developers would use it: because it would be modern, lightweight, and better.

Better than what? GL 4 would do everything that 5 does. GL 4 implementations exist and are relatively mature; GL 5 implementations don't exist and wouldn't be mature for years.


It would not double the work of driver authors because old versions of OpenGL would no longer be under active development. New features for those versions would stop coming in and they could be frozen for the most part.

Do you really believe that this is what most of the time IHVs spend on OpenGL goes to? Implementing new extensions/features?

No, most of their time is spent on bug fixing, supporting new hardware, and performance optimizations. As well as having to pretend that up is down whenever Rage or some newfangled game comes out that assumes the API does whatever the developer thinks it does.

NVIDIA spent a hell of a lot of time working on NV_path_rendering. And they'd have to re-implement it for GL 5.


As for shader languages like GLSL, they should be decoupled from the driver. ARB could create an intermediate byte code and supply a compiler front end for separate use by users. This allows for code obfuscation, offline compiling, *actually* unified syntax etc. as well.

And who's going to write that intermediate compiler? Unlike Microsoft, the ARB is a volunteer organization; they don't actually have resources. All they produce is paper. They have to contract out everything; even the conformance test they finally got around to making is being contracted out.

Are they going to contract out maintenance for it too? With whom? For how long?

Also, this doesn't guarantee squat as far as compiler functionality. Sure, you won't have basic front-end compiler bugs. But you still have all the bugs in the optimizers and other general stupidity. Looking at most of the GLSL bugs in the driver forum, maybe 20% of them are purely front-end compiler bugs.

The best you could say is that you wouldn't get NVIDIA's nonsense of using their Cg compiler for GLSL.

Oh, and you'd have to lose extensions too. At least in the high-level language. So the only way to use extensions is to use the "obfuscated" low level language.

And what does "*actually* unified syntax" even mean?


The world of software moves quickly and thrives on change. We'd be fine.

Who is "we"?

thokra
12-11-2011, 02:16 AM
The best you could say is that you wouldn't get NVIDIA's nonsense of using their Cg compiler for GLSL.

That I find quite understandable. If Cg is a functional superset of GLSL (and I have no idea if that's the case), then developing a new compiler for GLSL can't be in the best interest of NVIDIA. In theory, if you can do a 1:1 transform of GLSL to Cg code, all you need to do is alter the compiler front-end to include some transformation stage, then pass the transformed source to the lexer, parser and so on. If I were to support GLSL and had a working compiler already, I'd attempt that too.

On the issue of OpenGL driver quality: As long as there aren't real incentives for top-quality GL implementations, IHVs aren't going to be as fast when it comes to bug fixing and improvements. Furthermore, they're not going to do anything if there aren't any economic incentives. Take Intel for instance. Years ago, they didn't give a damn about OpenGL on Linux. Now they've pulled the first ever Mesa driver into the kernel that reports GLSL 1.30. I think you can call the big ones many things, but they most definitely aren't charity organizations who push OpenGL forward and implement the specs simply because they're so generous.

Every time I read discussions like this one, I can't help but feel that some people disconnect software, especially free (not necessarily open-source) software, from business. It seems like walking into a shop and taking stuff for free while expecting the shop owner to do his best to keep the shelves full.

Jeff Russell
12-11-2011, 11:35 AM
But also because D3D code gets tested 20x more often than OpenGL code. You can't find and repair bugs in a system that's never used; it has to be tested in order to be fixed.

This is a very good point. OpenGL sees a lot less use these days, and so gets inferior test coverage. This is probably a more substantial effect than the complexity of the API in terms of driver quality. I still think a GL rewrite would improve driver quality in the long run, but again that's not the primary motivation in my mind for creating such an API.

The real question that needs to be asked is *why* is OpenGL seeing less use than Direct3D?

For years and years, Direct3D sucked. OpenGL was much better. SGI did a good job of mapping it to the hardware with version 1.0 and it showed. Microsoft, meanwhile, kept reinventing the wheel. They would start over, making big changes that broke code that used old versions. By the time they got to Direct3D 9, they were ahead, both in terms of features and user base. All of this despite the notable handicap of not being cross platform - an advantage OpenGL has always retained.

Can you imagine what D3D11 would look like today if it contained every single interface that Direct3D has ever used? This is the reward of rewrites.

It's been mentioned that Microsoft implements the front end of D3D, and this is true. It's a *good* thing. It benefits driver authors in a big way to simplify implementation, and benefits users of the API by enforcing a unified interface that isn't just a collection of extensions that have come onboard at varying times with varying support.

You mention the ARB just kicks out specs and has no resources to implement. Well, *get* some resources. Start an open source project with volunteers if money is the issue. Do *something* because the current strategy is clearly not working.


Better than what? GL 4 would do everything that 5 does.

What? No. Maybe you misunderstand. Take the next batch of features that GL 4 doesn't support yet and roll them into "GL 5". The advantage of taking a big non-backwards-compat leap forward is that you can include features that are new today as basic functionality. That is, the API design can actually reflect its usage on hardware. You can support older systems through a query/feature level system if you like (again, D3D does this).

I realize I sound like a D3D fanboy here. I guess I sort of am, though I don't like to admit it. I grew up on OpenGL, it's the reason I taught myself C so many years ago and probably the reason I went into programming as a profession. I've only recently started using Direct3D 9 and 11, but the difference (particularly for d3d11) is night and day compared to OpenGL. I wondered for a long time why people used D3D and not GL, but not anymore. If anyone reading this has not tried out D3D11, take a look. Read the docs, and write a simple app in it. When you come back to OpenGL it will be with a different perspective.

GL has some lessons to learn from the competition. It will ignore them at its peril.

Jeff Russell
12-11-2011, 11:44 AM
Oh, I guess I should add that I consider OpenGL ES / WebGL to be a noble experiment. Even though it's really just more of a subset of features from OpenGL and not really reinventing the API, I feel that even this small departure in spirit represents some progress in thinking here.

Alfonse Reinheart
12-11-2011, 12:15 PM
For years and years, Direct3D sucked. OpenGL was much better. SGI did a good job of mapping it to the hardware with version 1.0 and it showed.

And yet, for all of those years that D3D sucked, D3D was also being used. D3D v3.0 was utter garbage, yet it was also used. D3D v5.0 was minimally decent, yet it was used. D3D 6.0, 7.0 were better but still kinda crappy. Yet they were still used.

Why? Because it was Microsoft. Because they had the resources behind it. Because however terrible the API was, it actually worked.

Game developers will complain about an API, they will hem and haw, they will hold forth at length. But at the end of the day, what they care about is getting it done. And if a crappy API gets the job done, then they will use a crappy API to do that job.

The secret to D3D's success is not that it was constantly reinventing itself. The secret to its success is that it was more stable and reliable than OpenGL. It always has been.

And that is due primarily to its driver model.


It's been mentioned that Microsoft implements the front end of D3D, and this is true. It's a *good* thing.

Yes it is. This model is how D3D retains backwards compatibility: because Microsoft implements a conversion layer for older D3D versions to talk to new D3D version drivers. Without this model, you could not effectively change the API every few years and retain reasonable drivers.

Of course, it's also not a model you can use for OpenGL, because OpenGL is cross-platform. You can't do this kind of abstraction cross-platform.

And also because someone would have to write and maintain it.


You mention the ARB just kicks out specs and has no resources to implement. Well, *get* some resources. Start an open source project with volunteers if money is the issue. Do *something* because the current strategy is clearly not working.

Resources do not appear ex nihilo; they require lots of money. And Khronos is not exactly rolling around in cash.

And quite frankly, I wouldn't trust an open source project with something like this for multiple platforms. They've had hardware specifications for various hardware for a couple of years now, and their GL drivers are still inferior to the IHVs'. Even ATI's. So their track record on this point isn't exactly good.


It will ignore them at its peril.

And that peril is... what exactly? That OpenGL will be marginally used, particularly in high-end games? We're already there. That OpenGL is principally used for its only real strength: cross-platform development? Again, that's a bridge we've already crossed.

There's no further peril out there. OpenGL will survive just fine on being the only cross-platform alternative.

Also, need I remind you that the ARB has tried twice to rewrite the API, and both times they abandoned it in favor of keeping what they had?

And the second time, they squandered a golden opportunity to make up some ground over D3D, because it was during the D3D10 transition. D3D10 is locked to Vista, but because Vista underperformed, game developers were stuck with D3D9, even though a lot of D3D10 hardware was sold. If the ARB hadn't been trying to reinvent their API for two years, if GL 3.3 had been out 2-3 years earlier, it would have gone over much bigger with game developers.

But by 2010, Vista adoption was up, Win7 was out and selling well, and cross-platform game developers were stuck with D3D9-level tech for consoles.


Take the next batch of features that GL 4 doesn't support yet and roll them into "GL 5".

There are no more "features" for 4.x level hardware. Or at least, not any significant ones. Just look at 4.2; most of the stuff there is API cleanup: texture_storage, shader_language_420pack, etc. Indeed, the biggest "features" of 4.1 were separate_shader_objects and get_program_binary, which could have been implemented back in 2.0 (and NVIDIA even implements them in 2.1-level hardware).

Notice how Microsoft only does API rewrites when new hardware comes out. There's a reason for that.

Jeff Russell
12-11-2011, 02:43 PM
Heh. Well, I guess by "peril" I meant "things will stay as they are". If you feel that the current status of OpenGL adoption is acceptable, then more power to you.

Also, amen to missing out on an opportunity during the d3d9 -> d3d10 transition.


There are no more "features" for 4.x level hardware.
Well, true. But again, we're talking about a future API, not a present one. OpenGL could take the opportunity to take the lead for once and try to define what "GL 5" hardware would look like. You can bet Microsoft will (again) if the ARB doesn't. Such an initiative would mesh well with an API redesign.

I have two questions now.

First: when the ARB tried (twice) to redesign the API, and failed, why did they fail? I've been hazy on the details there. I would be very depressed to learn that it was simply because people on the board didn't want to break compatibility.

Second: To Alfonse, and anyone else reading along - if a major API redesign is not the way to go, then what is? Or are we already on the track for success here? I think I've already made my own thoughts clear on this matter.

thokra
12-11-2011, 03:05 PM
Also, need I remind you that the ARB has tried twice to rewrite the API, and both times they abandoned it in favor of keeping what they had?

Just out of interest, do you remember on what grounds? Did they give any official statements?

I never understood why they didn't make a clean cut with 3.0. Honestly, who ports their entire vintage 1.5/2.0/2.1 codebase to 3.0+?

Alfonse Reinheart
12-11-2011, 04:09 PM
If you feel that the current status of OpenGL adoption is acceptable, then more power to you.

Define "acceptable"? I accept the fact that the current status exists. I accept the fact that the current status is unlikely to change appreciably in the near future.

Anything else is wishful thinking.

What I feel are two things:

1: Redoing the API alone will not help. And most of the things you would have to add that would help would help just as much without an API redo.

2: The ARB does not have the resources to do most of the things that would help.

Therefore, I "accept" that OpenGL's current status is what we can expect from it for the foreseeable future. And that's only if ARM-based Windows doesn't become popular; if it does, you can expect OpenGL to be banished entirely from the Windows ecosystem. ARM-based Windows programs will not have access to OpenGL at all, because you can't access OpenGL via WinRT.


First: when the ARB tried (twice) to redesign the API, and failed, why did they fail?

The ARB's internal discussions aren't exactly available for comment. These days, the only real interaction we get is at GDC and Siggraph, where they present some things and maybe drop some specifications on us. But here's an outsider's perspective.

The 3D Labs "OpenGL 2.0" effort didn't really seem (again, from an outsider's perspective) to get much attention. They made a presentation at GDC, but it was clear that some of their stuff was kinda pie-in-the-sky. Their equivalent to FBOs and texture specification was... pretty much unimplementable. Indeed, most of the problems with GLSL can be traced directly back to 3D Labs being way too forward thinking.

The Longs Peak effort, which was ostensibly led by NVIDIA and ATI, seemed more likely to succeed. There are a few older threads from 3.0's release, where members of the ARB tried to console the understandably peeved users about the failure. Here's a post from Barthold Lichtenbelt (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Main=47703&Number=243307#Post243307), chair of the ARB at the time (and still is, I think), that gives his explanation of things.

Reading the entire thread can be interesting if you like to see naked anger, bitterness, and desperate attempts at damage control from certain ARB members ;) If you want to know how much my perspective has changed since those days, I used the username "Korval" back then.


Or are we already on the track for success here?

We are on track for... treading water. That's where the ARB has been since GL 3.0: treading water. Not sinking. Not swimming. Simply staying afloat.

A conformance test is probably the best improvement we're going to get.

Jeff Russell
12-11-2011, 05:49 PM
I accept the fact that the current status is unlikely to change appreciably in the near future. Anything else is wishful thinking.


We are on track for... treading water. That's where the ARB has been since GL 3.0: treading water. Not sinking. Not swimming. Simply staying afloat.

A conformance test is probably the best improvement we're going to get.

I guess that about sums up the nature of our debate. I freely admit to being a wishful thinker in this regard.

And aaah yes the epic Barthold Lichtenbelt post! I do now remember reading that when GL 3.0 came out, and much of the anger and frustration that followed. The thread is a (long) letter of frustration written by users not unlike me who worried that the API's leadership was unable to get things done and that the API would suffer as a result.

Your water treading analogy is apt. About the only thing that's changed since 2008 is that we do now see more frequent additions to the core spec. This is progress, of a sort.

PS - OpenCL comes up briefly in that thread as well. This is one area where I think the ARB made an excellent decision, separating that API from OpenGL but allowing them to intercommunicate. Microsoft did the opposite with their compute shaders, and I think may regret that decision down the line. But of course, they actually make changes to their API from time to time so they will have the option of correcting it if necessary later on :P

kRogue
12-13-2011, 03:15 AM
Alfonse really hit the nail squarely on the head when he noted that GL implementations are buggy at times because there is not a lot of code out there that tests the new-ish features of GL4 and at times GL3 too. I remember a time when gl_ClipDistance did not work on some hardware and driver combination and firing off an e-mail with example code got it fixed by the next release.

The "cleanup" of GL toward a more object-based model is, at least in my opinion, already slowly being added to the GL spec. The biggest bit will likely come when the non-fixed-function-pipeline parts of EXT_direct_state_access get folded into the GL specification. We already have setting values of uniforms of GLSL programs without binding the programs (though ActiveShaderProgram for program pipelines looks like a touch of a hack for a spec). Direct edit for sampler objects is in the spec too.

The main stink for a lot of folks with the GL API is the bind-to-edit model for textures, framebuffers, buffer objects, and a few other object types. I bet that is going to be sorted out in the specification at the next release, or at the very least a lot of it will be.
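The bind-to-edit complaint can be made concrete. Here is a sketch contrasting the classic pattern with the EXT_direct_state_access style (it assumes a current GL context and a valid texture name `tex`; not a complete program):

```c
/* Classic bind-to-edit: to change a texture's parameter you must
 * bind it first, clobbering whatever was bound to that target. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* EXT_direct_state_access: the object is named directly, so no
 * binding point (hidden global state) is disturbed. */
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
```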

I freely admit though that I feel like GL is now in the mode of just trying to keep up with D3D... we got RWTextures in 4.2, but the 4.2 spec does not yet have read-write access to buffer object data (there is an NVIDIA extension, though, that does this and more). I would like to see in GL 4.3:

(shameless copy-paste from http://msdn.microsoft.com/en-us/library/windows/desktop/ff476342(v=vs.85).aspx)
- Coverage as PS input
- Programmable interpolation of inputs - the pixel shader can evaluate attributes within the pixel, anywhere on the multisample grid

Other things, like reading the stencil buffer value, are still not in the spec, or even available as an extension... which is odd considering that D3D10, I think, had that. Maybe the hassle is how to fit the idea of a depth-stencil texture to a sample; the current bind-to-use model already has so many enums that adding a whole family of enums just to read one component seems kind of ugly. Maybe add that magic to sampler objects... but that is not nice either, since it would also kind of act like swizzle, which is part of the texture object :p The other bit, making command objects and setting data from another thread, would be really nice too... we can make multiple shared GL contexts, but really we want a special context type that cannot render, only set "object" state... We do have pretty good debugging in GL now with GL_ARB_debug_output....

Oh well. Things are getting better, methinks. It might be that things are really getting better mainly for the goal of having GL do well in embedded, as that is where the money is for GL now... Windows Phone is not exactly a popular platform, really.

But just so you all know: as a general rule of thumb, GLES2 implementations are far, far buggier than GL desktop implementations. Moreover, the desktop GL API (be it core or compatibility profile) is far more pleasant to use and deal with than GLES2.

V-man
12-13-2011, 04:17 AM
But just so you all know: as a general rule of thumb, GLES2 implementations are far, far buggier than GL desktop implementations. Moreover, the desktop GL API (be it core or compatibility profile) is far more pleasant to use and deal with than GLES2.

Really? Then perhaps a fresh new API on the desktop is not a good idea.

Besides, we don't need to worry about a new API since there are no plans for it.

kRogue
12-13-2011, 04:52 AM
GLES2 is not fresh at all... it is essentially OpenGL 2 (with all the craggy bind-to-edit stuff) intersected with the core profile of OpenGL 3, minus a lot of stuff; i.e. using GLES2 is like using the core profile restricted to OpenGL 2 functionality, minus a lot of stuff. The stuff not present in GLES2 includes hardware clipping, almost all read-back support (be it images or buffers), mapping of buffer objects, and most buffer object usage modes. Lastly, GLES2's texture image specification API is brain dead. A fair amount of what one takes for granted on the desktop is either gone or only available as an extension. Take a gander at the GLES registry (http://www.khronos.org/registry/gles/) to see the list of extensions for GLES2 (and GLES1) covering stuff that should likely have been in the spec.
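The buffer mapping gap is a good example: streaming vertex data, which on desktop GL 3.x can use glMapBufferRange, has to fall back to glBufferSubData on a plain GLES2 implementation (a fragment only; it assumes a bound GL_ARRAY_BUFFER of at least `offset + size` bytes, a client pointer `data`, and a hypothetical DESKTOP_GL build flag):

```c
#ifdef DESKTOP_GL   /* hypothetical build flag */
    /* GL 3.0+: map just the range being updated. */
    void *p = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                               GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
    memcpy(p, data, size);
    glUnmapBuffer(GL_ARRAY_BUFFER);
#else
    /* Plain GLES2: no buffer mapping in core (GL_OES_mapbuffer adds a
       write-only glMapBufferOES on some parts), so copy instead. */
    glBufferSubData(GL_ARRAY_BUFFER, offset, size, data);
#endif
```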

Alfonse Reinheart
12-13-2011, 11:09 AM
Coverage as PS Input

We already have that. Technically, we have had it for a while, but the GLSL spec was broken: the GL 4.0 specification mentioned the behavior of gl_SampleMaskIn, but the GLSL specification never mentioned it until GLSL 4.2.
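A minimal GLSL 4.20 fragment shader using it might look like this (a sketch; the divisor assumes a hypothetical 8x multisample buffer, and the single mask word assumes at most 32 samples):

```glsl
#version 420

// gl_SampleMaskIn[] carries the coverage mask of the current fragment,
// the analogue of D3D11's SV_Coverage pixel shader input.
out vec4 color;

void main()
{
    // Count covered samples; one mask word suffices for <= 32 samples.
    int covered = bitCount(gl_SampleMaskIn[0]);
    // Visualize coverage; divisor assumes a hypothetical 8x MSAA buffer.
    color = vec4(float(covered) / 8.0);
}
```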

V-man
12-13-2011, 12:10 PM
GLES2 is not fresh at all... it is essentially OpenGL2 (with all the craggy bind to edit stuff) intersected with core profile of OpenGL3 minus a lot of stuff.. i.e. using GLES2 is like the core profile restricted to OpenGL2 functionality minus a lot of stuff. That stuff that is not present in GLES2 includes: hardware clipping, almost all read back support (be it images or buffers), no mapping of buffer objects, much more limited modes of buffer objects. Lastly, GLES2's texture image specification API is brain dead. A fair amount of stuff one takes for granted on desktop is either gone or only available as an extension. Take a gander at the gles registry: http://www.khronos.org/registry/gles/ to see the list of extensions for GLES2 (and GLES1) of stuff that should have likely been in the spec.

It seems fresh in the sense that the drivers are much simpler, and if vendors still produce very buggy drivers on that platform, then there is no hope for a fresh API on the desktop.

Isn't it better that all those extensions are not in GL ES? The drivers would be simpler.

Alfonse Reinheart
12-13-2011, 01:02 PM
I'd say the ES driver issues probably have more to do with the sheer proliferation of different hardware architectures than with the API's complexity. Even if you just look at ES 2.0, there is a lot of different hardware out there. Even though most iOS and Android devices are powered by PowerVR GPUs, they don't use the same chips. They sit in different SoCs, so Apple and the various Android device makers have to write different drivers. The entire driver won't be rewritten, of course, but some of the higher-level components will be.

And that doesn't count non-PowerVR architectures, like Tegra and so forth. At least on the PC, you're only dealing with really two drivers: ATI and NVIDIA (assuming you don't care about Intel).

Also, while it is easy to just tell someone "update your drivers" on the PC, that's generally impossible on mobile devices. Drivers there aren't updated as regularly as on PCs, so if a device has a driver bug, it will likely keep that bug for a good long time, maybe forever; in that respect mobile devices are more like laptops than desktop PCs.

What's silly is this: OpenGL ES 2.0 already has a conformance test. Yet there are still plenty of driver bugs.

kRogue
12-14-2011, 09:26 AM
I'd say the ES driver issues probably have more to do with the sheer proliferation of different hardware architectures than with the API's complexity. Even if you just look at ES 2.0, there is a lot of different hardware out there. Even though most iOS and Android devices are powered by PowerVR GPUs, they don't use the same chips. They sit in different SoCs, so Apple and the various Android device makers have to write different drivers. The entire driver won't be rewritten, of course, but some of the higher-level components will be.


I cannot say what Apple does, but all the other SoC makers I have dealt with that use someone else's IP for the GPU more or less take the drivers almost directly from the GPU designers. Some add functionality to the drivers, but the usual case is that very, very little is added, if anything. Typically, what an SoC maker needs to do is implement EGL (or something like EGL), and to that effect most of the GPU makers provide entry points so one can do what is needed. The list of GPUs in the Android world is much bigger than just SGX: both Qualcomm and Broadcom have their own GPU offerings, and naturally there are also ARM's Mali, Vivante, and more!
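For readers who have only done desktop GL, the EGL glue being described amounts to roughly this much (a sketch of context creation only; `create_es2_context` is a hypothetical helper name, all error checking is omitted, and obtaining a window surface is platform-specific):

```c
#include <EGL/egl.h>

/* Create a GLES2 context on the default display; error checks omitted. */
static EGLContext create_es2_context(EGLDisplay *out_dpy)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    /* Ask for a config usable with GLES2 rendering. */
    static const EGLint cfg_attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint num_cfg;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &num_cfg);

    static const EGLint ctx_attribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2,   /* request a GLES2 context */
        EGL_NONE
    };
    *out_dpy = dpy;
    return eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);
}
```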

The GLES2 driver breakages I have seen in released hardware:

buggy, unreliable GLSL compilers
buggy FBO behavior
lots of bugs in support for floating-point textures (though half-float is more likely to work)
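Given bugs like these, defensive probing is the only real option; e.g. before relying on half-float render targets on a GLES2 part, it is worth checking framebuffer completeness at startup (a sketch; it assumes OES_texture_half_float is advertised, and whether the result is renderable at all varies by part):

```c
/* Probe whether a half-float texture is renderable on this part. */
GLuint fbo, tex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* No mipmaps: some drivers report incomplete FBOs otherwise. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0,
             GL_RGBA, GL_HALF_FLOAT_OES, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* Fall back, e.g. to plain GL_UNSIGNED_BYTE textures. */
}
```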

I shudder to list what I have encountered in alpha and beta hardware/driver combos. I have not yet dealt with the Freescale i.MX 6 series, but the 5 series and before were nightmarishly bad.