For years and years, Direct3D sucked. OpenGL was much better. SGI did a good job of mapping it to the hardware with version 1.0 and it showed.
And yet, for all of those years that D3D sucked, D3D was also being used. D3D v3.0 was utter garbage, yet it was used. D3D v5.0 was minimally decent, yet it was used. D3D v6.0 and v7.0 were better but still kinda crappy. Yet they were still used.
Why? Because it was Microsoft. Because they had the resources behind it. Because however terrible the API was, it actually worked.
Game developers will complain about an API, they will hem and haw, they will hold forth at length. But at the end of the day, what they care about is getting it done. And if a crappy API gets the job done, then they will use a crappy API to do that job.
The secret to D3D’s success is not that it was constantly reinventing itself. The secret to its success is that it was more stable and reliable than OpenGL. It always has been.
And that is due primarily to its driver model.
It’s been mentioned that Microsoft implements the front end of D3D, and this is true. It’s also a good thing.
This model is how D3D retains backwards compatibility: Microsoft implements a conversion layer that lets older D3D versions talk to drivers written for the newest one. Without this model, you could not effectively change the API every few years and still have reasonable drivers.
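To make the layering concrete, here is a minimal sketch of the pattern. Every name here is hypothetical, and the real D3D runtime/driver interface is vastly more involved; the point is only that the IHV implements one current driver interface, while the platform-owned runtime translates legacy API calls onto it:

```cpp
#include <memory>

// Hypothetical "new" driver-facing interface: IHVs implement only this.
struct IDriverCurrent {
    virtual ~IDriverCurrent() = default;
    virtual void DrawIndexed(unsigned indexCount) = 0;
};

// Hypothetical "old" application-facing API that shipped years earlier.
struct IOldDevice {
    virtual ~IOldDevice() = default;
    virtual void DrawIndexedPrimitive(unsigned indexCount) = 0;
};

// The runtime-owned conversion layer: legacy calls are fixed up and
// forwarded, so the IHV never has to keep a legacy driver alive.
class OldDeviceShim : public IOldDevice {
public:
    explicit OldDeviceShim(std::shared_ptr<IDriverCurrent> driver)
        : driver_(std::move(driver)) {}

    void DrawIndexedPrimitive(unsigned indexCount) override {
        // Translate any legacy state/semantics here, then forward.
        driver_->DrawIndexed(indexCount);
    }

private:
    std::shared_ptr<IDriverCurrent> driver_;
};
```

Because the conversion layer lives in the runtime rather than in each driver, hardware vendors only ever have to target the newest interface.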
Of course, it’s also not a model you can use for OpenGL, because OpenGL is cross-platform; you can’t maintain that kind of abstraction layer across every platform.
And also because someone would have to write and maintain it.
You mention that the ARB just kicks out specs and has no resources to implement them. Well, get some resources. Start an open source project with volunteers if money is the issue. Do something, because the current strategy is clearly not working.
Resources do not appear ex nihilo; they require lots of money. And Khronos is not exactly rolling in cash.
And quite frankly, I wouldn’t trust an open source project with something like this across multiple platforms. Open source driver developers have had hardware specifications for various GPUs for a couple of years now, and their GL drivers are still inferior to the IHVs’ own. Even ATI’s. So their track record on this point isn’t exactly good.
It’s been said that OpenGL will ignore game developers at its peril.
And that peril is… what, exactly? That OpenGL will see only marginal use, particularly in high-end games? We’re already there. That OpenGL will principally be used for its only real strength, cross-platform development? Again, that’s a bridge we’ve already crossed.
There’s no further peril out there. OpenGL will survive just fine as the only cross-platform alternative.
Also, need I remind you that the ARB has tried twice to rewrite the API, and both times they abandoned it in favor of keeping what they had?
And the second time, they squandered a golden opportunity to make up ground on D3D, because it came during the D3D10 transition. D3D10 was locked to Vista, but Vista underperformed, so game developers were stuck with D3D9 even though a lot of D3D10-capable hardware was sold. OpenGL could have offered access to that D3D10-class hardware while still running on Windows XP. If the ARB hadn’t spent two years trying to reinvent its API, if GL 3.3 had been out 2-3 years earlier, it would have gone over much bigger with game developers.
But by 2010, the window had closed: Vista adoption was up, Win7 was out and selling well, and cross-platform game developers were pinned to D3D9-level tech by the consoles anyway.
You suggest taking the next batch of features that GL 4 doesn’t support yet and rolling them into “GL 5”.
There are no more “features” left for 4.x-level hardware. Or at least, none of any significance. Just look at 4.2: most of the stuff there is API cleanup (texture_storage, shader_language_420pack, etc.). Indeed, the biggest “features” of 4.1 were separate_shader_objects and get_program_binary, which could have been implemented back in 2.0 (and NVIDIA even exposes them on 2.1-level hardware).
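To illustrate what “API cleanup” means here, consider ARB_texture_storage (core in 4.2). It replaces the old mutable allocation pattern, where every mip level is specified separately and the format can be redefined at any time, with a single immutable allocation. Nothing about the hardware changed; only the API did. A minimal sketch, assuming a current GL context and a loaded function-pointer library (GLAD here, but any loader works):

```cpp
#include <glad/glad.h>  // assumed loader; any GL function loader works

// Old-style mutable allocation: each mip level is specified separately,
// and the texture's format can be redefined at any time, which drivers
// must defensively handle.
GLuint makeTextureOld(int w, int h) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    for (int level = 0; ; ++level) {
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        if (w == 1 && h == 1) break;
        if (w > 1) w /= 2;
        if (h > 1) h /= 2;
    }
    return tex;
}

// 4.2-style immutable storage: one call allocates every mip level, and
// the format can never change afterwards. Same hardware, cleaner API.
GLuint makeTextureNew(int w, int h, int mipLevels) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, mipLevels, GL_RGBA8, w, h);
    return tex;
}
```

The immutable version is less error-prone for the application and easier for the driver to validate, which is exactly the kind of cleanup the 4.x releases have consisted of.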
Notice how Microsoft only does API rewrites when new hardware comes out. There’s a reason for that.