GL 3 & D3D: The War Is Over

dor00 posted this link on a different thread, but I thought it was worth its own futures discussion here.

http://www.tomshardware.com/reviews/opengl-directx,2019.html#

I think this is an interesting and reasonably well-written retrospective, but my conclusions are vastly different.

I believe OpenGL will exist 5 years from now, and it will still be the foundation API for portable graphics for many types of apps — particularly apps with a life expectancy of more than 5 years.

Game developers are a fickle lot, and they will go where market advantage takes them. Direct3D has been a resounding success for them, but if something better comes along, they will vanish like a fart in the wind (to quote Warden Samuel Norton).

If I had to guess which 3D graphics API will be more relevant in 5 years, I would not put my money on D3D10 or D3D11. In 5 years, OpenGL may be the only API that still looks more like a 3D graphics API than a parallel programming language with some interesting (if bizarre) standard library functions and ISA intrinsics.

Well, both APIs are failing at the same time. They are only protected from their own incompetence by the incompetence of the other. Here’s another article about DX10’s failure:
http://www.maximumpc.com/article/feature…_save_pc_gaming

I don’t know exactly what it means, but when developers reject the latest version of an API, that is not good.

It is clear that AutoCAD will never let OpenGL move on, and for some reason MS has suicided DirectX. This leaves the market wide open for something new to come along. I would like nothing more than for that something to be Intel’s Larrabee chip, but I do not believe they will succeed.

I think things will probably struggle on as they are now, with no clear-cut good solution. The way technology is getting more and more fragmented and proprietary ensures no one will cooperate to create a real standard.

Firstly, I think JC is slowly losing his feel for the hardware. “The things that you get in there are the geometry shaders and a few other things”; yes, because that’s all DX10 is, of course, JC. He also never touched on DX11, which is a superset of DX10.
(I also no longer put any real stock in what Tim Sweeney says either, btw.)

Also, I feel your claim that ‘developers’ are rejecting it is less than truthful. While the list of games isn’t huge, I would say this is more down to the Vista requirement than any fundamental problem with the API. It’s not ‘rejected’ so much as ‘not cost-effective’.
There is also the large legacy issue to consider.

However, the rejection (or otherwise) of D3D10 isn’t really an issue, as MS haven’t “suicided DX”: DX9 is still going strong and games are still being made for it, and having seen their mistake with DX10 and its problems, MS are moving to do what MS do best: fix it. (For example, the people who made the choice to link D3D10 to Vista only no longer work there…)
(Also, Vista adoption continues to climb, up another ~3% according to W3Counter, far outpacing OSX and Linux and taking share from XP.)

As for what 3D API will be relevant in 5 years’ time, well, I have to agree with Cass that DX10 and DX11 won’t be; it’ll be whatever MS have out then.

OpenGL might well still look like a 3D API; the problem is, if the current trend is to be believed, it will look like a 3D API that was designed 20 years ago and in no way reflects the hardware under it. Meanwhile, those who want a cutting-edge 3D API will be using D3D1x, and those who want to treat the ‘GPU’ as a large parallel array will use either (a) whatever MS comes up with, (b) OpenCL, or (c) whatever proprietary system might exist then.

As an aside: AutoCAD has a D3D renderer now, so can we please stop trying to blame OpenGL’s stagnation on the CAD programs. Thanks.

The idea that there is a ‘struggle’ is also a strange one:

  • For max Windows coverage you want DX9 or middleware
  • For consoles it’s whatever the native API is or middleware
  • For Linux… well, most game companies don’t care but it’s OpenGL
  • For OSX… slightly more companies care, but it’s still OpenGL.

It’s not really that hard; the solutions are pretty much self-evident.

As for standards, well, we have them, if anything we have more than we’ve ever had.

  • D3D is a standard for 3D Gfx on Windows.
  • OpenGL is also a standard for 3D Gfx.
  • OpenGL|ES is a mobile standard for 3D Gfx.

Honestly, I don’t see the problem.

GL can’t ‘lose’ (whatever that means) unless a cross-platform alternative comes along. It really is that simple.

DX can’t lose either, because it currently has its advantages over GL, and MS is desperately clinging to DX as its main products are failing in the marketplace.

IMO this ‘API war’ thing is just as ridiculous as the ‘PC gaming is dying’ fad. Nothing has changed. It’s only a war of words. And IP, heh.

Given that there is no organization with a strong interest behind OpenGL, I wouldn’t bet on it gaining any relevance.

Also, the lack of an ecosystem of tools, as well as of consistent implementations across vendors, is not very trust-inspiring.

There are quite a lot of CAD, DCC, and plugin products that use D3D, so stop blaming CAD companies.

DX10 isn’t being rejected; it just doesn’t deliver better graphics and more polygons. For some reason people think that DX makes GPUs faster and graphics prettier. Moron users.

Longs Peak died, but some guys created a backdoor:
http://www.opengl.org/registry/specs/EXT/direct_state_access.txt
I still think the situation sucks, though.
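
For anyone who hasn’t read the spec: the backdoor amounts to dropping bind-to-edit. A minimal sketch of the difference (assuming a context that exposes EXT_direct_state_access and that the entry points are loaded, e.g. via GLEW):

    #include <GL/glew.h>

    void set_min_filter(GLuint tex)
    {
        // Classic bind-to-edit: the texture must be bound to the
        // GL_TEXTURE_2D selector just to change one parameter,
        // clobbering whatever was bound there before.
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        // With EXT_direct_state_access the object is named directly:
        // no bind, no selector state disturbed -- a piece of the Longs
        // Peak object model smuggled in as an extension.
        glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                               GL_LINEAR);
    }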

I’d place a bet that OpenGL will be around in 5 years simply because of the mobile space.

Well, yes, OpenGL|ES will be, but OpenGL|ES, as we all know, != OpenGL.

I will take Tim Sweeney’s word, as he actually has some input on the APIs, just as JC does. IMO some form of software rendering would be about the best thing for PC gaming as a whole: it would get rid of the need for DX9/10/11/1000 to have feature “X” before I can see the pretty gfx, because right now whether I see them depends on whether the hardware has support for it or not.

E.g. with geometry shaders you are still limited to what NVIDIA/ATI have built into the GPU. Or to FBOs on older hardware. The list goes on.

Now if they can make hardware that just runs the code and lets each coder come up with their own methods for rendering, we might actually see graphics that don’t all look so similar, and allow some creative ideas to flow again.

I am biased here, as I see the need to give gamers access to basic hardware without the hassle of knowing what API version the GPU is compliant with and whether it supports the nice eye candy they want to see. I would rather have games limited in FPS, because they need more cycles on their CPU/GPU hybrid card (whatever you want to call it), than feature-limited. Playable to one gamer may not be the same as playable to another. And with 5-year-old hardware I could still see the latest graphics in a game, just running slower; try that in today’s GPU environment.

So you’ve got a whole bunch of people at Microsoft who make APIs. They have the idea that “we made this API that seems to be doing everything everybody really needs, but we need to keep doing a new API every year or two because that’s what we do, otherwise they might dissolve our department”, or something.

How unlike ARB.

Now that I’m done laughing: you do realise that the look of things is primarily down to art style and textures, yes? I mean, do you really think HL2 looks the same as World in Conflict, which looks the same as Company of Heroes, which looks the same as Dawn of War, which looks the same as TF2?

Because ALL those games run on DX9 hardware and they all look different because of art.

I’m starting to get very bored of people saying ‘omg! if we could program everything, think of what we could do!’. While the increased flexibility might make a few hacky things such as alpha blending go away, the look of things will STILL revolve around art direction.

So, if you want to place the blame somewhere, place it at the feet of (a) the buying public, who want ‘realistic’ looking games, and (b) art directors/leads who don’t bother investing in their own ‘style’.

Also, the whole ‘programmers can make their own engines!’ thing is a pipe dream, one which might suit the bedroom hacker in his/her room, but once you hit the real world you suddenly find yourself with budgets and deadlines, and the luxury of fine-tuning your code for graphics card number 47 goes away. Why do you think middleware is so important and so many people licence the Unreal Engine? They don’t have the time or the money, and no amount of programmable backends is going to change that; hell, it’s annoying enough having to code against the 360 and the PS3, never mind the asplosion of combinations the PC world throws at you.

Here’s the thing, though: people who license the Unreal engine get its source code, and they often change it to suit their needs.

That’s not possible with OpenGL or DirectX.

I think you’ve hit the nail on the head, Rick. The next generation of rendering technology is much more likely to be licensed via engines and evangelized via open source implementations than delivered as a standard.

Bobvodka’s pipe dream is what gave us the Doom, Quake, and Unreal engines that created the consumer 3D hardware acceleration market from nothing. There’s no basis for doubting that a few clever software developers can fundamentally change our perception of real-time graphics.

Ultimately, this isn’t about what any particular person wants to happen, though. There are fundamental technology shifts happening that are making standardized 3D rendering APIs less relevant for cutting-edge rendering, because the underlying devices are becoming too general for such limited abstractions.

To return to the topic of this thread, though: I think even if OpenGL is implemented as a pure software layer on a very programmable device, there is a large segment of the market that will continue to be well served by it, even if it continues to change at a glacial pace. The D3D core market is much less likely to hang on to that abstraction as long. It will lose huge mindshare as soon as the major engines move away, as Tim Sweeney is already adamant they will.

So far the only announced chip that lets you write your own custom 3D pipeline is Larrabee, and so far there is no public information on how well it will perform for this use case. Let’s assume it can compete with “traditional” GPU designs. Will the other major players in the GPU market provide the same level of programmability in that timeframe? The more interesting question is whether they will all use the same ISA. I don’t think so, and that puts us back at a point where we need a hardware abstraction again, or we will end up in a new programmer’s nightmare.

I have no doubt that if all major GPU designs go the way Intel has chosen, Direct3D will follow and provide some kind of abstraction for them. Maybe Microsoft will finally rename it, as was already planned for the version that became Direct3D 10.

I agree that the Direct3D core market seems to have fewer problems jumping from one API to another. But until Direct3D 10, those hops were from one version to the next and quite easy. As soon as the jump becomes larger (like the one from 9 to 10), fewer people are willing to move. As the step from 10 to 11 will be a small one again, I expect most people already on 10 will jump again, but others will stay with 9, maybe until Windows XP no longer needs to be supported. The same thing will happen again if another big break occurs in the Direct3D world. If a future version supports “fully” programmable GPUs like Larrabee but doesn’t provide any compatibility for older GPUs, people will move to it very slowly.

With all the regular releases of new, better PC hardware, it always looks like the PC game market moves at a high pace. That may be true for the high-end gamers who update their systems at the same speed, but the broad mass of potential customers doesn’t update that often, so you can’t set your requirements too high. This leads to my conclusion that 5 years is not enough to kick traditional 3D APIs out the window. We will still need them for backward compatibility, even in the games market.

I think you’re right that Direct3D will stick around to provide an abstraction. Just looking at D3D11, I believe a whole new class of algorithms will become possible. The pixel shader being able to perform scattered writes, and the addition of a compute shader, lead me to believe the API should be renamed DirectGPU, as it seems to be moving away from being a graphics API…
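
To make that concrete, here is a rough C++ sketch of dispatching work through the new compute pipeline, going by the publicly previewed D3D11 interfaces (names could still shift before release); the unordered access view is what makes the scattered writes possible:

    #include <d3d11.h>

    // Rough sketch against the previewed D3D11 API: bind a compiled
    // compute shader plus an unordered access view, then dispatch
    // thread groups. Any thread may write any element of the UAV's
    // resource -- the scattered writes mentioned above.
    void RunCompute(ID3D11Device* device, ID3D11DeviceContext* ctx,
                    ID3DBlob* csBytecode, ID3D11UnorderedAccessView* outUAV)
    {
        ID3D11ComputeShader* cs = NULL;
        device->CreateComputeShader(csBytecode->GetBufferPointer(),
                                    csBytecode->GetBufferSize(), NULL, &cs);

        ctx->CSSetShader(cs, NULL, 0);
        ctx->CSSetUnorderedAccessViews(0, 1, &outUAV, NULL);
        ctx->Dispatch(64, 1, 1);  // launch 64 thread groups

        cs->Release();
    }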

It’s unclear how quick the transition will be. An abrupt transition would favor Intel, while a slower one would be better for NVIDIA and ATI. Consoles could dramatically accelerate the transition or stall it for several more years. No way to know how that will go down yet though.

Maybe Intel should take their Larrabee solution to the console market. Of course, they would only be able to sell it if Sony or someone is willing to build another console.

On the PC, it would be harder to convince users to upgrade and game developers to use their SDK to make custom-looking graphics.

http://en.wikipedia.org/wiki/Larrabee_(GPU)

Also, the benchmarks show that it can only just render today’s games at 60 FPS at 1600x1200.
That seems pretty weak if you want to do something like ray tracing.

Rumour is that Intel is trying to do exactly that. They’ve been in talks with Microsoft about putting a 48-core Larrabee processor in the next Xbox.

But Larrabee isn’t the only platform for OpenCL. OpenCL will work on Larrabee, traditional GPUs, and traditional CPUs. That will certainly make the transition less painful.
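
As a sketch of why (assuming an OpenCL implementation is installed): the same host code can ask for a GPU device and quietly fall back to the CPU when none is present, and a Larrabee-class part would just be one more device behind the same interface.

    #include <CL/cl.h>

    // Minimal sketch: prefer a GPU device, fall back to the CPU.
    // Larrabee, a traditional GPU, or a plain CPU would all be
    // reachable through this one code path.
    cl_device_id pick_device(cl_platform_id platform)
    {
        cl_device_id dev;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, NULL)
                != CL_SUCCESS)
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);
        return dev;
    }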

Also, the architecture you need for ray tracing is completely different from the architecture you need for rasterizing. Today’s GPU hardware is great for ray tracing (or at least better than today’s CPUs, with their handful of cores at best), but OpenGL and DirectX aren’t. ATI and id are both working on hybrid engines that use ray casting/voxels for the static geometry and traditional rasterizing for dynamic geometry. Jon Olick from id is claiming that their sparse voxel structure can deliver a generational skip in geometry detail. That level of detail is actually more efficient with ray casting, because in a traditional rasterizer each polygon would be smaller than a pixel.

So, while Larrabee may be slower at traditional rendering, it will be faster at other kinds of rendering. That kind of flexibility could be well worth the drop in traditional benchmarks.

Also note that the demos I linked to aren’t using Larrabee; they’re using ATI and NVIDIA cards. And they’re bypassing OpenGL and DirectX because they need the extra flexibility.
Larrabee is not the only chip that allows you to write your own pipeline. We don’t even need OpenCL to do that – we can do it today with CTM and CUDA – but OpenCL will certainly make the task far more manageable.

V-man, I do recall reading some estimates of game frame rates based on a hypothetical Larrabee part with a certain number of cores and a certain clock rate – with those parameters basically made up to make it interesting. However, I have not yet seen any using production hardware, for obvious reasons – and I have every reason to suspect the values seen there will not be the same as in the hypothetical example.

When you see some real benchmarks on production hardware, could you post the link?

I started a thread where people can post some more details about their OpenGL applications and what needs they would like to see addressed in future spec releases and implementations. It’s here in the Suggestions forum, titled simply “Talk about your applications”:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=246133