DirectX 11 preview available, impact on OpenGL?

Since the DirectX November 2008 SDK contains a preview of DirectX 11, I was wondering how those hardware features will be exposed in OpenGL, and whether there’s a coarse timeline (as in late next year, or something like that).

http://www.microsoft.com/downloads/detai…&displaylang=en

OpenCL will be the compute shader equivalent (well, minus the scattered writes in the pixel shader, I suppose; perhaps that becomes an extension?)

I’m willing to bet that NVIDIA will be good about putting out an extension for the tessellation feature.

Isn’t the tessellator the same mechanism ATI implemented years ago as TruForm and then abandoned because it doesn’t work well with normal and parallax mapping?

http://en.wikipedia.org/wiki/TruForm

If it is, I’d say DX11 has close to 0 impact on anything, not only OpenGL :wink:

Nope, the D3D11 tessellator is more general than that one - check the Gamefest slides if you’re interested. :slight_smile:

Maybe we’ll get to use it first in OpenGL with an extension. :smiley:

With the first I disagree, and the second would just be sad :wink:

But seriously, how would this be exposed in OpenGL, assuming that OpenCL will be there for the compute stuff?

My guess is that we’ll see the addition of similar GLSL stages to track the tessellation feature of SM5.
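Pure speculation, but the plumbing could reuse the existing shader-object API with two new stage enums bracketing the fixed-function tessellator, analogous to D3D11’s hull and domain shaders. The token names and values below are invented for illustration, as are `hullSrc`, `domainSrc` and `prog`:

```cpp
// Speculative sketch only -- these two tokens do not exist in any GL header;
// names and values are made up here. Assumes an extension loader (e.g. GLEW)
// provides the GL 2.x shader-object entry points.
#define GL_TESS_CONTROL_SHADER_HYPOTHETICAL    0x9100 // made-up value
#define GL_TESS_EVALUATION_SHADER_HYPOTHETICAL 0x9101 // made-up value

GLuint hull   = glCreateShader(GL_TESS_CONTROL_SHADER_HYPOTHETICAL);
GLuint domain = glCreateShader(GL_TESS_EVALUATION_SHADER_HYPOTHETICAL);
glShaderSource(hull, 1, &hullSrc, NULL);
glCompileShader(hull);
glShaderSource(domain, 1, &domainSrc, NULL);
glCompileShader(domain);

// Everything else would presumably follow the existing program-object model.
glAttachShader(prog, hull);
glAttachShader(prog, domain);
glLinkProgram(prog);
```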

Compute on the other hand is harder to figure, being the undiscovered country that it is. The OpenCL spec is due to be released during SIGGRAPH Asia 2008, so hopefully we’ll have an implementation to play with soon.

Compute on the other hand is harder to figure, being the undiscovered country that it is.

Compute stuff interacts with a graphics API only in terms of where it writes its data. We have already had confirmation that OpenCL will be allowed to write to OpenGL buffer objects, textures, and renderbuffers. That’s really all you need.
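Assuming the interop entry points ship roughly as drafted, the round trip could look something like this (`context`, `queue` and `kernel` come from a GL-sharing-capable CL setup, and `vbo`/`vertexCount` describe an existing GL buffer):

```cpp
#include <GL/gl.h>
#include <CL/cl.h>
#include <CL/cl_gl.h>  // GL-sharing entry points

// Wrap an existing GL buffer object as a CL memory object.
cl_int err;
cl_mem clBuf = clCreateFromGLBuffer(context, CL_MEM_WRITE_ONLY, vbo, &err);

glFinish();  // make sure GL is done with the buffer before CL touches it
clEnqueueAcquireGLObjects(queue, 1, &clBuf, 0, NULL, NULL);

// The kernel scatters its results straight into the VBO.
clSetKernelArg(kernel, 0, sizeof(cl_mem), &clBuf);
size_t global = vertexCount;
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);

clEnqueueReleaseGLObjects(queue, 1, &clBuf, 0, NULL, NULL);
clFinish(queue);  // hand the buffer back before GL draws from it
```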

True enough. Compute in both APIs looks to be more or less orthogonal to their respective shading languages. The ability to share data with the graphics pipeline seems to be the only requirement.

DirectX 10/11 doesn’t matter. No one is going to develop a game that only runs on Windows Vista. MS themselves are already working on a new OS because Vista failed. DirectX 9 is the most viable API right now, and is probably going to be for another 2 years.

In any case, I don’t think tessellation will make a big impact. When they start showing you close-ups of character faces, you know they don’t have anything that will even be noticeable in an actual game.

I just heard about D3D10 Level 9 (the D3D10 API running on DX9 hardware), which will be included in D3D11. So MS decided they needed to make their latest API available on older hardware too. Maybe they will even port it to XP. I wouldn’t bet on it, but it’s possible D3D10 and up will become a more viable option in the near future.

Since the release of OpenGL 3, not a single day goes by that I don’t curse the fact that I’m forced to develop for Linux at the moment. Just yesterday I changed one part of a program to improve graphics quality and broke several other parts due to state trashing. This API is simply a pain in the ass.

Jan.

IMO, as for tessellation: if this is a fast path in DX11 hardware, i.e. the data routed through hull -> tessellator -> domain stays in internal queues instead of touching main memory, then if you don’t use it you are likely screwed (those who choose to make use of the hardware will have a tremendous advantage). Next-generation consoles are most likely going to be DX11-or-better hardware, so the DX11 feature set will likely become standard and be used by all major game developers. Tessellation seems to provide a huge reduction in memory per level of quality, and possibly a huge performance improvement per level of quality as well. The tools to migrate from triangle-based to patch-based geometry are well established and widely used. I can also see tessellation helping to solve the streaming-geometry problem, in that you can control geometry quality via texture streaming of displacement maps.
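For reference, here’s roughly how that path gets wired up on the D3D11 side (a minimal sketch against the beta API; `device`, `ctx`, the compiled `hsBlob`/`dsBlob` bytecode and `patchCount` are assumed to exist):

```cpp
#include <d3d11.h>

ID3D11HullShader*   hs = NULL;
ID3D11DomainShader* ds = NULL;
device->CreateHullShader(hsBlob->GetBufferPointer(), hsBlob->GetBufferSize(), NULL, &hs);
device->CreateDomainShader(dsBlob->GetBufferPointer(), dsBlob->GetBufferSize(), NULL, &ds);

// The input assembler now feeds control-point patches instead of triangles;
// the fixed-function tessellator sits between the hull and domain stages --
// ideally staying in internal queues, which is the fast path speculated above.
ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
ctx->HSSetShader(hs, NULL, 0);
ctx->DSSetShader(ds, NULL, 0);
ctx->Draw(3 * patchCount, 0);  // only the coarse control mesh is submitted
```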

So I’m placing my bet that DX11 rendering features (like tessellation) end up as GL vendor extensions that are later ratified by the ARB.

MS themselves are already working on a new OS because Vista failed.

Microsoft is always working on a new OS. And that OS will be built on Vista, and it will have the same DirectX incompatibilities as Vista.

While I agree that most game developers will stay with Direct3D 9, more and more are adding an additional Direct3D 10 path. There are even some titles in development that will require Direct3D 10.

Aero II will use a newer Direct3D version than Aero I. As Microsoft doesn’t want to cut out older hardware, 10 Level 9 was needed. AFAIK the initial plan wasn’t to make it public, but with the upcoming Direct2D they thought it might be useful for other developers, too. Anyway, on the technical side, 10 Level 9 is a bridge driver that translates Direct3D 10 WDDM-level calls to Direct3D 9 WDDM-level calls. Therefore this cannot work on Windows XP. The only thing that could work on XP is a modified runtime plus the new Direct3D 10.1-level software-only rasterizer. But AFAIK there are currently no real plans to do that either.

As you might have heard, D3D11 supports D3D9 and D3D10 hardware through so-called “feature levels”.

I’ve just tried the D3D11 beta on D3D9-level hardware and successfully created a D3D11 HAL device with D3D_FEATURE_LEVEL_9_3. Developers can now take advantage of this new API while preserving low hardware requirements. This is probably the best thing I’ve heard about in a while. There is also an optimized software rasterizer called WARP which is capable of running D3D_FEATURE_LEVEL_10_1 and can actually render simple scenes at reasonable framerates (they even tried Crysis on it, see here). Only the ridiculously slow reference rasterizer supports D3D_FEATURE_LEVEL_11_0 on my system.
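In code, the negotiation looks like this (sketched against the beta headers, so details may shift before release; the runtime walks the list and returns the highest level the adapter supports):

```cpp
#include <d3d11.h>

const D3D_FEATURE_LEVEL requested[] = {
    D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3,
};
D3D_FEATURE_LEVEL    got;
ID3D11Device*        device = NULL;
ID3D11DeviceContext* ctx    = NULL;

HRESULT hr = D3D11CreateDevice(
    NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
    requested, sizeof(requested) / sizeof(requested[0]),
    D3D11_SDK_VERSION, &device, &got, &ctx);

if (SUCCEEDED(hr)) {
    // On a D3D9-class card this yields got == D3D_FEATURE_LEVEL_9_3;
    // D3D_DRIVER_TYPE_WARP instead selects the 10.1-capable software
    // rasterizer, and only D3D_DRIVER_TYPE_REFERENCE reaches 11_0 here.
}
```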

That’s a useful thing for D3D certainly, but only because each D3D version keeps changing the API in non-backwards compatible ways. It’s not useful for OpenGL.

Korval, can you clarify what you meant there?

With 3.0, OpenGL is introducing context version info as part of the create-context call, explicitly so that the API can be streamlined in future releases.

While there is a strong desire to avoid “change for change’s sake” amongst the Khronos working group members, it’s inevitable that changes in the API will appear in order to address long-standing limitations - those changes will be tied to the context version.

Whereas in the past, no version of OpenGL could ever remove anything, because there was no way for an application to specify its intent.
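Concretely, the new creation path (WGL_ARB_create_context on Windows; GLX has an equivalent) looks like this, assuming `hDC` exists and the entry point has already been fetched via wglGetProcAddress:

```cpp
// Minimal sketch: assumes wglCreateContextAttribsARB was obtained from a
// temporary legacy context via wglGetProcAddress.
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    // Opt out of everything deprecated in 3.0:
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    0  // terminator
};
HGLRC rc = wglCreateContextAttribsARB(hDC, NULL, attribs);
// Because the app has now declared the version it targets, a future GL
// revision can remove or change entry points without breaking old binaries.
```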

The idea with the D3D feature is that you can write DX11 code and it will automatically be translated for DX10 or 9 hardware. So any API cleanup you get in DX11 is propagated to lesser hardware.

The OpenGL 3.0 context creation is quite the opposite of that. Notably, if you want to use GL 3.0 API cleanup features, you have to use either a real GL 3.0 context, or hope that it gets exposed as an extension for lesser hardware. Nothing in the actual context creation model ensures the latter, unlike the DX11 model.

OK, I see what you are saying: in DX11 there is some set of APIs that don’t translate into specific hardware capabilities but are considered generally useful even on older hardware. It will be interesting to see how that plays out.

My expectation for GL 3.0 and onward is that it will stick with the SM4 hardware floor, and if new features come along that are not tied to hardware or could be expressed by extension in either the 2.x or 3.x worlds, they can be written to become available there.

If you have any specific examples of API improvements that would help you in your OpenGL apps, please be sure to verbalize them over on the “talk about your applications” thread.

Rob, what if anything at this point can be said about SM5 development? Will these new features (too numerous to list here) be tacked on to later 3.x point releases or the next major version? Any timeline or roadmap at all in this regard? I’m sure even off the record speculation in this area would be inspirational to a lot of folks around here.

Btw, thanks for taking an interest. Little guys like me wouldn’t really have a voice if it weren’t for forums like this one.

One thing that was interesting about NVIDIA’s G80/8800 introduction (late 2006, IIRC) was that they had a pretty complete set of vendor extensions for SM4 capability available at launch under GL 2.x. Over time, a set of those found their way into the OpenGL 3.0 core spec along with some other new things.

If a comparable situation were to recur with SM5, my hope would be that the time from extension availability to core integration would be reduced (a lot). That said, if something goes into the core spec that only SM5 hardware can provide, then we’ve raised the hardware floor for that version of OpenGL, which may be undesirable.

A middle ground would be to roll up a set of key SM5 capabilities into a single uber extension and to advocate for that extension’s implementation across IHVs, eliminating the à-la-carte wiggle room that we get when there is one extension per individual feature. That path would let you have a GL 3.0 or 3.1 spec that addresses SM4 hardware, and an uber-ext that would encapsulate SM5 stuff.

Do you have a top-5 list of SM5 capabilities that are important to you?