Does the current GL 4.1 core beat Direct3D 11 capabilities?
I’m asking this because I noticed that some recent video cards support D3D 11 but only OpenGL 3.2, while the same vendor already has GL 4.1 drivers for more advanced cards. So I figured that GL 4.1 requires more capable hardware than D3D 11 does.
There are already some OpenGL 4.1 features that D3D 11 does not support, but I also agree with Groovounet that D3D 11 still has some capabilities not available in OpenGL 4.1. That will change next year, though.
Once the extensions currently in development are included in OpenGL 4.2, I think we can say that OpenGL is a superset of D3D (even though D3D’s multithreading capabilities won’t be included in OpenGL; however, that is more an API feature than a feature of the underlying hardware architecture).
Still, as I understand your question, you are much more interested in whether they target the same hardware generation. From that point of view, you can take OpenGL 4.1 as equivalent to D3D 11.
No, D3D 11 hardware is capable of GL 4.1 if the driver also supports GL 4.1. Currently, AFAIK, both NVIDIA and ATI have GL 4.0 support in their release drivers and at least some GL 4.1 support in their beta drivers.
Multi-threading support in GL currently works by giving every thread its own GL context, and yes, that has existed in GL for a long time. D3D only added such support in its latest release, and its version is now somewhat superior to what GL offers.
It is possible to read from the current framebuffer inside a shader if you attach the framebuffer texture for reading. Of course, hardware imposes some limitations on this. For more details, check these extensions:
So this means D3D 11 hardware is capable of GL 4.1 core features regardless of driver support?
What do you mean by that? OpenGL 4.1 specifies features that are in D3D 11, so any D3D 11 hardware would be capable of doing GL 4.1. But without driver support “capable” doesn’t mean anything.
Talking about multi-threading support, can’t GL contexts be threaded anyway? I thought that has existed in GL for a long time.
That’s not the kind of multithreading they’re talking about.
In D3D 11, you can create “deferred” contexts. Rendering commands issued on a deferred context are not executed immediately; instead, they wait until the immediate (non-deferred) context is explicitly told to execute them. This allows multiple threads in the user application to be used for rendering purposes.
You can’t imitate this in OpenGL (yet), not even with display lists, since display lists use state from the current context while rendering, whereas each D3D deferred context has its own state.
It shouldn’t be too difficult to add, though. There just needs to be a parameter added to glX/wglCreateContextAttribsARB that creates a deferred context (you would need to pass a specific share context so that the contexts share objects). Then there needs to be a glX/wgl function that executes a deferred context’s recorded commands.
The difficult part revolves around issues of multithreaded access to GL objects. What happens if objects created in one context are modified or deleted in another? D3D doesn’t really have that problem, since its objects are, for the most part, immutable.
OpenGL display lists are pretty much what D3D 11 deferred contexts are, but for antiquated versions of the API. Unfortunately, they were deprecated instead of being improved to fit into the new core specification.
I could partially emulate a D3D deferred context by giving every GL context its own display list and executing them in different threads.
OpenGL display lists are pretty much what D3D 11 deferred contexts are, but for antiquated versions of the API.
No, they aren’t. D3D deferred contexts have different state from one another. The functioning of a deferred context cannot be affected (outside of changing the objects in use) by the functioning of another deferred context. You cannot make a deferred context render to a different framebuffer or whatever from outside that context.
You very much can with a display list. A display list contains only the commands it was compiled with; it does not capture the state that was current at compile time. Because of that, the results of executing a display list can depend on state that was set after the display list was compiled.
And yet at the same time, display lists cannot be affected by changes to objects that they use. Binds are not compiled; they are executed immediately. So all object contents must be read when the display list is compiled. The contents of buffer objects and textures have to be stored in the list separately from the original objects used in building the DL.
That means you cannot change the contents of a buffer object and expect the DL to use the new contents. You cannot have one DL do a transform feedback into a buffer, then have another DL read from that buffer for some operation.
Display lists are like deferred contexts in only the most superficial way. The differences are quite substantial.
I could partially emulate a D3D deferred context by giving every GL context its own display list and executing them in different threads.
Partial emulation means nothing. The differences remain. Plus, it would be incredibly slow, since compiling a DL is not exactly a fast operation.
No. We already have multiple context support in OpenGL. And you have to make multiple contexts in order to issue commands in different threads anyway.
All you need is a way to create a deferred context and a way to have the main context execute what was recorded in the deferred one. It’s really that simple.
Re-establishing something like display lists would be entirely unnecessary. Just let them go; they weren’t a good idea, and it’s best to leave them gone.
Multiple contexts are not black magic. You just use the context creation function again. The new context creation function even takes a context to share objects with. This is a first-class feature of the API.
Display lists are not as well understood as multiple rendering contexts.
Using it is about OK; using it efficiently… challenging!
Using it efficiently to do what?
We are talking about co-opting multiple contexts for a different purpose. This would not involve things like multithreaded texture upload and the like. This is all about building self-contained state (as opposed to display lists, whose results can be altered by outside state) and rendering commands, and executing them later.
There’s no question of locks, multithreading, and the like, as there is with multithreaded uploads and such.
Can you provide a guideline? I mean a guideline that nobody would complain about. Definitely challenging!
What do you need guidance about? You do it like you do in D3D 11. A D3D deferred context is almost exactly like an OpenGL rendering context. The only differences are the ones I’m proposing be handled by the context creation flags (namely, the delayed execution).
The only things that could possibly get in the way are issues relating to mutable state objects.
We are talking about co-opting multiple contexts for a different purpose.
Yes. Because OpenGL rendering contexts are almost identical to D3D 11 deferred contexts. Unlike display lists, which are almost exactly the opposite.
If you’re looking to port D3D 11 functionality, you should look for the most similar GL construct and modify that, rather than looking for the least similar construct that happens to sound something like the original.
The correct way to port this functionality is to use rendering contexts, not display lists.