At some point this nonsense should just stop. Introducing a complete DSA API is the perfect opportunity to realize what was originally planned 4 years ago. They're already maintaining two specs anyway.
Hey, speak for yourself! I second the rest of what you said, but let's avoid the grandstanding too. Each of us represents our own suggestions and opinions -- no others.
Over the years, I've come to appreciate the difficult job the ARB has. They're never going to please everyone completely (and they're limited by corporate budgets and priorities), so if anyone out there's holding their breath for complete nirvana, they might as well sit down before they pass out. Just throw your opinion out there for consideration, and don't get your panties all in a wad if you don't get everything you want. You're not the only fish in the pool.
> Can we please have multi-threading in OpenGL?

A GPU is inherently multi-threaded - following AMD's marketing it's even "ultra-threaded". The problem is the single command buffer you can fill. Multi-threaded OpenGL can be done using two or more contexts in different threads. However, if you have a single command buffer and you switch threads, all you're doing is appending GL commands sequentially to that one command buffer from whichever thread is currently pushing. Only with multiple command buffers, as with multiple GPUs, can you have true multi-threading.
> Can we please have multi-threading in OpenGL?

You can - just create a separate GL context for each thread, all belonging to the same share group. You can then have some threads create objects, fill buffers, etc., while another does the rendering. This is precisely what GL_ARB_sync is for.
I don't understand why everybody is asking for multi-threading in OpenGL just because D3D has it. FYI: OpenGL has always had multi-threading support.
D3D's multi-threading approach is not much different. Only deferred contexts are something that OpenGL doesn't have, though they're somewhat similar to display lists. Also, deferred contexts promise more than they actually deliver: in practice they barely give any benefit.
Neither of those cases is very interesting, since the most common hardware setup is multiple CPU cores and a single GPU. Direct3D solves this by having multiple command lists for a single GPU. Isn't that feasible for OpenGL as well?
It doesn't matter whether it's similar, because core-profile OpenGL no longer has display lists.
> Direct3D solves this by having multiple command lists for a single GPU.

And does the hardware actually map that to multiple parallel command streams as well? Multiple streams in software don't give you much gain if you are still forced to be sequential in hardware.
A D3D deferred context is almost exactly as described - record API calls on a separate thread, then play them back on the main thread. There's a very obvious case it targets: where the cost of making those API calls sequentially on the main thread outweighs the overhead of threading and of making two passes over each command (once to record and once to play back). It shouldn't be viewed as an "implement this and you'll be teh awesome" feature; rather, it needs careful profiling of your program and informed decision making. Implement it in a program that doesn't have the performance characteristics it targets and things will get worse.