Explicitly using multiple GPUs.

Currently, the major graphics manufacturers provide multi-GPU solutions (two-GPU solutions from both ATI and NVIDIA, and the four-GPU Quad SLI solution from NVIDIA). These come with a set of predefined rendering modes (e.g. split-frame/scissor, checkerboard, antialiasing, alternate-frame), and the individual GPUs cannot be used explicitly.

Explicit use of multiple GPUs may be required for GPGPU applications that perform multi-pass computations and use the depth (or stencil) buffer to conditionally cull some of the work. GPGPU computations may involve accessing render-target textures (i.e. textures that are rendered into) with random, non-interpolated texture coordinates, so only the application knows how a texture should be distributed between GPUs (possibly with some regions duplicated on more than one card) to avoid unnecessary data transfers between the cards.
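For example, the culling in such a pass is typically done with the stencil test. A minimal sketch of a stencil-culled GPGPU pass, assuming a current context, a bound FBO with a stencil attachment, and a hypothetical drawFullscreenQuad() helper:

    /* Run the computation only where the stencil mask equals 1. */
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_EQUAL, 1, 0xFF);       /* pass only where stencil == 1 */
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); /* leave the mask untouched */
    drawFullscreenQuad();                   /* hypothetical: issues the compute pass */
    glDisable(GL_STENCIL_TEST);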

Another case where explicit use of multiple GPUs may be handy is occlusion queries. If occlusion queries are present, alternate-frame mode makes little sense, as occlusion queries usually require, logically (and sometimes explicitly, with a glFinish() command), that all previous rendering be finished. With explicit use of multiple GPUs, however, it may be possible to start two CPU threads, each of which drives a separate GPU. Alternate frames would then be rendered on alternate CPU threads (and alternate GPUs), increasing GPU utilization.
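A minimal sketch of that two-thread pattern on Windows, assuming two windows hwndA and hwndB already exist with pixel formats set, and leaving GPU assignment to the driver (drawFrame() is a hypothetical helper):

    #include <windows.h>
    #include <GL/gl.h>

    /* Each thread owns one window, one context and (ideally) one GPU. */
    DWORD WINAPI renderThread(LPVOID param)
    {
        HWND  hwnd = (HWND)param;
        HDC   dc   = GetDC(hwnd);
        HGLRC rc   = wglCreateContext(dc);  /* pixel format assumed already set */
        wglMakeCurrent(dc, rc);
        for (;;) {
            drawFrame();                    /* hypothetical: this thread's share of frames */
            SwapBuffers(dc);
        }
    }

    /* From the main thread: one render thread per window. */
    CreateThread(NULL, 0, renderThread, (LPVOID)hwndA, 0, NULL);
    CreateThread(NULL, 0, renderThread, (LPVOID)hwndB, 0, NULL);

Whether the two contexts actually land on different GPUs is up to the driver (or to a vendor affinity mechanism, where one exists).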

“Explicit use of multiple GPUs may be required for GPGPU applications”
You can stop right there.

When you’re talking about GPGPU, you’re talking about using a graphics chip for non-graphics tasks. More power to you in that department, but I don’t want the OpenGL API modified just to provide a feature for using a piece of hardware for a purpose it was never intended for.

You can do everything you propose today, just by creating two contexts.

“You can do everything you propose today, just by creating two contexts.”

Depends on how SLI works. If you create two contexts, each attached to a window, and render into each window, will two GPUs be used for one window and the other two GPUs for the other?

Extensions could be created for explicit control.

“You can do everything you propose today, just by creating two contexts.”
What about sharing textures between two different contexts? Can two contexts access the same texture (without memory and other overhead, except when the entire texture really is needed on both cards)? And what about broadcasting rendered texture data from one context to both of them? Can it be done via a link between the GPUs, or must it be done by a readback-and-write-back operation?

You can just share textures between contexts. How exactly they get transferred to the other GPU is an implementation detail; the driver should know best how to do it. Such things should not be exposed by the API.
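A minimal sketch of that, assuming two device contexts dcA and dcB (one per window, pixel formats already set); wglShareLists should be called before the second context creates any objects:

    HGLRC rcA = wglCreateContext(dcA);
    HGLRC rcB = wglCreateContext(dcB);
    /* Share texture names (and other objects) from rcA into rcB.
       Must happen before rcB owns any objects, or the call can fail. */
    BOOL ok = wglShareLists(rcA, rcB);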

2 GPUs (non-SLI):

Create two contexts and attach each to its corresponding window. You can share any “objects” (e.g. texture, depth-stencil, FBO, PBO), but not the actual data (texture contents, depth-stencil contents, etc.).

If you have one big texture that you want to use on both GPUs, you have to load it onto both of them, but you can share the texture object (i.e. the texture will have the same name on both GPUs).
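A sketch of the pattern described above, assuming the two contexts are shared as in the previous snippet and that width, height and pixels are already available:

    GLuint tex;

    wglMakeCurrent(dcA, rcA);
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    wglMakeCurrent(dcB, rcB);
    glBindTexture(GL_TEXTURE_2D, tex);  /* the shared name is valid here too */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

Whether the second upload is actually required is driver-specific; see the note below.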

Note that you may have to upload the same resource to both GPUs if you’re taking the two-context approach, but that’s what the driver is doing anyway. One GPU can’t sample from a texture in the other GPU’s VRAM, with or without SLI/Crossfire.

wglShareLists anyone? :wink: