I have a PC with 2 GeForce GTX 660 GPUs. I want to use both GPUs to render each frame (similar to the old split-frame-rendering SLI mode).

DirectX lets me use the second GPU via the adapter enumerator, but in OpenGL I cannot find a way to use the second card without SLI. Whatever we try, the second GPU is never used for rendering; we have even run GPU load applications such as FurMark alongside our program to check.
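For reference, this is roughly what I do on the DirectX side (a minimal sketch; the adapter index 1 and the D3D11 device creation are simplified from my real code, which does proper error checking):

```cpp
#include <d3d11.h>
#include <dxgi.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")

int main() {
    // Enumerate adapters; index 1 is the second GTX 660 on my machine.
    IDXGIFactory* factory = nullptr;
    CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);

    IDXGIAdapter* adapter = nullptr;
    if (factory->EnumAdapters(1, &adapter) == DXGI_ERROR_NOT_FOUND)
        return 1;  // only one adapter visible

    // Creating the device on that adapter makes the second GPU do the work.
    // DriverType must be UNKNOWN when an explicit adapter is passed.
    ID3D11Device* device = nullptr;
    D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, nullptr, nullptr);
    return 0;
}
```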

We have also tried the monitor enumeration functions in the Windows API and forced the context to be created on the second monitor, which is connected to the second GPU. The rendering is still done on the first GPU.
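This is approximately what we tried (a sketch; `\\.\DISPLAY2` is whichever display name `EnumDisplayDevices` reports for the second card on our machine):

```cpp
#include <windows.h>
#pragma comment(lib, "opengl32.lib")

int main() {
    // Desktop position of the display we believe is driven by the second GPU.
    DEVMODE dm = {};
    dm.dmSize = sizeof(dm);
    EnumDisplaySettings(TEXT("\\\\.\\DISPLAY2"), ENUM_CURRENT_SETTINGS, &dm);

    WNDCLASS wc = {};
    wc.lpfnWndProc   = DefWindowProc;
    wc.hInstance     = GetModuleHandle(nullptr);
    wc.lpszClassName = TEXT("GLOnSecondGpu");
    RegisterClass(&wc);

    // Create the window at that display's desktop coordinates.
    HWND hwnd = CreateWindow(wc.lpszClassName, TEXT("GL test"),
                             WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                             dm.dmPosition.x, dm.dmPosition.y, 800, 600,
                             nullptr, nullptr, wc.hInstance, nullptr);

    HDC hdc = GetDC(hwnd);
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA, 32 };
    SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

    // glGetString(GL_RENDERER) after this still reports the first GTX 660.
    HGLRC ctx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, ctx);
    return 0;
}
```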

NVIDIA's slides suggest using the GPU affinity extension (WGL_NV_gpu_affinity), but that functionality is only exposed on Quadro cards.
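For completeness, this is the affinity path from the slides as I understand it (a sketch; on our GeForce cards the entry points are simply not exposed, so `wglGetProcAddress` returns null):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"  // Khronos header: defines HGPUNV and the PFN typedefs
#pragma comment(lib, "opengl32.lib")

// Assumes a dummy OpenGL context is already current; otherwise
// wglGetProcAddress returns nullptr for everything.
HDC createAffinityDCForSecondGpu() {
    auto enumGpus = (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    auto createDC = (PFNWGLCREATEAFFINITYDCNVPROC)
        wglGetProcAddress("wglCreateAffinityDCNV");
    if (!enumGpus || !createDC)
        return nullptr;  // extension not exposed -- what we see on GeForce

    HGPUNV gpu = nullptr;
    if (!enumGpus(1, &gpu))  // GPU index 1 = the second card
        return nullptr;

    HGPUNV gpuList[] = { gpu, nullptr };  // null-terminated list
    // The DC returned here is tied to the chosen GPU; set a pixel format
    // on it and pass it to wglCreateContext as usual.
    return createDC(gpuList);
}
```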

How can I use the second card for rendering in OpenGL?