GeForce second GPU usage

I have a PC with two GeForce GTX 660 GPUs. I want to use both GPUs to render each frame, similar to the old SFR (split frame rendering) SLI mode.

DirectX allows me to use the second GPU via its adapter enumerator, but in OpenGL I cannot find how to use the second card without SLI. No matter what we do, the second GPU is never used for rendering. We have also tried running several GPU load applications such as FurMark.
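For comparison, the DirectX path I mean is something like this (a minimal D3D11 sketch; the D3D version and adapter index 1 are assumptions about the setup, and error handling is omitted). The device created on adapter 1 does all its work on the second GPU:

```cpp
#include <dxgi.h>
#include <d3d11.h>
#include <cwchar>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d11.lib")

int main()
{
    IDXGIFactory* factory = nullptr;
    CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);

    IDXGIAdapter* adapter = nullptr;
    // Adapter 1 is the second GPU in enumeration order.
    if (factory->EnumAdapters(1, &adapter) == S_OK)
    {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        wprintf(L"Creating device on: %s\n", desc.Description);

        ID3D11Device* device = nullptr;
        // Passing an explicit adapter requires D3D_DRIVER_TYPE_UNKNOWN.
        D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                          nullptr, 0, D3D11_SDK_VERSION,
                          &device, nullptr, nullptr);
        // ... all rendering through 'device' runs on the second GPU ...
        if (device) device->Release();
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```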

We have also tried the monitor enumeration function in the Windows API (EnumDisplayMonitors) and forced the context to be created on the second monitor, which is connected to the second GPU. Yet the rendering is still done on the first GPU.
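That attempt looked roughly like this (a sketch, not our exact code): enumerate the monitors, pick the second one, and create the GL window inside its rectangle. On GeForce this does not steer rendering to the second GPU:

```cpp
#include <windows.h>
#include <vector>

// Callback for EnumDisplayMonitors: collect every monitor handle.
static BOOL CALLBACK collect(HMONITOR mon, HDC, LPRECT, LPARAM user)
{
    reinterpret_cast<std::vector<HMONITOR>*>(user)->push_back(mon);
    return TRUE;  // keep enumerating
}

RECT secondMonitorRect()
{
    std::vector<HMONITOR> monitors;
    EnumDisplayMonitors(nullptr, nullptr, collect,
                        reinterpret_cast<LPARAM>(&monitors));

    // Fall back to the first monitor if there is only one.
    MONITORINFO mi = { sizeof(mi) };
    GetMonitorInfo(monitors.size() > 1 ? monitors[1] : monitors[0], &mi);
    return mi.rcMonitor;  // the GL window is then created inside this rect
}
```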

NVIDIA slides suggest using the GPU affinity extension (WGL_NV_gpu_affinity), but this functionality is only available on Quadro graphics cards.
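For completeness, a sketch of what that affinity path looks like. It assumes a current dummy GL context so that wglGetProcAddress works; on GeForce the function pointers simply come back null because the driver does not expose the extension:

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // HGPUNV and the PFNWGL* typedefs

HGLRC createContextOnSecondGpu()
{
    // Requires a current (dummy) GL context so wglGetProcAddress works.
    auto wglEnumGpusNV = (PFNWGLENUMGPUSNVPROC)
        wglGetProcAddress("wglEnumGpusNV");
    auto wglCreateAffinityDCNV = (PFNWGLCREATEAFFINITYDCNVPROC)
        wglGetProcAddress("wglCreateAffinityDCNV");
    if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
        return nullptr;  // extension absent -- the GeForce case

    HGPUNV gpu = nullptr;
    if (!wglEnumGpusNV(1, &gpu))   // index 1 = second GPU
        return nullptr;

    HGPUNV gpuList[] = { gpu, nullptr };        // null-terminated list
    HDC affinityDC = wglCreateAffinityDCNV(gpuList);

    // The affinity DC still needs a pixel format before context creation.
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1 };
    pfd.dwFlags = PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(affinityDC, ChoosePixelFormat(affinityDC, &pfd), &pfd);

    // All rendering with this context is confined to the chosen GPU
    // (render to FBOs; an affinity context has no visible framebuffer).
    return wglCreateContext(affinityDC);
}
```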

How can I use the second card for rendering in OpenGL?

I don’t believe you can use two NVIDIA cards without SLI or having a Quadro card in the mix.

On Linux you can easily. I guess this is just a limitation on the Windows side.

How can we use the second GPU on Linux? Does the NVIDIA driver distribute the jobs regardless of our context, or do we have to do something special when creating the context to be able to use the second GPU?

The latter (since you’re presuming no SLI). You designate which GPU you’re talking to.

Thanks for your replies. I found that via XOpenDisplay we can select the GPU we want on Linux. On Windows, however, EnumDisplayMonitors doesn’t do the same thing. The only way on Windows seems to be GPU affinity, but that is only available on Quadro cards, not GeForce.
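For anyone finding this later, a minimal GLX sketch of the Linux approach. The display string ":0.1" is an assumption: it presumes an xorg.conf that defines a separate X screen per GPU, with screen 1 driven by the second card:

```cpp
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <cstdio>

int main()
{
    // ":0.1" = display 0, screen 1, i.e. the X screen on the second GPU.
    Display* dpy = XOpenDisplay(":0.1");
    if (!dpy) { std::fprintf(stderr, "cannot open display :0.1\n"); return 1; }

    int screen = DefaultScreen(dpy);  // screen 1, from the display string
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo* vi = glXChooseVisual(dpy, screen, attribs);
    if (!vi) { std::fprintf(stderr, "no suitable visual\n"); return 1; }

    // Create the window with the GLX visual's colormap to avoid BadMatch.
    XSetWindowAttributes swa = {};
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, screen),
                                   vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, RootWindow(dpy, screen), 0, 0, 640, 480,
                               0, vi->depth, InputOutput, vi->visual,
                               CWColormap, &swa);
    XMapWindow(dpy, win);

    // Everything rendered with this context runs on the GPU behind screen 1.
    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
    glXMakeCurrent(dpy, win, ctx);

    // ... render here ...

    glXMakeCurrent(dpy, None, nullptr);
    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}
```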
