Selecting Video Card
05-31-2012, 10:05 AM
How to tell OpenGL which video card to use in a multi-GPU system?
05-31-2012, 11:51 AM
That depends on your window toolkit and OS, as OpenGL does not specify how the context gets created. On Linux, for example, you can run one instance of Xorg per GPU and use the $DISPLAY variable to control which GPU to use (with some restrictions).
05-31-2012, 12:26 PM
It will use the "primary" card, I believe - meaning whichever card has the monitor attached.
05-31-2012, 01:27 PM
One application I can think of is dispatching commands to the appropriate card. For instance, let's say I have two cards installed, one NVIDIA and the other AMD. Is there a way I can virtualize both cards to OpenGL so that I can make use of the union of all extensions supported by both cards? In this case OpenGL would see only one card with the combined capabilities. Another way is switching cards at run-time: at a certain rendering stage I switch to one card to do some work the other cannot do, and vice versa. If it's not yet in current OpenGL, then this would be a nice feature to have.
05-31-2012, 02:15 PM
Again, GPU selection is not an OpenGL feature but part of the window toolkit (or whatever creates your OpenGL context).
mark ds is right: you will need to have a monitor attached. On Linux you can attach one monitor per graphics card, start two X servers, and then select the GPU to run on (via the DISPLAY variable, which by default points to the desktop you start your app on). Not sure how that works on other OSes.
But: your app will always run on only one card (to be precise: each OpenGL context is bound to one GPU); you won't get a 'magical union card'. I'm not sure how well two different cards work in one PC, as I have only experimented with two NVIDIA cards in one Linux machine (with two X servers, starting the apps via ssh and selecting a GPU).
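The two-X-server approach described above can be sketched as a shell fragment (the app name ./myapp and the display numbers are hypothetical, and this assumes two X servers are already running, one per GPU):

```shell
# :0 is driven by the first card, :1 by the second
# (as configured in each X server's xorg.conf).
# Launch one instance of the (hypothetical) app per GPU:
DISPLAY=:0 ./myapp &
DISPLAY=:1 ./myapp &
```

Each instance gets its own OpenGL context bound to the GPU that drives its X server, which matches the "one context per GPU" restriction mentioned above.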
05-31-2012, 02:50 PM
OK, think about it as an application running on a multi-core CPU. The application does not need to explicitly name the cores, but it can utilize all of them by means of threading. You specify a thread, but you don't have to specify the core. This way the application sees one more capable processor. But I think I see the problem: the feature is in the hardware but not yet in software. Otherwise, how would OpenCL function? :)
This could be achieved by means of virtualization, either at the OS level or in the OpenGL driver.
05-31-2012, 02:59 PM
Oh, I think it could be possible to write drivers and an API that let you explicitly select the GPU and even move work from one to the other (though that would be more expensive than moving CPU threads around, as GPUs have tons of local memory the app might want to access). But the bottom line is: there is no such API for OpenGL (at least I'm not aware of such a flexible, let alone transparent, solution), and it wouldn't be part of the OpenGL spec itself, so it's not a suggestion for a newer OpenGL release.
There is quite some misinformation in this thread. It depends heavily on your OS and, in the case of Windows, on your driver vendor. I wrote all of this up years ago: http://www.equalizergraphics.com/documentation/parallelOpenGLFAQ.html#multigpu
GL contexts definitely do not behave like CPU threads. There are APIs to select GPUs. Only OS X can move a context from one GPU to another. You don't need a monitor attached in all cases. And OpenGL does not unify the feature set of different cards (with the exception of OS X).
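One example of the GPU-selection APIs mentioned above is NVIDIA's WGL_NV_gpu_affinity extension on Windows (Quadro drivers only). A minimal sketch, assuming those drivers are present; the helper name create_context_on_gpu is hypothetical, and a dummy context must already be current so that wglGetProcAddress can resolve the extension entry points:

```c
/* Sketch only: requires Windows plus NVIDIA Quadro drivers exposing
 * WGL_NV_gpu_affinity. A temporary context must be current before
 * wglGetProcAddress will return valid pointers. */
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>  /* HGPUNV and the NV affinity function typedefs */

HGLRC create_context_on_gpu(UINT gpu_index)
{
    PFNWGLENUMGPUSNVPROC wglEnumGpusNV =
        (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV =
        (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
    if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
        return NULL;              /* extension not available */

    HGPUNV gpus[2] = { 0 };       /* one GPU handle + NULL terminator */
    if (!wglEnumGpusNV(gpu_index, &gpus[0]))
        return NULL;              /* no GPU at that index */

    /* The affinity DC restricts rendering to the listed GPU(s). */
    HDC dc = wglCreateAffinityDCNV(gpus);
    return dc ? wglCreateContext(dc) : NULL;
}
```

Notably, this works without any monitor attached to the selected GPU, which is the off-screen case referred to above; on Linux the equivalent selection happens through the X display, and on OS X through renderer selection in the pixel format.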