Choosing between GPUs, even NVIDIA ones

Hi,

I discovered a way to trick the NVIDIA driver into using any NVIDIA GPU of your choice from those available on the machine.
It works, but unfortunately it has a brief side effect that is visible to the user.

Choosing between any other combination of GPUs can be done perfectly, without any drawbacks.
First, force opengl32 to load the appropriate ICD - this is very easy:
HDC dc = CreateDCA("\\\\.\\DISPLAY3", NULL, NULL, NULL); // substitute "\\\\.\\DISPLAY3" with the device name you need
SetPixelFormat(dc, 1, NULL); // any valid pixel format index will do - the point is just to make opengl32 load the ICD
DeleteDC(dc);
At this point the ICD is loaded.
If there is only one GPU controlled by this ICD (read: vendor), then you can just go ahead
with the normal OpenGL initialization - the GPU that will be used is the one corresponding to the display you specified (e.g. \\.\DISPLAY3)

However, if this ICD controls more than one GPU, you need to do some further work to choose between them.
On ATI you must create a window that is contained within one of the monitors attached to the GPU you are interested in,
and then set up the OpenGL context using it. This window can stay invisible, so there are no annoying side effects here.
The ATI driver deduces which GPU to use from the window's location.
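For illustration, here is a minimal sketch of that ATI/AMD case. It assumes mon_rect is the rect of a monitor attached to the target GPU (obtained beforehand, e.g. from GetMonitorInfo); the window class, title and size are arbitrary:

#include <windows.h>

// link with user32, gdi32 and opengl32
static HGLRC create_context_on_monitor(const RECT *mon_rect)
{
    PIXELFORMATDESCRIPTOR pfd;
    HWND wnd;
    HDC dc;

    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;

    // the window is never shown; only its position matters to the driver
    wnd = CreateWindowA("STATIC", "gl_init", WS_POPUP,
                        mon_rect->left, mon_rect->top, 64, 64,
                        NULL, NULL, GetModuleHandleA(NULL), NULL);
    dc = GetDC(wnd);
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);
    return wglCreateContext(dc); // created on the GPU that owns mon_rect
}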

Now the hardest part - if you have more than one NVIDIA GPU.
Their driver does not provide any "normal" means for the programmer to choose which GPU to use.
But it always seems to use Windows' primary display device.
So here is the trick - you can temporarily change the primary device with ChangeDisplaySettingsEx,
then create the context, and then change the primary device back to what it was. The GL context, once created, will stay on the same GPU.
Changing the primary display with ChangeDisplaySettingsEx is a bit awkward, but it works - you must rearrange all displays by setting
their positions in the virtual desktop with the flags CDS_UPDATEREGISTRY|CDS_NORESET; the new primary display must also have the flag CDS_SET_PRIMARY
AND must be at position (0,0); finally call ChangeDisplaySettingsEx(0,0,0,0,0) to apply everything at once. You must make sure no two displays overlap, otherwise it won't work.
Use EnumDisplayMonitors to find all display devices that are part of the virtual screen.
Before you do all this, you may want to create a topmost window that covers the entire virtual screen
in order to hide the temporary shuffling from the user and present him with e.g. black screens instead.
Also, there is apparently a Windows bug that causes any maximized windows to look strange after the mode change - they look
non-maximized but actually are maximized. I fix this by collecting all maximized windows in a list (EnumWindows) and, after the above operation is done,
calling ShowWindow(w, SW_RESTORE) AND ShowWindow(w, SW_MAXIMIZE) for each of them - this fixes them. Finally I destroy the window that covers the entire virtual screen.
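To make the rearranging step concrete, here is a rough sketch under some assumptions: the caller has already enumerated the adapters and computed non-overlapping positions that put the chosen device at (0,0); the names/pos arrays and their layout are hypothetical:

#include <windows.h>
#include <string.h>

// move every display to a precomputed position and mark the chosen device
// as primary; nothing takes effect until the final "apply" call
static void make_primary(const char *chosen, int ndisp,
                         char names[][32], const POINT *pos)
{
    DEVMODEA dm;
    int i;

    for (i = 0; i < ndisp; i++) {
        memset(&dm, 0, sizeof(dm));
        dm.dmSize = sizeof(dm);
        EnumDisplaySettingsA(names[i], ENUM_CURRENT_SETTINGS, &dm);
        dm.dmFields |= DM_POSITION;
        dm.dmPosition.x = pos[i].x; // the chosen device's position must be (0,0)
        dm.dmPosition.y = pos[i].y;
        ChangeDisplaySettingsExA(names[i], &dm, NULL,
            CDS_UPDATEREGISTRY | CDS_NORESET |
            (strcmp(names[i], chosen) ? 0 : CDS_SET_PRIMARY), NULL);
    }
    // apply all the queued changes in one go
    ChangeDisplaySettingsExA(NULL, NULL, NULL, 0, NULL);
}

Call it once to make the chosen device primary, create the GL context, then call it again with the original primary and positions to restore the layout.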

How can you tell which vendor a given device belongs to? From EnumDisplayDevices you can get the vendor ID of the device (NVIDIA = 0x10DE, ATI = 0x1002, Intel = 0x8086)
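A sketch of that lookup - it assumes the adapter's DeviceID has the usual "PCI\VEN_xxxx&DEV_..." form, which should hold for PCI display adapters:

#include <windows.h>
#include <stdio.h>
#include <string.h>

// return the PCI vendor id of the adapter with the given device name
// (e.g. "\\.\DISPLAY3"), or 0 if it cannot be determined
static unsigned get_display_vendor(const char *dev)
{
    DISPLAY_DEVICEA dd;
    const char *p;
    unsigned vendor = 0;
    DWORD i;

    for (i = 0; ; i++) {
        memset(&dd, 0, sizeof(dd));
        dd.cb = sizeof(dd);
        if (!EnumDisplayDevicesA(NULL, i, &dd, 0)) break;
        if (strcmp(dd.DeviceName, dev)) continue;
        p = strstr(dd.DeviceID, "VEN_"); // e.g. "PCI\VEN_10DE&DEV_..."
        if (p) sscanf(p + 4, "%x", &vendor);
        break;
    }
    return vendor; // 0x10DE = NVIDIA, 0x1002 = ATI, 0x8086 = Intel
}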

Edit:
It appears the "side effect" from ChangeDisplaySettingsEx is worse than I thought. After the display devices are rearranged, the various currently visible windows get moved around between the displays in a totally unpredictable way. On which display each particular window will end up is totally random, and a different random on every try. Microsoft seem to have made quite a mess of this. The same happens when you change the primary display from the Windows control panel dialog, so this is a more general misbehavior of the OS, not a problem with our concrete usage. The situation appears to be worse when composition is enabled.
Still, if you don’t mind this mess, the GPU choosing works fine.

Edit2:
I found a better way to trick the NVIDIA driver, without any side effects:
it detects which display is primary by checking for which monitor GetMonitorInfo gives position (0,0).
So all we need to do is hook GetMonitorInfo to return position (0,0) for our chosen device, and it works.
The detection is done only once, and after that you are stuck with the same device for the lifetime of the process.



#include <windows.h>
#include <string.h>

static char chosen_dev[32];
static HMONITOR chosen_mon;
static int chosen_x, chosen_y;

// replacement for GetMonitorInfoA: report the chosen monitor as primary and
// shift all coordinates so that it appears to be at position (0,0)
// (note: install_hook must not patch this module's own import table, or the
// direct GetMonitorInfoA call below would recurse into this function)
static BOOL WINAPI get_monitor_info_a_rep(HMONITOR mon, MONITORINFO *mi)
{
    if (!GetMonitorInfoA(mon, mi)) return FALSE;
    if (mon == chosen_mon) mi->dwFlags |= MONITORINFOF_PRIMARY;
    else mi->dwFlags &= ~MONITORINFOF_PRIMARY;
    mi->rcMonitor.left -= chosen_x;
    mi->rcMonitor.right -= chosen_x;
    mi->rcMonitor.top -= chosen_y;
    mi->rcMonitor.bottom -= chosen_y;
    mi->rcWork.left -= chosen_x;
    mi->rcWork.right -= chosen_x;
    mi->rcWork.top -= chosen_y;
    mi->rcWork.bottom -= chosen_y;
    return TRUE;
}

// find the monitor whose device name matches chosen_dev and remember its position
static BOOL CALLBACK enum_mon_proc(HMONITOR mon, HDC dc, RECT *rc, LPARAM data)
{
    MONITORINFOEXA mi;
    mi.cbSize = sizeof(mi);
    GetMonitorInfoA(mon, (MONITORINFO *)&mi);
    if (strcmp(mi.szDevice, chosen_dev)) return TRUE;
    chosen_mon = mon;
    chosen_x = mi.rcMonitor.left;
    chosen_y = mi.rcMonitor.top;
    return FALSE;
}

// to initialize the nvidia driver for using a specific device, call set_nvidia_gl_device_to_use;
// after that you can initialize your context as usual - create window, get dc, set pixel format, create context, make current -
// and it will use the selected device
// note that this is only for nvidia; for AMD the device choice is made differently - by creating a window that is located within the device's screen rect

// install_hook is not included because it is lengthy; you can implement it yourself with a certain amount of research and looking on the internet - it replaces
// the given dll export with your function in the dll tables of both the dll that exports it and all other dlls that use it
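// hypothetical prototypes for the helpers used below (implementations not included):
static void *get_module_base(const char *dll_name);                            // base address of a loaded dll
static void install_hook(void *dll_base, const char *name, void *replacement); // patch the export/import tables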
static void set_nvidia_gl_device_to_use(const char *dev)
{
    HDC dc;

    // find which is our monitor and get its position
    strcpy(chosen_dev, dev);
    EnumDisplayMonitors(NULL, NULL, enum_mon_proc, 0);

    // hook GetMonitorInfoA to return fake info to trick the nvidia driver
    install_hook(get_module_base("user32.dll"), "GetMonitorInfoA", get_monitor_info_a_rep);

    // force opengl32 to load the ICD for our device AND force the nvidia ICD to choose our device
    dc = CreateDCA(chosen_dev, NULL, NULL, NULL);
    SetPixelFormat(dc, 1, NULL);
    DeleteDC(dc);

    // revert the hook
    install_hook(get_module_base("user32.dll"), "GetMonitorInfoA", GetMonitorInfoA);
}
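Usage is then just one call before the normal context setup (the device name here is purely illustrative):

set_nvidia_gl_device_to_use("\\\\.\\DISPLAY2");
// ...then create your window, GetDC, SetPixelFormat, wglCreateContext, wglMakeCurrent as usual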



You should nag NVIDIA into exposing WGL_NV_gpu_affinity on consumer cards.

AMD exposes a similar WGL_AMD_gpu_association on consumer cards.

I guess it is a matter of policy for them and they won't do it. Are they really trying to promote their expensive Quadro line with this? That seems silly to me, but what do I know.

I find it silly to do that at the expense of OpenGL, given that D3D allows programmers to choose the device.
It's not like OpenGL is extremely popular on the PC anyway, that we should be cutting basic features from it to promote something else.

[QUOTE=Leith Bade;1244587]You should nag NVIDIA into exposing WGL_NV_gpu_affinity on consumer cards.

AMD exposes a similar WGL_AMD_gpu_association on consumer cards.[/QUOTE]

The GPU association is not needed for on-screen windows on AMD cards. The driver automatically selects the GPU which ‘has the most pixels’ at window creation time. It’s only useful for off-screen FBOs.

I wouldn’t hold my breath on getting GPU affinity on GeForce.

More details here: Equalizer: Documentation: Parallel OpenGL FAQ
