View Full Version : Select GPU



santyhamer
05-23-2010, 04:16 PM
Hi,

Imagine I have 3 cards without SLI. I want to render to a texture using card #2, which is not attached to a monitor. After the texture is rendered, I'm going to retrieve the data back to the CPU over PCI-Express.

Any idea how to do that, please?

Thanks.

mhagain
05-23-2010, 04:54 PM
I don't think OpenGL lets you select which adapter to use. Someone more knowledgeable would need to confirm this, but I strongly suspect that you would need to use D3D, which does.

santyhamer
05-23-2010, 10:41 PM
I strongly suspect that you would need to use D3D, which does.
Yep, I need something like ID3D10Device/IDXGIFactory/IDXGIAdapter/IDXGIOutput so I can select which GPU to use for rendering, even when no monitor is plugged into the card.

Btw, I've not mentioned it before... but this is for GPGPU purposes (I need to rasterise some primitives and then perform interop with OpenCL).

I currently cannot use DirectX because the application must be multi-platform (Linux, Mac OS X, etc.).

I've seen the WGL_NV_gpu_affinity extension
http://www.opengl.org/registry/specs/NV/gpu_affinity.txt

and, for ATI, the WGL_AMD_gpu_association
http://www.opengl.org/registry/specs/AMD/wgl_gpu_association.txt

But, ideally, I would like to use an ARB one...

Dark Photon
05-24-2010, 05:29 AM
I don't think OpenGL lets you select which adapter to use.
Sure does. On Linux, one way is to just set up a separate X screen per GPU. Then create a GL context per screen. Which GPU you render to is determined by which GL context you have active when you issue the GL commands. It's easy.

Can't speak for Windows, but I too have heard of the NV_gpu_affinity extension, which sounds like what you want, though it's not cross-vendor.
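A minimal sketch of the per-X-screen approach described above, assuming the X server has been configured with one screen per GPU (the screen number ":0.1" here is a hypothetical example):

```c
/* Sketch: open the second X screen and create a GL context on it.
   Assumes xorg.conf defines one screen per GPU. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    /* ":0.1" = display 0, screen 1 -- say, the second GPU. */
    Display *dpy = XOpenDisplay(":0.1");
    if (!dpy) {
        fprintf(stderr, "cannot open display :0.1\n");
        return 1;
    }

    int attribs[] = { GLX_RGBA, GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8,
                      GLX_BLUE_SIZE, 8, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) {
        fprintf(stderr, "no suitable visual\n");
        return 1;
    }

    /* GL commands issued while this context is current execute on
       the GPU driving screen 1. For pure offscreen work, bind it to
       a GLXPbuffer (via glXChooseFBConfig) instead of a window. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    if (!ctx) {
        fprintf(stderr, "context creation failed\n");
        return 1;
    }

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}
```

This needs a running X server with a second screen configured, so treat it as a template rather than something that runs as-is on any box.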

V-man
05-24-2010, 07:39 AM
On Windows, you create a window on the "other" monitor and then create the context. When you create the context, the GL driver knows which GPU to use. The GPUs should be from the same company, because there will be just one GL driver from that IHV.

skynet
05-24-2010, 09:33 AM
On Windows, you create a window on the "other" monitor and then create the context. When you create the context, the GL driver knows which GPU to use.

Actually, this is not a good way to do MGPU stuff on Windows. Just moving the window to a specific monitor is not enough to tell the driver "I only want to render on this GPU", because you might later move the window to any other monitor. This leads the driver down a path where either
a) you don't get anything rendered in that window, or
b) the driver actually duplicates _every_ OpenGL resource on _every_ GPU. This can lead to very slow performance and even incorrect rendering, especially when doing offscreen rendering stuff.

To do specific things on a specific GPU, NV_gpu_affinity and AMD_gpu_association are the only way to go on Windows.
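For the NVIDIA side, a hedged sketch of what using WGL_NV_gpu_affinity looks like (requires a driver that exposes the extension, e.g. Quadro-class; the `create_context_on_gpu` helper name is mine, and the PFN... function pointers must be fetched with wglGetProcAddress once a dummy context is current):

```c
/* Sketch: pin a GL context to a specific GPU via WGL_NV_gpu_affinity,
   whether or not a monitor is attached to it. */
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* HGPUNV and the PFNWGL... typedefs */

/* Assumed to be loaded elsewhere via wglGetProcAddress(). */
static PFNWGLENUMGPUSNVPROC         pwglEnumGpusNV;
static PFNWGLCREATEAFFINITYDCNVPROC pwglCreateAffinityDCNV;

HGLRC create_context_on_gpu(UINT gpuIndex)
{
    HGPUNV gpuList[2] = { 0 };          /* list must be NULL-terminated */
    if (!pwglEnumGpusNV(gpuIndex, &gpuList[0]))
        return NULL;                    /* no such GPU */

    /* An affinity DC restricts rendering to the listed GPUs only. */
    HDC dc = pwglCreateAffinityDCNV(gpuList);
    if (!dc)
        return NULL;

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1 };
    pfd.dwFlags    = PFD_SUPPORT_OPENGL;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    int fmt = ChoosePixelFormat(dc, &pfd);
    if (!fmt || !SetPixelFormat(dc, fmt, &pfd))
        return NULL;

    /* Everything rendered with this context stays on the chosen GPU;
       attach an FBO for the offscreen render-to-texture pass. */
    return wglCreateContext(dc);
}
```

On AMD, WGL_AMD_gpu_association plays the same role but with its own entry points (wglGetGPUIDsAMD, wglCreateAssociatedContextAMD, etc.), so you'd need a second code path per vendor.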

santyhamer
05-25-2010, 11:54 AM
On Windows, you create a window on the "other" monitor and then create the context. When you create the context, the GL driver knows which GPU to use.
There is a problem with that... I might not have a monitor attached to the GPU at all. I'm using the GPUs for GPGPU, not really for display.