Selecting specific GPU for OpenGL rendering when different GPUs exist

I develop an application that uses OpenGL for the display rendering but also contains a GPU-based CUDA renderer. My current system has both an NVidia Quadro card and an AMD FirePro card. I’d like to dedicate the Quadro card strictly to CUDA-based rendering and the AMD card strictly to OpenGL-based rendering.

I have looked extensively into the GPU-specific extensions for both NVidia and AMD, but it doesn’t look like there is a way to mix and match GPUs from different vendors in the same process. To get the appropriate WGL extensions, a context first needs to be created; however, once that is done, it seems your process is locked into the driver assigned to the “primary” display. For example: if my primary display is attached to the Quadro card, then I can only get handles to the NVidia extensions (e.g. WGL_NV_gpu_affinity), and all attempts to get AMD’s extensions fail (wglGetProcAddress returns NULL). If I then make the FirePro’s display the primary one, the reverse is true: I can get all the AMD extensions (e.g. WGL_AMD_gpu_association), but none of the NVidia extensions.

Obviously if I’m only using Quadro cards or only FirePros, then I can dedicate each GPU accordingly…but how can I do this when I have different GPUs from different vendors?

Thanks,
-Kry

[QUOTE=Kryczeck;1263960]…how can I do this when I have different GPUs from different vendors?[/QUOTE]

I guess I asked a stupid question. ???

It’s not a stupid question so much as one that nobody knows the answer to. Though I’m not really sure I understand the nature of the problem. You seem to have a way to make your system use a particular GPU under OpenGL. Namely: make it the primary display. Presumably, AMD cards won’t/can’t answer to CUDA initialization, so naturally your CUDA commands can’t cause the AMD card to do anything.

So what precisely is the problem? Does CUDA not respond when the AMD card is the primary display?

I think he wants to independently select which GPU handles the CUDA work and which GPU handles the display, with two NVIDIA cards
(or, with a mix of NVIDIA and AMD cards, select one GPU for OpenCL and the other GPU to display the result).

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.