View Full Version : Multiple GPU's seen as one device?



Rennie Johnson
08-29-2009, 02:33 PM
Oh great OpenGL gurus:

I have an OpenGL imaging app written on an 8800GTS 640. If I buy a GTX 295 and set the NVidia Control Panel to Multi GPU on a single monitor (still using XP), will my application see the two GPU's as a single GPU with twice as many cores? If not, is there a way to achieve this without invoking GPU Affinity and having multiple GPU device contexts?

Thanks,
Rennie

Dark Photon
08-30-2009, 05:59 PM
I have an OpenGL imaging app written on an 8800GTS 640. If I buy a GTX 295 and set the NVidia Control Panel to Multi GPU on a single monitor (still using XP), will my application see the two GPU's as a single GPU with twice as many cores? If not, is there a way to achieve this without invoking GPU Affinity and having multiple GPU device contexts?
Look for how to enable AA (aka SLI-AA) or SFR multi-GPU mode.

I don't know if the drivers let you, but that's probably what you want. NVidia's GTX 295 info page only promotes AFR, so they may not let you. However, the NVidia driver docs still mention all 3 so it may still be supported... Let us know!

SLI-AA is/was useful when you wanted the cards to split the load of rasterizing different samples within a pixel (for antialiasing modes). SFR is/was for letting the GPUs split the screen spatially, with each rendering its own part separately. In both cases, the results were merged at the end. AFR won't help you much unless you can tolerate the increased latency: instead of having one GPU generate every frame, you have each GPU generate every Nth frame.

From what I've read, GTX 295 is like what NVidia used to call a GX2 GPU. They've merged two GTX260 GPUs into one with a single front-end.

Rennie Johnson
08-31-2009, 06:32 PM
Dark Photon:

Thanks for the head start. I'll see what I can learn. My initial thought would be that splitting the frame into two parts to be rendered on two separate GPUs would present a problem for convolution filtering, unless each GPU gets access to the entire source texture images but only renders final pixels to its part of the frame. Rewriting the code for GPU Affinity and scheduling alternate frame rendering would be a significant rewrite, as I've got textures and shaders up the butt in this app.

Does OpenGL provide a function for checking to see when a drawing pass with a lot of shader activity is complete? If I'm drawing to two frames, I'll have to implement some threading and completeness checking before allowing future frames to be rendered.

Thanks,
Rennie

Dark Photon
09-01-2009, 08:20 AM
Does OpenGL provide a function for checking to see when a drawing pass with a lot of shader activity is complete? If I'm drawing to two frames, I'll have to implement some threading and completeness checking before allowing future frames to be rendered.
Check out ARB_sync (http://www.opengl.org/registry/specs/ARB/sync.txt), or NV_fence (http://www.opengl.org/registry/specs/NV/fence.txt) for an NVidia-only solution on older drivers. If you need cross-vendor support on older drivers, you might be able to get what you want from ARB_occlusion_query (http://www.opengl.org/registry/specs/ARB/occlusion_query.txt). And of course, for initial testing there's always the sledgehammer glFinish().
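To make the ARB_sync route concrete, here's a minimal sketch of fencing a heavy pass. It assumes a current GL 3.2+ context (where the sync API is core) or the ARB_sync extension; drawHeavyPass() is a placeholder for your own rendering code:

```c
/* Fence a shader-heavy pass with ARB_sync.
 * Assumes a current OpenGL 3.2+ context; drawHeavyPass() is a
 * placeholder for the app's own draw calls. */
GLsync fence;
GLenum status;

drawHeavyPass();                               /* issue the GPU work   */
fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

/* Later, before depending on the results: wait up to ~16 ms,
 * flushing the command stream so the fence is guaranteed to
 * eventually be signaled (timeout is in nanoseconds). */
status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                          16 * 1000 * 1000);
if (status == GL_TIMEOUT_EXPIRED) {
    /* GPU still busy -- go do other work, then wait again */
}
glDeleteSync(fence);
```

If you only want to poll without blocking, pass a timeout of 0 to glClientWaitSync and check for GL_ALREADY_SIGNALED / GL_CONDITION_SATISFIED.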

Rennie Johnson
09-03-2009, 10:54 PM
Dark Photon:

I'm fine with NVidia only. I'm writing for the Quadro cards with SDI output, although I'm supporting Geforce cards with SDI output on AJA Kona cards for the financially challenged users.

Is there any GPU affinity sample code that demonstrates even/odd simultaneous frame rendering? Without disclosing that I'm totally in over my head, a few questions:

1) Do I need separate host threads to render on each GPU simultaneously?
2) Do I need separate OpenGL rendering contexts for each GPU, or is that handled by switching between affinity contexts with wglMakeCurrent()?
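For reference, the basic WGL_NV_gpu_affinity flow from the extension spec looks roughly like the sketch below. The wgl*NV entry points are assumed to have been loaded via wglGetProcAddress, and error/pixel-format handling is omitted:

```c
/* Sketch: lock a GL context to one GPU with WGL_NV_gpu_affinity.
 * Assumes the wgl*NV function pointers were already fetched via
 * wglGetProcAddress; no error checking shown. */
HGPUNV gpu;
HGPUNV gpuList[2] = { 0 };           /* NULL-terminated GPU list     */
UINT   gpuIndex   = 0;               /* 0 = first GPU, 1 = second... */

if (wglEnumGpusNV(gpuIndex, &gpu)) {
    gpuList[0] = gpu;

    HDC   affinityDC = wglCreateAffinityDCNV(gpuList);
    /* (Set a pixel format on affinityDC here before creating
     *  the context, as with a normal window DC.) */
    HGLRC ctx        = wglCreateContext(affinityDC);

    /* All rendering in this context now runs on the chosen GPU. */
    wglMakeCurrent(affinityDC, ctx);
    /* ... render ... */

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(ctx);
    wglDeleteDCNV(affinityDC);
}
```

In the usual pattern each GPU gets its own affinity DC and context, driven from its own host thread, since a context can only be current in one thread at a time.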

Thanks,
Rennie