Part of the Khronos Group
OpenGL.org


Thread: Splitting fragment shader work across multiple nvidia gpus / SLI question

  1. #1
    Junior Member Newbie
    Join Date
    Mar 2008
    Posts
    3

    Splitting fragment shader work across multiple nvidia gpus / SLI question

    I realize this is an Nvidia specific problem, but the developer forums are down at the moment so I thought I'd ask here, just in case anyone has had the same issue.

    I'm trying to modify a raytracer to take advantage of a multiple GPU setup (two Nvidia GTX 570s) in Windows. So far the problem has been complicated by 1) the NV_gpu_affinity extension being unavailable on consumer cards, and 2) the common Alternate Frame Rendering (AFR) SLI mode being incompatible with the rendering technique used (each frame builds on the result of the previous frame, so rendering two frames simultaneously is not an option). The obvious approach given the constraints would be to split the work so that one GPU works on a percentage of the image while the other tackles the remaining percentage.

    As far as I'm aware, this means my only option is Split Frame Rendering (SFR) mode with SLI. This should solve the problem, but it doesn't appear to be working correctly, or at least not as I imagined. I can force Split Frame Rendering through the Nvidia control panel (actually through Nvidia Inspector, as the SFR option is no longer available in Nvidia's own control panel). This results in both cards being fully utilized during rendering, but there is no increase in frame rate, either windowed or fullscreen. Also, the on-screen SLI indicator only displays when double buffering is enabled, which is unfortunate as I need access to the backbuffer, so I have to create an FBO that acts as one and disable double buffering. With double buffering enabled, the SLI indicator does display and shows a horizontal line marking the split in the workload, though it traverses to the bottom of the image fairly quickly, so it's probably not working as intended.

    I can't find much documentation on how to optimize for SFR, and what I can find isn't really applicable to my problem, so I can't be sure whether I'm missing something. Does SFR even take fragment shaders into account? (I'm rendering onto a fullscreen quad, so there's very little direct geometry work going on.) Seeing as the SLI indicator only shows when double buffering is enabled, does that mean the SFR process is somehow tied to the backbuffer? Can anyone shed any light on this?
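    The manual alternative to driver-side SFR would be to do the split yourself: each GPU shades its own horizontal band of the fullscreen quad. A minimal sketch of the band-splitting arithmetic, assuming a manual split into per-GPU scanline ranges (`gpu_row_range` is a hypothetical helper, not part of any API):

    ```c
    #include <stdio.h>

    /* Hypothetical helper: given the image height and a GPU index,
     * compute the half-open scanline range [start, end) that GPU
     * should shade. This is the manual equivalent of driver-side SFR:
     * each GPU renders its own horizontal band of the fullscreen quad.
     */
    static void gpu_row_range(int height, int num_gpus, int gpu,
                              int *start, int *end)
    {
        int base = height / num_gpus;
        int rem  = height % num_gpus;
        /* Spread the remainder over the first `rem` GPUs so bands
         * differ in height by at most one row. */
        *start = gpu * base + (gpu < rem ? gpu : rem);
        *end   = *start + base + (gpu < rem ? 1 : 0);
    }

    int main(void)
    {
        int s, e;
        /* Two GTX 570s, a 1080-row image: each card gets 540 rows. */
        gpu_row_range(1080, 2, 0, &s, &e);
        printf("GPU 0: rows %d..%d\n", s, e);   /* 0..540 */
        gpu_row_range(1080, 2, 1, &s, &e);
        printf("GPU 1: rows %d..%d\n", s, e);   /* 540..1080 */
        return 0;
    }
    ```

    Each GPU would then render only its band (e.g. via `glViewport`/`glScissor` or a shader uniform) and the bands would be composited afterwards.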
    Last edited by chameleon789; 10-07-2012 at 02:09 PM.

  2. #2
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    3,188
    Quote Originally Posted by chameleon789 View Post
    I'm trying to modify a raytracer to take advantage of a multiple gpu setup (two Nvidia GTX 570s) in Windows. ... The obvious approach given the constraints would be to split the work so that one GPU works on a percentage of the image while the other tackles the remaining percentage.

    As far as I'm aware, this means that my only option is Split Frame Rendering mode with SLI.
    Well, I don't know whether there's an analog in Windows, but I can tell you that you have much better options than this in Linux with the NVidia drivers. For example, just set up an X window system config that puts each GPU on its own X screen (e.g. :0.0 and :0.1), create separate processes or threads each with their own GL context pointing to their own X screen (i.e. own GPU), and just rip pixels like mad. Per NVidia and experience, there is almost no overhead in doing this. No SLI required. Heck, IIRC nvidia-xconfig will even set you up with this without any config file hacking (-a option).
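    A rough sketch of the setup described above, assuming one X screen per GPU and one renderer process per screen (`./raytracer` and its `--rows` flag are placeholders for whatever your renderer actually accepts):

    ```shell
    # Let nvidia-xconfig write an xorg.conf with one X screen per GPU
    # (the -a option mentioned above):
    sudo nvidia-xconfig -a

    # After restarting X, run one instance per X screen, i.e. per GPU.
    # DISPLAY selects which screen (and thus which GPU) each process
    # opens its GL context on:
    DISPLAY=:0.0 ./raytracer --rows 0-539    &
    DISPLAY=:0.1 ./raytracer --rows 540-1079 &
    wait
    ```

    The same effect can be had inside a single program by opening `:0.0` and `:0.1` via `XOpenDisplay` and creating one GL context per display connection.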

  3. #3
    Junior Member Newbie
    Join Date
    Mar 2008
    Posts
    3
    Quote Originally Posted by Dark Photon View Post
    Well, I don't know whether there's an analog in Windows, but I can tell you that you have much better options than this in Linux with the NVidia drivers. For example, just set up an X window system config that puts each GPU on its own X screen (e.g. :0.0 and :0.1), create separate processes or threads each with their own GL context pointing to their own X screen (i.e. own GPU), and just rip pixels like mad. Per NVidia and experience, there is almost no overhead in doing this. No SLI required. Heck, IIRC nvidia-xconfig will even set you up with this without any config file hacking (-a option).
    Thanks. Luckily the software I'm using is open source and cross-platform, so I took your advice, installed a Linux distro, and got this working. It's still not perfect (creating and accessing device-specific GL contexts from the same process is more than a little convoluted, especially when using a cross-platform GL framework), but I've hacked together something that makes use of both GPUs and have already gotten some decent results. I'm running into other problems now: it seems I'm CPU-bound, as each instance of the program fully utilizes a single core, but at least I have some leads on how to fix that one. Thanks again.

  4. #4
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    3,188
    Sure thing. Let me know if you need any help.
