
Thread: Nvidia's GPU affinity extension is a complete failure

  1. #1
    Junior Member / Regular Contributor
    Join Date: Mar 2009
    Location: California
    Posts: 188

    Nvidia's GPU affinity extension is a complete failure

    I can't believe how murky and complicated this spec is:
    http://developer.download.nvidia.com...u_affinity.txt

    All I need is a setting to indicate "don't use the damn Intel integrated chip".

  2. #2
    Advanced Member / Frequent Contributor
    Join Date: Dec 2007
    Location: Hungary
    Posts: 985
    AFAIK, NV_gpu_affinity only applies when you have multiple NVIDIA GPUs; it has nothing to do with mixed-vendor setups like an integrated Intel GPU paired with a discrete NVIDIA one.
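    For what it's worth, here's a minimal sketch of the use case the extension does target, assuming a driver that actually exposes WGL_NV_gpu_affinity (Quadro-class hardware, as far as I know). The handle and function pointer types are the ones declared in the spec; the helper function name and the rest are just illustrative:
    Code :
    #include <windows.h>

    // Types from the WGL_NV_gpu_affinity spec; the functions have to be
    // loaded through wglGetProcAddress, which needs a current GL context.
    DECLARE_HANDLE(HGPUNV);
    typedef BOOL (WINAPI *PFNWGLENUMGPUSNVPROC)(UINT iGpuIndex, HGPUNV *phGpu);
    typedef HDC  (WINAPI *PFNWGLCREATEAFFINITYDCNVPROC)(const HGPUNV *phGpuList);

    // Returns a device context restricted to the first NVIDIA GPU,
    // or NULL if the extension (or a GPU) isn't available.
    HDC CreateAffinityDCForFirstGpu()
    {
        PFNWGLENUMGPUSNVPROC pwglEnumGpusNV =
            (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
        PFNWGLCREATEAFFINITYDCNVPROC pwglCreateAffinityDCNV =
            (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
        if (!pwglEnumGpusNV || !pwglCreateAffinityDCNV)
            return NULL; // extension not exposed by this driver

        HGPUNV gpus[2] = { 0 };           // list must be NULL-terminated
        if (!pwglEnumGpusNV(0, &gpus[0])) // GPU index 0
            return NULL;
        return pwglCreateAffinityDCNV(gpus);
    }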
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion; none of my statements or speculations are in any way related to my employer, should not be treated as accurate or valid, and in no case represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  3. #3
    Member / Regular Contributor
    Join Date: Mar 2001
    Posts: 468
    Check page 3 of the Optimus Rendering Policies guide at https://developer.nvidia.com/optimus to see how to enable the discrete GPU for your app.

  4. #4
    Junior Member / Regular Contributor
    Join Date: Mar 2009
    Location: California
    Posts: 188
    According to that document you can do this:

    Starting with the Release 302 drivers, application developers can direct the Optimus driver at runtime to use the High Performance Graphics to render any application, even those applications for which there is no existing application profile. They can do this by exporting a global variable named NvOptimusEnablement. The Optimus driver looks for the existence and value of the export. Only the LSB of the DWORD matters at this time. A value of 0x00000001 indicates that rendering should be performed using High Performance Graphics. A value of 0x00000000 indicates that this method should be ignored.
    Code :
    #include <windows.h> // for DWORD

    extern "C" {
        // Export checked by the Optimus driver; LSB = 1 requests the discrete GPU.
        __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
    }
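    Worth noting: the extern "C" is what keeps the C++ compiler from mangling the name, so the driver can find NvOptimusEnablement in the export table; and as far as I can tell from the document, the export has to come from the .exe itself, not from a DLL the application loads.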

    Is there something equivalent for AMD cards?
