Part of the Khronos Group
OpenGL.org


Thread: GPU memory

  1. #1
    Junior Member Newbie
    Join Date
    Sep 2013
    Posts
    16

    GPU memory

    Hello
    If I try to get GPU memory for a VBO, do I need to handle the case where the GPU is out of memory, or will glGenBuffers/glBindBuffer/glBufferData fall back to main memory when the GPU is OOM, so that this is transparent to the user?
    Thanks
    Cathy L.

  2. #2
    Senior Member OpenGL Pro Aleksandar's Avatar
    Join Date
    Jul 2009
    Posts
    1,162
    If your VBO can fit into GPU memory (i.e. dedicated graphics memory), the GL driver will (transparently) evict other memory blocks to make space for it.
    But if it cannot fit at all, NV drivers will not fall back, while AMD drivers will try to render it directly from main memory (i.e. shared system memory).

  3. #3
    Junior Member Newbie
    Join Date
    Sep 2013
    Posts
    16
    Quote Originally Posted by Aleksandar View Post
    But if it cannot fit at all, NV drivers will not fall back, while AMD drivers will try to render it directly from main memory (i.e. shared system memory).
    OK, that is what I wanted to know: so for the moment, not all drivers will switch to main memory on gl*Buffer calls when not enough dedicated graphics memory is available. So every program needs to check glGetError after glBufferData, and if the result is GL_OUT_OF_MEMORY, malloc some space for a client-side vertex array instead. Is that the right method?

    Thanks a lot for your help

    Cathy L.
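
    For what it's worth, the check described in this post can be sketched as below. The actual GL calls are shown only as comments since they need a live context; the enum values are the real ones from the GL headers, and the fallback rule is the one the post describes. (As the next reply argues, this is a pattern to know about, not necessarily a recommendation.)

    ```c
    /* Real OpenGL enum values, defined here so this sketch compiles
       on its own, without GL headers. */
    #define GL_NO_ERROR      0x0000
    #define GL_OUT_OF_MEMORY 0x0505

    /* The pattern described above, with the GL calls as comments
       (they require a live GL context):
     *
     *   glGenBuffers(1, &vbo);
     *   glBindBuffer(GL_ARRAY_BUFFER, vbo);
     *   glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
     *   if (needs_client_fallback(glGetError()))
     *       vertices = malloc(size);   // draw from a client-side array
     *
     * Only GL_OUT_OF_MEMORY should trigger the fallback; any other
     * error indicates a usage bug, not exhausted video memory. */
    int needs_client_fallback(unsigned int gl_error)
    {
        return gl_error == GL_OUT_OF_MEMORY;
    }
    ```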

  4. #4
    Senior Member OpenGL Pro Aleksandar's Avatar
    Join Date
    Jul 2009
    Posts
    1,162
    Quote Originally Posted by zedrummer View Post
    So every program needs to check glGetError after glBufferData and if the result is GL_OUT_OF_MEMORY, it needs to malloc some space for a vertex array, is that the right method?
    No, it is not the right method. You should check the amount of dedicated graphics memory before trying to allocate any. In any case, trying to allocate one gigantic VBO is the wrong strategy; I have tried to point this out in some other posts. There are other objects in graphics memory besides that VBO, and if there is not enough space for all of them, an eviction occurs. Swapping objects is an "expensive sport" which should be avoided at all costs.
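
    The check-before-allocating idea can be sketched like this. On NVIDIA the free amount can be queried through the NVX_gpu_memory_info extension (GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, reported in kilobytes), on AMD through ATI_meminfo; the 25% headroom below is an invented illustrative margin, not a driver rule.

    ```c
    /* Decide up front whether a VBO of vbo_bytes should go into
       dedicated video memory, given the free VRAM the driver reports,
       e.g. on NVIDIA (needs a live GL context):
     *
     *   GLint free_kb;
     *   glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX,
     *                 &free_kb);
     *
     * The 25% headroom is an arbitrary safety margin, left so that
     * textures, framebuffers and driver overhead are not evicted. */
    int vbo_fits_in_vram(long long vbo_bytes, long long free_vram_kb)
    {
        long long free_bytes = free_vram_kb * 1024;
        return vbo_bytes <= (free_bytes / 4) * 3;
    }
    ```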

  5. #5
    Advanced Member Frequent Contributor arekkusu's Avatar
    Join Date
    Nov 2003
    Posts
    783
    It's not really any different from any memory management on the CPU.

    If you have a huge amount of stuff you want to allocate, it will fail or be slow on some users' machines with "not enough" RAM.
    If you are writing an app that has to run on a range of machines, you had better figure out a strategy to query the machine's environment, and then tailor your resource usage to that environment.

    When the range of machine configurations is huge, the strategy usually turns into the "quality" sliders you see in most PC games, adjusting texture resolution, model detail, draw distance, etc.

  6. #6
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    3,224
    Quote Originally Posted by arekkusu View Post
    It's not really any different from any memory management on the CPU.
    Yes, largely. The exception is that in some cases the GPU may be able to directly access CPU memory (e.g. with pinned, aka page-locked, CPU memory). For instance, search for "pinned" in this chapter.

  7. #7
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,217
    Querying dedicated GPU memory ... I'm more inclined to say "don't do that". GPUs with shared memory won't give you a good result there (assuming they even expose an extension for querying), and Intel are fast becoming a strong player in the graphics arena; the days when you used to be able to say "forget about Intel" are in danger of being in the past.

    I believe that it's perfectly OK to set a minimum specification and say "this is what I require for my program", then design your content around that. Be realistic and accept that "running on all hardware" is not an achievable goal - as soon as you write your first fragment shader you've left that objective behind (yes, there are still some GeForce 4 MXs around, and you're going to encounter machines with anything from 64MB to gigabytes of dedicated video RAM, even today; the full gamut is just too broad). Set a reasonable baseline, design your content to fit in the video RAM for that baseline, and if you really want to you can add an optional high quality mode for those with more video RAM or better capabilities.
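
    The baseline-plus-optional-high-quality idea from this post can be sketched as a simple tier pick; the cut-off values here are invented purely for illustration, not taken from any real title.

    ```c
    typedef enum { QUALITY_BASELINE = 0, QUALITY_HIGH = 1 } quality_t;

    /* Map (approximate) available video memory to a content tier.
       256 MB is the hypothetical minimum spec the content is designed
       around; 1 GB or more unlocks the optional high-quality mode
       (larger textures, more model detail, longer draw distance). */
    quality_t pick_quality(int vram_mb)
    {
        return (vram_mb >= 1024) ? QUALITY_HIGH : QUALITY_BASELINE;
    }
    ```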

  8. #8
    Advanced Member Frequent Contributor arekkusu's Avatar
    Join Date
    Nov 2003
    Posts
    783
    Yes, a minimum spec + quality slider is a common strategy.

    But to figure out a minimum spec, you still need to "query the environment"... that is, survey your customer install base and see what kinds of machines they are running. Else you risk excluding a percentage of potential customers. The query just moves from a run-time to a design-time decision.

  9. #9
    Junior Member Newbie
    Join Date
    Sep 2013
    Posts
    16
    OK, thank you all, that is clear. I'm gonna google about adapting content to computer specs.
    Cathy L.
