GPU memory

Hello
If I try to get GPU memory for a VBO, do I need to handle the case where the GPU is out of memory, or will glGenBuffers/glBindBuffer/glBufferData fall back to main memory when the GPU is OOM, so that this is transparent to the user?
Thanks
Cathy L.

If your VBO can fit into GPU memory (i.e. dedicated graphics memory), the GL driver will (transparently) evict other memory blocks to make space for it.
If it cannot fit, NV drivers will not fall back, but AMD drivers will try to render it directly from main memory (i.e. shared system memory).

[QUOTE=Aleksandar;1255501]
If it cannot fit, NV drivers will not fall back, but AMD drivers will try to render it directly from main memory (i.e. shared system memory).[/QUOTE]

Ok, that is what I wanted to know: for the moment, not all drivers will switch to main memory in the gl(xxxxx)Buffer calls if not enough dedicated graphics memory is available. So every program needs to check glGetError after glBufferData, and if the result is GL_OUT_OF_MEMORY, malloc some space for a client-side vertex array instead. Is that the right method?
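
I mean something roughly like this (just a sketch of the pattern, assuming a plain array of per-vertex positions and the legacy glVertexPointer path):

[code]
/* needs <GL/gl.h>, <stdlib.h>, <string.h>; "vertices" and "vertexCount" assumed to exist */
GLuint   vbo        = 0;
GLfloat *clientCopy = NULL;                       /* fallback storage in main memory */
size_t   size       = vertexCount * 3 * sizeof(GLfloat);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glGetError();                                     /* clear any stale error first */
glBufferData(GL_ARRAY_BUFFER, size, vertices, GL_STATIC_DRAW);

if (glGetError() == GL_OUT_OF_MEMORY)
{
    /* VBO allocation failed: keep the data in a malloc'd client-side array */
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDeleteBuffers(1, &vbo);
    vbo = 0;

    clientCopy = (GLfloat *)malloc(size);
    memcpy(clientCopy, vertices, size);
    glVertexPointer(3, GL_FLOAT, 0, clientCopy);  /* draw from system memory */
}
else
{
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);   /* draw from the VBO */
}
[/code]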

Thanks a lot for your help

Cathy L.

No, that is not the right method. You should check the amount of dedicated graphics memory before trying to allocate any. In any case, trying to allocate one gigantic VBO is the wrong strategy to begin with; I have tried to point that out in some other posts. There are other objects in graphics memory besides that VBO, and if there is no space for all of them, eviction occurs. Swapping objects in and out is an “expensive sport” that should be avoided at all costs.
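
The query itself is only available through vendor-specific extensions. Roughly like this (just a sketch; extensionSupported() stands in for however you already test the extension string, and the reported values are in KB):

[code]
/* enum values from GL_NVX_gpu_memory_info and GL_ATI_meminfo,
   in case your headers are too old to define them */
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#define GL_VBO_FREE_MEMORY_ATI                          0x87FB

GLint availableKB = 0;

if (extensionSupported("GL_NVX_gpu_memory_info"))          /* NVIDIA */
{
    glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &availableKB);
}
else if (extensionSupported("GL_ATI_meminfo"))             /* AMD */
{
    GLint info[4];   /* total free, largest block, total aux free, largest aux block */
    glGetIntegerv(GL_VBO_FREE_MEMORY_ATI, info);
    availableKB = info[0];
}
/* else: no way to query -- fall back to your minimum-spec assumptions */
[/code]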

It’s not really any different from any memory management on the CPU.

If you have a huge amount of stuff you want to allocate, it will fail or be slow on some user’s machine with “not enough” RAM.
If you are writing an app that has to run on a range of machines, you better figure out a strategy to query the machine’s environment, and then tailor your resource usage to that environment.

When the range of machine configurations is huge, the strategy usually turns into the “quality” sliders you see in most PC games, adjusting texture resolution, model detail, draw distance, etc.
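
The portable end of that query is simple enough; a sketch of the kind of thing you can base a quality tier on (nothing vendor-specific here):

[code]
/* needs <stdio.h> and your GL headers */
const GLubyte *renderer = glGetString(GL_RENDERER);   /* e.g. "GeForce ...", "Intel ..." */
const GLubyte *version  = glGetString(GL_VERSION);

GLint maxTexSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);

printf("Renderer: %s\nGL version: %s\nMax texture size: %d\n",
       renderer, version, maxTexSize);
/* pick texture resolution, model LOD, draw distance, etc. from these plus your own tables */
[/code]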

Yes, largely. The exception being that in some cases the GPU might be able to directly access CPU memory (e.g. with pinned CPU memory aka page-locked memory). For instance, search for “pinned” in this chapter.
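
On the GL side, the closest public example I know of is the GL_AMD_pinned_memory extension; a rough sketch (only where the extension is exposed, and “vertices”/“size” are assumed to exist):

[code]
#define GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD 0x9160

void  *pinned = NULL;
GLuint buf    = 0;

/* page-aligned system memory; it must stay alive as long as the buffer uses it */
posix_memalign(&pinned, 4096, size);
memcpy(pinned, vertices, size);

glGenBuffers(1, &buf);
glBindBuffer(GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD, buf);
glBufferData(GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD, size, pinned, GL_STREAM_DRAW);
glBindBuffer(GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD, 0);

/* from here on, bind "buf" as a normal GL_ARRAY_BUFFER; the GPU reads the
   vertex data directly from the pinned pages instead of from a VRAM copy */
glBindBuffer(GL_ARRAY_BUFFER, buf);
[/code]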

Querying dedicated GPU memory … I’m more inclined to say “don’t do that”. GPUs with shared memory won’t give you a meaningful result (assuming they even expose an extension for querying it), and Intel is fast becoming a strong player in the graphics arena; the days when you could just say “forget about Intel” are quickly receding into the past.

I believe that it’s perfectly OK to set a minimum specification and say “this is what I require for my program”, then design your content around that. Be realistic and accept that “running on all hardware” is not an achievable goal - as soon as you write your first fragment shader you’ve left that objective behind (yes, there are still some GeForce 4 MXs around, and you’re going to encounter machines with anything from 64MB to gigabytes of dedicated video RAM, even today; the full gamut is just too broad). Set a reasonable baseline, design your content to fit in the video RAM for that baseline, and if you really want to you can add an optional high quality mode for those with more video RAM or better capabilities.

Yes, a minimum spec + quality slider is a common strategy.

But to figure out a minimum spec, you still need to “query the environment”… that is, survey your customer install base and see what kinds of machines they are running. Else you risk excluding a percentage of potential customers. The query just moves from a run-time to a design-time decision.

Ok, thank you all, that is clear. I’m going to google how to adapt content to the machine’s specs.
Cathy L.