How to query the video card memory?

Hi.
I want to set default graphics settings in my application according to the video card memory on the target system.
I have already googled this and found some functions which work on either NV or ATI graphics cards via vendor extensions. However, these do not always seem to work correctly, and I was thinking they might be deprecated.
So I would like to ask here:

  1. Is there a reliable way of checking video card memory on any card, regardless of manufacturer?
  2. Is there a way to check how much video card memory is in use (or free) at the moment?

Thank you in advance.

1) is not defined by OpenGL, so the answer is OS or vendor-specific. For example on Mac OS X you can do it like this on every vendor:
CGLDescribeRenderer(RendererInfo, RendererIndex, kCGLRPTextureMemoryMegabytes, &VRAM);

…but what does this really mean? It tells you how much dedicated video memory is available to your application, but because modern OSes virtualize video memory, this number is only a loose guideline. You can allocate and use as much memory as you feel comfortable paging to disk (i.e. terabytes), but stay within the guideline for “better performance”.
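To make that concrete, here is a minimal, self-contained sketch of that query. It assumes OS X 10.7 or later, where the kCGLRPTextureMemoryMegabytes property exists (older systems expose kCGLRPTextureMemory instead); the display mask and error handling are just illustrative defaults.

[CODE]
// Minimal sketch: enumerate all renderers and report their dedicated
// video memory. Assumes OS X 10.7+, where kCGLRPTextureMemoryMegabytes
// exists. Build with: cc vram.c -framework OpenGL
#include <OpenGL/OpenGL.h>
#include <stdio.h>

int main(void)
{
    CGLRendererInfoObj info;
    GLint rendererCount = 0;

    // 0xFFFFFFFF is a display mask matching renderers on all displays.
    if (CGLQueryRendererInfo(0xFFFFFFFF, &info, &rendererCount) != kCGLNoError)
        return 1;

    for (GLint i = 0; i < rendererCount; i++) {
        GLint vram = 0;
        if (CGLDescribeRenderer(info, i, kCGLRPTextureMemoryMegabytes,
                                &vram) == kCGLNoError)
            printf("Renderer %d: %d MB dedicated video memory\n", i, vram);
    }

    CGLDestroyRendererInfo(info);
    return 0;
}
[/CODE]

Note that this enumerates every renderer in the system (including any software renderer), not just the one your context is currently using.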

2) is really a meaningless question unless you’re developing for a game console. In modern OSes, virtualized memory is shared by every running application and by the window server. So even if you could query this (and you can, at least on OS X, if you talk to IOKit), the number could change in the next nanosecond because another app decided to draw, paging more resources around.

[QUOTE=arekkusu;1255531]1) is not defined by OpenGL, so the answer is OS or vendor-specific. For example on Mac OS X you can do it like this on every vendor:

CGLDescribeRenderer(RendererInfo, RendererIndex, kCGLRPTextureMemoryMegabytes, &VRAM);

…but what does this really mean? It tells you how much dedicated video memory is available to your application, but because modern OSes virtualize video memory, this number is only a loose guideline. You can allocate and use as much memory as you feel comfortable paging to disk (i.e. terabytes), but stay within the guideline for “better performance”.[/QUOTE]
I see. So I would have to implement a method for each operating system I want my application to run on?

I would like to do this in order to have an easy way to find out how much memory my application needs. I would only run it on my own personal computer with not much else running in the background except for operating system processes.
It isn’t incredibly important to me, but it would have been nice to have.

Thank you though for your answer.

[QUOTE=Cornix;1255532]I see. So I would have to implement a method for each operating system I want my application to run on?[/QUOTE]
I would suggest using the GL extensions for NV and AMD, since the vendors guarantee they work on all platforms (AMD’s extension does not work “out of the process”, but that is probably not important in your case). For Intel I’m not sure what to advise. A rough sketch of the vendor queries is below.

[QUOTE=Cornix;1255532]It isn’t incredibly important to me, but it would have been nice to have.[/QUOTE]
Although memory management is a very dynamic process, and the values you read are probably already out of date by the time you process them, it is quite useful to have insight into what is going on. In particular, it is important to notice when the eviction count increases, since that has an impact on overall performance.
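For reference, a minimal sketch of those queries, assuming a current compatibility-profile context (so the classic glGetString(GL_EXTENSIONS) string is available; on a core profile you would enumerate extensions with glGetStringi instead). The tokens come from the NVX_gpu_memory_info and ATI_meminfo specs and are defined by hand in case your headers predate the extensions; both extensions report sizes in KB.

[CODE]
// Rough sketch: query video memory via the vendor extensions.
#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

#ifndef GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX         0x9047
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#define GL_GPU_MEMORY_INFO_EVICTION_COUNT_NVX           0x904A
#endif
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_TEXTURE_FREE_MEMORY_ATI                      0x87FC
#endif

// Call with a current GL context.
void PrintVideoMemoryInfo(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (ext && strstr(ext, "GL_NVX_gpu_memory_info")) {
        GLint dedicated = 0, available = 0, evictions = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, &dedicated);
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &available);
        glGetIntegerv(GL_GPU_MEMORY_INFO_EVICTION_COUNT_NVX, &evictions);
        printf("NV: %d KB dedicated, %d KB currently free, %d evictions\n",
               dedicated, available, evictions);
    } else if (ext && strstr(ext, "GL_ATI_meminfo")) {
        // ATI_meminfo returns four values: total free, largest free
        // block, total auxiliary free, largest auxiliary free block.
        GLint tex[4] = { 0, 0, 0, 0 };
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, tex);
        printf("AMD: %d KB free texture memory\n", tex[0]);
    } else {
        printf("No vendor memory-info extension found.\n");
    }
}
[/CODE]

The eviction count is the value worth watching over time: if it keeps climbing while your application runs, you are over-committing video memory and paging is costing you performance.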

Absolutely. But some of those insights are best gained with debugging tools, not with programmatic run-time queries.

For example (again, on Mac OS X) you can use OpenGL Driver Monitor to watch Current Free Video Memory, page-offs (evictions) in thrashing situations caused by resource over-commitment, and many other stats in the system-wide (not app-specific) view of the driver.

Yes, you are probably right.

Thank you both.

I’ve already given my own opinions on this approach here, so I’ll refer you to that. Also note [b]arekkusu[/b]'s follow-up.