Alright. After days of trying, I’d like to ask for practical methods of querying the amount of dedicated VRAM programmatically (C++ preferred).
The only approach that has worked for us so far (most of the time) is IDxDiagContainer’s “szDisplayMemoryEnglish” property, but it is unreliable. For some cards (I can’t remember the exact model) it reports total VRAM (dedicated + shared) as dedicated. And just today it returned “N/A” for an Intel 82915G chip that should have 128 MB of VRAM.
Suggestions/ideas of any kind will be appreciated :)
For such an Intel chip, I think it actually has no dedicated memory; it is all taken dynamically from main RAM.
Ask yourself why you need to know the amount of VRAM: to size the textures so that performance is acceptable?
In that case I believe the best approach is to benchmark each texture size and, by dichotomy, select the highest texture level that still gives acceptable performance (and no GL error).
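That dichotomy is just a binary search over the power-of-two texture sizes. A minimal sketch, where `acceptable` is a hypothetical stand-in for the real benchmark (render a frame with a texture of that size, then check the frame time and glGetError()):

```cpp
#include <functional>
#include <vector>

// Returns the largest power-of-two size in [min_size, max_size] for which
// `acceptable` still returns true, or 0 if none does. Assumes monotonicity:
// once a size fails, all larger sizes fail too.
int largest_acceptable_texture(int min_size, int max_size,
                               const std::function<bool(int)>& acceptable) {
    std::vector<int> sizes;                      // candidate power-of-two sizes
    for (int s = min_size; s <= max_size; s *= 2)
        sizes.push_back(s);

    int best = 0;
    int lo = 0, hi = static_cast<int>(sizes.size()) - 1;
    while (lo <= hi) {                           // classic binary search
        int mid = (lo + hi) / 2;
        if (acceptable(sizes[mid])) {            // this size works: try bigger
            best = sizes[mid];
            lo = mid + 1;
        } else {                                 // too big: try smaller
            hi = mid - 1;
        }
    }
    return best;
}
```

With 64..8192 as the range, this runs the benchmark at most ~4 times instead of 8.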
For Windows, I recommend reading the following document to learn what the OS reports as graphics memory. It depends both on discrete vs. integrated graphics adapters and on Vista vs. pre-Vista:
On Linux, the only thing I found is NVIDIA-specific: the NV-CONTROL X server extension with its NV_CTRL_VIDEO_RAM attribute. The API (NVCtrlLib.h and NVCtrl.h) ships with nvidia-settings.
I remember a sample from NVIDIA giving the video RAM size like MalcolmB posted above. I also thought one could allocate textures of GL_MAX_TEXTURE_SIZE in a loop until it fails; that would give some hint of the video memory (free the textures when done). It would be far from accurate, but it’s something. If textures are the problem, it’s better to use texture compression; visual quality is then another issue.
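The allocate-until-failure idea can be sketched as below. `try_alloc` is a hypothetical stand-in for creating a real GL texture (glGenTextures + glTexImage2D) and checking glGetError(); passing it in as a callback keeps the counting logic separate from any GL code:

```cpp
#include <cstddef>
#include <functional>

// Crude VRAM estimate: "allocate" chunk_mb-sized textures until an
// allocation fails, then return the total that succeeded, in MB.
// The caller must free the real textures afterwards.
std::size_t estimate_vram_mb(const std::function<bool(std::size_t)>& try_alloc,
                             std::size_t chunk_mb, std::size_t max_chunks) {
    std::size_t total = 0;
    for (std::size_t i = 0; i < max_chunks; ++i) {
        if (!try_alloc(chunk_mb))   // out of memory (or GL error): stop
            break;
        total += chunk_mb;          // this chunk fit, count it
    }
    return total;
}
```

Note the driver may silently page textures out to system RAM instead of failing, which is one reason the result is only a hint.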
In fact, there are five methods mentioned in http://msdn.microsoft.com/en-us/library/cc308070(VS.85).aspx (thanks to “overlay”):
GetVideoMemoryViaDirectDraw;
GetVideoMemoryViaWMI;
GetVideoMemoryViaDxDiag;
GetVideoMemoryViaD3D9;
GetVideoMemoryViaDXGI.
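Of these, the DXGI route (Vista and later) reports dedicated memory most directly, via the DedicatedVideoMemory field of DXGI_ADAPTER_DESC. A minimal sketch for the first adapter; it is Windows-only, so a stub is included here purely so the file compiles elsewhere:

```cpp
#include <cstddef>

#ifdef _WIN32
#include <dxgi.h>
#pragma comment(lib, "dxgi.lib")

// Dedicated VRAM of adapter 0, in bytes, via DXGI. Returns 0 on failure.
std::size_t dedicated_vram_bytes() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory),
                                 reinterpret_cast<void**>(&factory))))
        return 0;

    std::size_t bytes = 0;
    IDXGIAdapter* adapter = nullptr;
    if (factory->EnumAdapters(0, &adapter) != DXGI_ERROR_NOT_FOUND) {
        DXGI_ADAPTER_DESC desc;
        if (SUCCEEDED(adapter->GetDesc(&desc)))
            bytes = desc.DedicatedVideoMemory;  // dedicated, not shared
        adapter->Release();
    }
    factory->Release();
    return bytes;
}
#else
// Non-Windows stub so this sketch still compiles; DXGI does not exist here.
std::size_t dedicated_vram_bytes() { return 0; }
#endif
```

The same DXGI_ADAPTER_DESC also has SharedSystemMemory, which should help distinguish the dedicated vs. shared totals that szDisplayMemoryEnglish was conflating.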