Get amount of graphics memory



Adrenalin
08-17-2008, 07:31 AM
Hello,
How do I get the amount of graphics memory and the name of the graphics card inside a program that uses OpenGL, GLUT, GLEW and Cg? Is there any possibility to query this sort of data under both Windows AND Linux?

overlay
08-17-2008, 02:21 PM
For the name, you can use:

const GLubyte *vendor = glGetString(GL_VENDOR);
const GLubyte *renderer = glGetString(GL_RENDERER);
const GLubyte *version = glGetString(GL_VERSION);

For instance:

vendor="NVIDIA Corporation"
renderer="GeForce 6800/PCI/SSE2"
version="2.1.2 NVIDIA 169.12"

For the amount of memory, there is no OpenGL call that I'm aware of.
I haven't seen anything to address this issue in the new OpenGL 3.0 spec either.

I found this document for Microsoft Windows Vista (but it uses a DirectX call):

http://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/GraphicsMemory.doc

For Linux, one solution is to parse the output of lspci -vvv: locate the "VGA compatible controller" entry and look for the region with a prefetchable memory size.

For example, it looks like this on my nVidia GeForce 6800 with 256MB:

01:00.0 VGA compatible controller: nVidia Corporation NV41.1 [GeForce 6800] (rev a2) (prog-if 00 [VGA controller])
Subsystem: nVidia Corporation Unknown device 0245
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
Latency: 0
Interrupt: pin A routed to IRQ 16
Region 0: Memory at dd000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at c0000000 (64-bit, prefetchable) [size=256M]
Region 3: Memory at de000000 (64-bit, non-prefetchable) [size=16M]
[virtual] Expansion ROM at dfe00000 [disabled] [size=128K]
Capabilities: <access denied>
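
If you want to automate that, here's a rough, untested sketch that shells out to lspci and pulls the largest prefetchable region size (the function name is made up, and keep in mind it only reads the BAR size):

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Returns the largest prefetchable BAR size in MB reported by
   "lspci -vvv" for the VGA controller, or 0 on failure. */
static unsigned long QueryPrefetchableSizeMB(void)
{
    FILE *p = popen("lspci -vvv 2>/dev/null", "r");
    char line[512];
    unsigned long best = 0;
    int in_vga = 0;

    if (p == NULL)
        return 0;

    while (fgets(line, sizeof(line), p)) {
        /* A new device entry starts at column 0; its regions are indented. */
        if (line[0] != ' ' && line[0] != '\t')
            in_vga = (strstr(line, "VGA compatible controller") != NULL);

        /* The leading space in " prefetchable" skips "non-prefetchable" lines. */
        if (in_vga && strstr(line, " prefetchable") != NULL) {
            char *sz = strstr(line, "[size=");
            if (sz) {
                unsigned long mb = strtoul(sz + 6, &sz, 10);
                if (*sz == 'G') mb *= 1024;   /* normalize GB to MB */
                if (*sz == 'K') mb /= 1024;   /* normalize KB to MB */
                if (mb > best)
                    best = mb;
            }
        }
    }
    pclose(p);
    return best;
}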

jeffb
08-17-2008, 06:56 PM
This isn't an accurate way to get the framebuffer size; the size of the BAR may not equal the size of the framebuffer. This is common with larger FBs.

overlay
08-17-2008, 10:06 PM
Hi jeffb,

Where did Adrenalin ask for the framebuffer size? What do you mean by BAR, please? Are you commenting on the DirectX call or the Linux method?

jeffb
08-17-2008, 10:29 PM
overlay described a way to dump the PCI Base Address Registers on Linux, and the implication was that this:

> Region 1: Memory at c0000000 (64-bit, prefetchable) [size=256M]

means that it has 256MB of graphics memory. But that's not a safe assumption to make.

tamlin
08-18-2008, 11:00 AM
jeffb is correct. I believe all cards with over 256MB of VRAM present it in smaller "chunks"; I've personally never seen an aperture larger than 256MB.

The reason for this, I suspect, is that if you had a 1GB card and had to map it all in one chunk, it would suck up 1GB of virtual address space (for the kernel). For 32-bit Windows that's over half of its usable space - not to mention it'd make it impossible to run Windows with the /3GB switch.

But back to the OP's question - there is no function in OpenGL to query the amount of VRAM.

overlay
08-18-2008, 01:43 PM
I can confirm jeffb's and tamlin's comments. This is the result on an nVidia Quadro FX 3600M with 512MB:


01:00.0 VGA compatible controller: nVidia Corporation Device 061c (rev a2)
Subsystem: Dell Device 019b
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 16
Region 0: Memory at f5000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at e0000000 (64-bit, prefetchable) [size=256M]
Region 3: Memory at f2000000 (64-bit, non-prefetchable) [size=32M]
Region 5: I/O ports at ef00 [size=128]
[virtual] Expansion ROM at f6e00000 [disabled] [size=128K]
Capabilities: [60] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [68] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
Address: 0000000000000000 Data: 0000
Capabilities: [78] Express (v1) Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <256ns, L1 <4us
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x16, ASPM L0s L1, Latency L0 <256ns, L1 <1us
ClockPM- Suprise- LLActRep- BwNot-
LnkCtl: ASPM L0s L1 Enabled; RCB 128 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
Capabilities: [100] Virtual Channel <?>
Capabilities: [128] Power Budgeting <?>
Capabilities: [600] Vendor Specific Information <?>
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nvidia

Tom Flynn
10-12-2008, 11:51 PM
Yeah, it's too bad this info isn't easily available. I know the driver has to have this information. Just let us retrieve it ;-)

Anyway, one approach might be to set up a 1x1 window and, each frame, call glGenTextures() to get a new texid, glBindTexture() to bind it, then glTexImage2D() with a pointer to a texsize x texsize x 4 buffer you've allocated at the start of the app, draw a triangle (to make sure the texture actually gets sent down), and call glAreTexturesResident() with the list of texids you've used so far. When glAreTexturesResident() fails, you have a rough idea of how much video memory is available to your app. (Don't forget to delete the textures you've sent down once you're finished. :)
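
Something like this rough, untested sketch, assuming a current GL context from the 1x1 window (the constants and the helper name are arbitrary):

#include <GL/glut.h>
#include <stdlib.h>

#define TEX_DIM      512     /* each 512x512 RGBA8 texture is 1MB */
#define MAX_TEXTURES 4096    /* probe at most 4GB */

/* Allocates textures until one stops being resident; returns a rough
   estimate of available video memory in MB. */
static GLuint ProbeVideoMemoryMB(void)
{
    GLuint   texids[MAX_TEXTURES];
    GLubyte *pixels = malloc(TEX_DIM * TEX_DIM * 4);
    GLuint   count = 0;

    while (count < MAX_TEXTURES) {
        GLboolean resident[MAX_TEXTURES];

        glGenTextures(1, &texids[count]);
        glBindTexture(GL_TEXTURE_2D, texids[count]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEX_DIM, TEX_DIM, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        count++;

        /* Draw a triangle so the texture actually gets sent down, then
           ask whether everything allocated so far is still resident. */
        glEnable(GL_TEXTURE_2D);
        glBegin(GL_TRIANGLES);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
        glFinish();

        if (!glAreTexturesResident(count, texids, resident))
            break;    /* at least one texture has been evicted */
    }

    glDeleteTextures(count, texids);    /* clean up what we sent down */
    free(pixels);
    return count;    /* 1MB per texture */
}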

Maybe there's an easier way to go about this, but it's the first thing that comes to mind.

dylhoxic
10-13-2008, 01:22 AM
With SDL, you can easily get the amount of video memory:

http://www.libsdl.org/cgi/docwiki.cgi/SDL_VideoInfo
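
For instance (an untested SDL 1.2 sketch; video_mem is in kilobytes and only meaningful when a hardware surface is available):

#include <SDL/SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    /* video_mem holds the total video memory in KB (SDL 1.2),
       valid only when hw_available is set. */
    const SDL_VideoInfo *info = SDL_GetVideoInfo();
    if (info && info->hw_available)
        printf("Video memory: %u KB\n", info->video_mem);

    SDL_Quit();
    return 0;
}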

yooyo
10-13-2008, 01:55 AM
My friend, one of the black-belt Intel programmers, gave me the following piece of code... It works on NVidia.



#include <stdio.h>
#include <windows.h>

DWORD GetVideoMemorySizeBytes(void)
{
    /* Input/output buffers for an undocumented NVIDIA driver escape;
       0x27 presumably selects the memory query. */
    DWORD i[5] = { 0, 0, 0x27, 0, 0 };
    DWORD o[5] = { 0, 0, 0, 0, 0 };

    HDC hdc = CreateDC("DISPLAY", 0, 0, 0);
    if (hdc == NULL) {
        return 0;
    }

    /* 0x7032 is a vendor-specific escape code; 0x14 is sizeof(i) and sizeof(o). */
    int s = ExtEscape(hdc, 0x7032, 0x14, (LPCSTR)i, 0x14, (LPSTR)o);

    DeleteDC(hdc);

    if (s <= 0) {
        return 0;
    }

    /* o[3] holds the size in MB; convert to bytes. */
    return o[3] * 1048576;
}

int main(int argc, char *argv[])
{
    printf("Video memory size : %lu bytes\n", GetVideoMemorySizeBytes());
    return 0;
}

Korval
10-13-2008, 02:11 AM
> Works on NVidia.

And on ATi?

tamlin
10-13-2008, 02:11 AM
While current hardware shares the same RAM for the framebuffer and all other buffers, it is possible, even if unlikely, that a future implementation might have separate RAM for the framebuffer, textures, and "other" data (for example, to implement different compression schemes on different kinds of data).

Also, on a unified memory setup (UMA), I could easily see it having e.g. 32MB of dedicated RAM for the framebuffer and possibly some caching, with the rest being borrowed from system RAM. How should such a setup be reported, given that there are performance differences between the different kinds of memory, and the reason to ask the implementation about memory is performance related?

It's true that today most (all, except UMA?) gfx hardware has only a contiguous chunk of RAM it uses, but will that always be the case? OpenGL shouldn't really lock itself into a single implementation (unlike DirectX) and therefore has to look forward.

Perhaps a "solution" could be a (possibly everlasting) EXT function (as this question indeed surfaces often enough to IMHO warrant something)?

Comments?

Tom Flynn
10-13-2008, 09:33 AM
> Works on NVidia.
> And on ATi?

And not Windows-only?-)

Tom Flynn
10-13-2008, 10:16 AM
> Also, on a unified memory setup (UMA), I could easily see it having e.g. 32MB of dedicated RAM for the framebuffer and possibly some caching, with the rest being borrowed from system RAM. How should such a setup be reported, given that there are performance differences between the different kinds of memory, and the reason to ask the implementation about memory is performance related?

Of course it's performance related. That's why there are calls asking whether textures are resident, whether a texture will fit into RAM (proxy textures), and how to prioritize textures, and why there are hints as to whether a VBO is static or dynamic draw. It's all about trying to make sure an app has what it considers performance critical in the fastest memory available.
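
For instance, the proxy check looks roughly like this (a minimal sketch):

/* Ask the driver whether a 4096x4096 RGBA8 texture would fit,
   without actually creating it. */
GLint width = 0;
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, 4096, 4096, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
if (width == 0) {
    /* the implementation can't handle a texture of this size/format */
}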



> It's true that today most (all, except UMA?) gfx hardware has only a contiguous chunk of RAM it uses, but will that always be the case? OpenGL shouldn't really lock itself into a single implementation (unlike DirectX) and therefore has to look forward.

Totally agree. Variants of OpenGL are used all the way from desktops, to consoles, to mobile devices, and the usage of RAM varies among all those devices. But the OpenGL driver in each of them knows how much RAM it has to work with in order to determine whether a texture can be resident or a static VBO can go into faster RAM. From an app's point of view, it can use that number to plan accordingly. The app is going to need to plan its resources differently if it is on a small laptop vs. a desktop with 8GB of RAM and 1GB of video memory.



Perhaps a "solution" could be a (possibly everlasting) EXT function (as this question indeed surfaces often enough to IMHO warrant something)?

Comments?

Yeah, that would be nice. And I don't think it matters much to the app whether that is a GL_EXT function or a GLX_EXT / WGL_EXT type of function.

James_Bond
08-12-2009, 07:33 AM
Up.

I'm currently looking for a method to do this under Linux (it's already done for Mac OS X and Windows, if anyone's interested).
This is possible for nVidia cards by parsing the output of the nvidia-settings command, but there's no way to do this for ATI cards.

I haven't found a generic way yet. /proc/ maybe? Any thoughts (it's been 10 months, plenty of time to think about it :D)?

elFarto
08-12-2009, 08:23 AM
Go-go GL_ATI_meminfo (http://www.opengl.org/registry/specs/ATI/meminfo.txt). Now if only Nvidia would implement this...
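
Querying it is just glGetIntegerv() with the tokens from the spec. A minimal, untested sketch (the helper name is made up):

#include <GL/gl.h>
#include <stdio.h>

/* Tokens from the GL_ATI_meminfo spec, in case your headers lack them. */
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_VBO_FREE_MEMORY_ATI          0x87FB
#define GL_TEXTURE_FREE_MEMORY_ATI      0x87FC
#define GL_RENDERBUFFER_FREE_MEMORY_ATI 0x87FD
#endif

/* Each query returns 4 values (in KB): total free, largest free block,
   total auxiliary free, largest auxiliary free block. Assumes the
   GL_ATI_meminfo string was found in glGetString(GL_EXTENSIONS). */
void PrintFreeTextureMemory(void)
{
    GLint mem[4];
    glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, mem);
    printf("Free texture memory: %d KB (largest block %d KB)\n", mem[0], mem[1]);
}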

Regards
elFarto

James_Bond
08-12-2009, 11:53 PM
Do you know if it is also possible to retrieve the number of GPUs and their names? Because AFAICS GL_ATI_meminfo only deals with VRAM (which is a good start though :)).


> Now if only Nvidia would implement this...
True. Or it should be an ARB extension, like GL_ARB_meminfo...

Brolingstanz
08-13-2009, 05:26 AM
What about WBEM? Windows' WMI is based on it and I believe there's an implementation for Apple, Linux and Solaris.

James_Bond
08-13-2009, 06:01 AM
> ...I believe there's an implementation for Apple, Linux and Solaris.
Yes there is. Thanks.
http://openwbem.sourceforge.net/

But for now, I'll deal only with nVidia GPUs, parsing nvidia-settings output. Too bad for ATI/AMD cards...

V-man
08-13-2009, 07:08 AM
> Up.
>
> I'm currently looking for a method to do this under Linux (it's already done for Mac OS X and Windows, if anyone's interested).
> This is possible for nVidia cards by parsing the output of the nvidia-settings command, but there's no way to do this for ATI cards.
>
> I haven't found a generic way yet. /proc/ maybe? Any thoughts (it's been 10 months, plenty of time to think about it :D)?

GL_ATI_meminfo is nice, but why do you absolutely need that info? You're making a program that gives system info?

James_Bond
08-14-2009, 01:10 AM
Yes. And particularly graphics card info such as:
- number of GPUs,
- VRAM size per GPU,
- name of each GPU,
- allocated and available memory of each GPU (I've pushed this feature aside).

I've done this for Windows, Mac OS X and Linux (nVidia cards only).

Dark Photon
08-14-2009, 08:28 PM
> But for now, I'll deal only with nVidia GPUs, parsing nvidia-settings output. Too bad for ATI/AMD cards...
For NVidia on Linux, there's libXNVCtrl (which uses the NV-CONTROL X extension), which I believe is what nvidia-settings is built on.

You can call these APIs directly in your own apps, which saves parsing nvidia-settings output.

Here's some source that queries GPU mem (in KB) for NVidia cards via NV-CONTROL:



int ram_size;    /* reported in kilobytes */
XNVCTRLQueryAttribute( dpy, scr, 0, NV_CTRL_VIDEO_RAM, &ram_size );


If you want a working program, just give the word.
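
In the meantime, here's a rough idea of the full plumbing (untested; header paths may vary, link with -lXNVCtrl -lX11):

#include <stdio.h>
#include <X11/Xlib.h>
#include <NVCtrl/NVCtrl.h>
#include <NVCtrl/NVCtrlLib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr, ram_size = 0;

    if (dpy == NULL)
        return 1;

    /* Check that the screen is driven by the NVIDIA driver, then
       query the video RAM attribute (reported in KB). */
    scr = DefaultScreen(dpy);
    if (XNVCTRLIsNvScreen(dpy, scr) &&
        XNVCTRLQueryAttribute(dpy, scr, 0, NV_CTRL_VIDEO_RAM, &ram_size))
        printf("Video RAM: %d KB\n", ram_size);

    XCloseDisplay(dpy);
    return 0;
}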