How to query the total amount of video memory?

I’ve heard some vendor specific driver APIs can do this, but is there any general way? How do you usually get the amount of video memory?

Thanks.

Don’t.
Let users control texture detail (and geometry detail if you use VBOs and your geometry data is really large).
Doom 3 runs on 64MB cards in Ultra detail; it just runs slowly. It’s not your business to decide whether or not this is acceptable to your users.

It would often be useful to provide sensible defaults based on available video memory. However I agree that the user should be able to change/override the memory/detail settings if they want to.

Depending on the nature of the application (game, CAD, VR, etc.), most users might have no idea what video memory is. This again might force software developers to choose a very low default to prevent support nightmares.

If I were going to do something like this, I would detect the VRAM value at install time and write a config with a sane default, but then let the user adjust it as needed.

Just because I want to know the user’s spec doesn’t mean their ability to change game settings is lost. I don’t see how providing a sensible default setting conflicts with the flexibility of the game engine.

I just want to detect the amount of video memory, it’s as simple as it seems. How to provide a better user experience is left to another question.

Thus why I’d do it at install time, via OS-specific calls if they exist.
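Something like this, maybe (a minimal sketch; `DetectVideoMemoryMB()` is only a stand-in for whatever OS- or vendor-specific query the installer would actually use):

```cpp
#include <fstream>

// Stand-in for the real OS/vendor-specific query (DirectDraw, a driver
// API, the registry, ...). Hard-coded here purely for illustration.
unsigned DetectVideoMemoryMB() { return 128; }

int main()
{
    unsigned vramMB = DetectVideoMemoryMB();

    // Map detected VRAM to a conservative default detail level.
    const char* detail = "low";
    if (vramMB >= 256)      detail = "high";
    else if (vramMB >= 128) detail = "medium";

    // Write the sane default; the user is free to edit this file later.
    std::ofstream cfg("settings.cfg");
    cfg << "video_memory_mb = " << vramMB << "\n";
    cfg << "texture_detail  = " << detail << "\n";
    return 0;
}
```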

Why can’t glGet return that? You can query almost everything in GL except the amount of memory the server has. I guess there is probably a good reason for that, but I can’t see it…

The answer to that question is pretty simple: what happens when your assumptions about the size of video memory are no longer valid?

If you see that a card has 128MB of RAM, you make assumptions based on that. Namely, you assume that you can freely use about 100MB (or so) of textures and VBOs before blowing the system. More importantly, you assume that your “resident” textures are all stored in video memory.

However, what if you see a card that has 64MB of RAM but uses its video RAM as a cache? This is possible: the driver could feed pieces of textures to the card via interrupts (and with PCIe, it’s a little more reasonable). By all rights, you would get pretty good performance from using 256MB of textures. Your assumptions about how the driver will use that video memory are no longer valid, and thus you make poor decisions based on them.

I agree that there is a valid argument for hiding the VRAM size from users when programming games or other traditional graphics applications. But I think 3D has changed enough over the past few years that this classic argument no longer holds water in certain situations.

For GPGPU applications, knowing the amount of RAM is a sadly missing capability that is much needed. GPGPU applications don’t follow the same texture/VRAM usage patterns as traditional 3D apps, and the drivers don’t always do a good job of managing their data.

Typically, they allocate a few very large floating-point textures that completely fill VRAM. In such situations, the OpenGL API hiding memory information and handling texture caching behind your back can cause problems: if the driver decides to cache out a 4k by 4k float texture, that’s a serious performance hit when the texture is later needed. Because the GPGPU application knows its VRAM usage patterns much better than the driver, it makes sense to expose some of this functionality.
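To put numbers on that (a sketch assuming the ARB_texture_float extension, glext.h, and a current GL context): a single 4k by 4k RGBA float texture is already a quarter of a gigabyte.

```cpp
// 4096 * 4096 texels * 4 channels * 4 bytes = 256MB in one texture,
// so having the driver silently page it in and out hurts badly.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB,
             4096, 4096, 0, GL_RGBA, GL_FLOAT, NULL);
```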

For typical 3D apps, I agree that the old OpenGL functionality is arguably sufficient, although I have heard of people writing their own texture management layer on top of OpenGL because the driver’s management was causing bad stuttering in the framerate (with varying success, because you can only suggest things to OpenGL; you cannot tell it to store a given texture on the card).
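The “suggest, not tell” part presumably refers to the old texture-priority/residency calls; a minimal sketch:

```cpp
// Assumes a current GL context and a texture object that already has data.
void HintResident(GLuint tex)
{
    // Priorities are only hints; 1.0 means "try hard to keep it in VRAM".
    GLclampf priority = 1.0f;
    glPrioritizeTextures(1, &tex, &priority);

    // Residency can be queried but not forced.
    GLboolean resident = GL_FALSE;
    if (!glAreTexturesResident(1, &tex, &resident))
    {
        // Not resident: the app can only react (drop detail, re-upload
        // smaller mips, ...); it cannot pin the texture on the card.
    }
}
```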

For GPGPU applications, knowing the amount of RAM is a sadly missing capability that is much needed. GPGPU applications don’t follow the same texture/VRAM usage patterns as traditional 3D apps, and the drivers don’t always do a good job of managing their data.
You’re trying to put a square peg into a round hole already; you shouldn’t be surprised if it doesn’t fit all the way, or leaves gaps at the sides.

OpenGL is, and must always be, a graphics library, not a GPGPU library. If you’re not displaying images, OpenGL does not have to support you in any way. You might still be able to make OpenGL do what you want, but OpenGL does not need to provide features that serve your interests, especially if that comes at the expense of the needs of actual graphics applications.

I’m not saying that GPGPU is a bad thing. I’m just saying that OpenGL should not add features expressly for the purpose of helping the GPGPU crowd, since that is not the purpose of OpenGL.

While I agree with the others that doing so is a bad thing, check this out: http://www.area3d.net/nitrogl/VideoMemory.zip

It uses DirectDraw… I haven’t tried it within the same process/thread as an OpenGL app, though.
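Presumably it does something along these lines (a sketch using IDirectDraw7::GetAvailableVidMem on Windows; link against ddraw.lib and dxguid.lib):

```cpp
#include <windows.h>
#include <ddraw.h>

// Returns total and free local video memory in bytes, or false on failure.
bool QueryVideoMemory(DWORD* total, DWORD* freeMem)
{
    LPDIRECTDRAW7 dd = NULL;
    if (FAILED(DirectDrawCreateEx(NULL, (void**)&dd, IID_IDirectDraw7, NULL)))
        return false;

    DDSCAPS2 caps;
    ZeroMemory(&caps, sizeof(caps));
    // Ask about local video memory only (exclude AGP/non-local memory).
    caps.dwCaps = DDSCAPS_VIDEOMEMORY | DDSCAPS_LOCALVIDMEM;

    HRESULT hr = dd->GetAvailableVidMem(&caps, total, freeMem);
    dd->Release();
    return SUCCEEDED(hr);
}
```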

Things that have clues in the direction you’re looking:

  • NVIDIA SDK 8.0, GetGPUAndSystemInfo example in the DirectX folder

  • Entech (of Power Strip fame) GPU detection library (as advertised in 3D Mark 2005 loading screen)

I saw that Prince Of Persia 2 has a pretty thorough user configuration detection dialog, including the VRAM size; dunno which method they used.

And now for some real-world experience: NVIDIA has a certain threshold above which it reports out of memory, while ATI, on the contrary, accepts any amount of textures even if it trashes the card to death.

In the last product I worked on, we had to manually force the texture LOD down a step and reload all textures if loading failed on NVIDIA, and pray that the user didn’t think ATI got stuck when the framerate got low because of massive texture trashing.

It would be really nice to know the amount of memory available so that we could automatically adjust the data sets and LOD settings to a sane default, achieving better quality on a broader range of currently available cards (and, of course, allow the user to manually override things if she/he wished to).
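That NVIDIA fallback loop looked roughly like this (a sketch: in the real product every texture was reloaded, not just one, and the mip chain is assumed to be built by the app already):

```cpp
// mipPixels[i] holds RGBA8 data for mip level i (dimensions w>>i by h>>i).
// Start at full resolution; if the driver reports GL_OUT_OF_MEMORY,
// force the LOD down a step and reload the chain one level smaller.
// Assumes a current GL context and a bound 2D texture object.
bool UploadWithLodFallback(unsigned char** mipPixels, int levels, int w, int h)
{
    for (int base = 0; base < levels; ++base)
    {
        glGetError();                                   // clear stale errors
        for (int i = base; i < levels; ++i)
            glTexImage2D(GL_TEXTURE_2D, i - base, GL_RGBA8,
                         w >> i, h >> i, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, mipPixels[i]);
        if (glGetError() != GL_OUT_OF_MEMORY)
            return true;                                // fits at this LOD
        // Otherwise drop the top level and try again.
    }
    return false;
}
```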

Can’t you just do that based off the name of the graphics card?

pray that the user didn’t think ATI got stuck when the framerate got low because of massive texture trashing
BTW, it’s thrashing, not trashing.

Can’t you just do that based off the name of the graphics card?
Then, your shiny new GeForce 6600 doesn’t work on Doom 3, simply because the 6600 came out after Doom 3 shipped.

Sure it would; just look for the major version number bit in the string… If you don’t recognise it, assume next-gen and put the detail up (something like the sketch below).

Can’t see that it’s that much of an issue, tbh… I never play a game as installed; I always tweak the settings first.
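A sketch of that kind of renderer-string check; the lookup table and the “unknown means next-gen” fallback are only illustrative:

```cpp
#include <string.h>
// Assumes a current GL context and the usual GL headers.

// Guess a default detail level (0 = lowest) from the GL_RENDERER string.
int GuessDetailLevel()
{
    const char* renderer = (const char*)glGetString(GL_RENDERER);
    if (!renderer)
        return 0;                       // no context yet: be conservative

    // Illustrative table only; real shipping tables are much longer.
    struct Entry { const char* token; int detail; };
    const Entry known[] = {
        { "GeForce2",   0 },
        { "GeForce4",   1 },
        { "GeForce FX", 2 },
        { "RADEON 9",   2 },
    };
    for (size_t i = 0; i < sizeof(known) / sizeof(known[0]); ++i)
        if (strstr(renderer, known[i].token))
            return known[i].detail;

    return 3;   // unrecognised: assume next-gen and put the detail up
}
```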

Just query the max texture size supported.

I think it closely tracks card memory :rolleyes:

Originally posted by yjh1982:
Just query the max texture size supported.

I think it closely tracks card memory :rolleyes:
Huh? Clarify please…

-SirKnight

Originally posted by SirKnight:
Huh? Clarify please…

Because a texture must be loaded into card memory when it’s used, isn’t it? I don’t think ATI’s or NVIDIA’s drivers will support textures much larger than card memory :D

Looking at your card’s max texture size isn’t going to tell you very much about how much VRAM you have. Consider the GeForce FX and 6800: they both support a 4096x4096 max 2D texture size. A 32-bit 4096x4096 texture only consumes 64MB of memory, yet these cards come with much larger VRAM sizes. True, no card vendor is going to support a texture size that would swamp the amount of memory on the card (at least the way things are done currently), but that doesn’t tell you anything about how much VRAM a given card has. There will be 512MB cards very soon, based on current GPU architectures that still have the 4096 limit.

-SirKnight
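For reference, the query and the arithmetic above (a fragment; assumes a current GL context):

```cpp
// GL_MAX_TEXTURE_SIZE only bounds a single texture's dimensions;
// it says nothing about the total amount of VRAM.
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

// e.g. 4096 * 4096 * 4 bytes = 64MB for a 32-bit RGBA texture,
// well below the 256MB or 512MB of VRAM on the same cards.
double maxTexMB = (double)maxSize * maxSize * 4.0 / (1024.0 * 1024.0);
```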

But internalformat can be GL_RGBA12 or GL_RGBA16.
Can’t those be used? :eek: