
How to query the total amount of video memory?



991060
01-11-2005, 08:00 AM
I've heard some vendor specific driver APIs can do this, but is there any general way? How do you usually get the amount of video memory?

Thanks.

zeckensack
01-11-2005, 08:57 AM
No, there isn't a general way.
Let users control texture detail (and geometry detail if you use VBOs and your geometry data is really large).
Doom 3 runs on 64MB cards in ultra detail; it just runs slowly. It's not your business to decide whether or not that is acceptable to your users.

stig
01-11-2005, 09:17 AM
It would often be useful to provide sensible defaults based on available video memory. However, I agree that the user should be able to change/override the memory/detail settings if they want to.

Depending on the nature of the application (game, CAD, VR etc.), most users might have no idea what video memory is. This again might force software developers to choose a very low default to prevent support nightmares.

bobvodka
01-11-2005, 10:05 AM
If I was going to do something like this, I would detect the VRAM value on install and write a config with a sane default, but then let the user adjust as needed.

991060
01-11-2005, 10:07 AM
Just because I want to know the user's specs doesn't mean their ability to change game settings is lost. I don't see how providing a sensible default setting conflicts with the flexibility of the game engine.

I just want to detect the amount of video memory; it's as simple as that. How to provide a better user experience is a separate question.

bobvodka
01-11-2005, 10:17 AM
That's why I'd do it on install, via OS-specific calls if they exist.

KRONOS
01-11-2005, 10:48 AM
Why can't glGet return that? You can query almost everything in GL except the amount of memory the server has. I guess there is probably a good reason for that, but I can't see it...

Korval
01-11-2005, 11:06 AM
The answer to that question is pretty simple: what happens when your assumptions about the size of video memory are no longer valid?

If you see that a card has 128MB of RAM, you make assumptions based on that. Namely, you assume that you can freely use about 100 (or so) MB of textures and VBOs before blowing past the limit. More importantly, you assume that your "resident" textures are all stored in video memory.

However, what if you see a card that has 64MB of RAM, but uses its video RAM as a cache? This is possible, where the driver feeds pieces of textures to the card via interrupts (and with PCIe, it's a little more reasonable). By all rights, you will get pretty good performance from using 256MB of texture. Your assumptions about how the driver will use that video memory are no longer valid, and thus you make poor decisions based on them.

Stephen_H
01-11-2005, 11:11 AM
I agree that there is a valid argument for hiding the VRAM size from users when programming games or other traditional graphics applications. I think 3D has changed enough over the past few years that this classic argument no longer holds water in certain situations.

For GPGPU applications, the ability to know the amount of RAM is a sadly missing feature that is much needed. GPGPU applications don't follow the same texture/VRAM usage patterns as traditional 3D apps, and the drivers don't always do a good job of managing their data.

Typically, they allocate a few very large floating-point textures that completely fill VRAM. In such situations, the OpenGL API hiding memory information and handling texture caching can cause problems... if the driver decides to cache out a 4k by 4k float texture, that's a serious performance hit when that texture is later needed. Because the GPGPU application knows its VRAM usage patterns much better than the driver, it makes sense to expose some of this functionality.

For typical 3D apps, I agree that the old OpenGL functionality is arguably sufficient, although I have heard of people writing their own texture management layer on top of OpenGL because the driver's management was causing bad stuttering in the framerate (with varying success, because you can only suggest things to OpenGL, not tell it to store a given texture on the card).

Korval
01-11-2005, 11:50 AM
For GPGPU applications, the ability to know the amount of RAM is a sadly missing feature that is much needed. GPGPU applications don't follow the same texture/VRAM usage patterns as traditional 3D apps, and the drivers don't always do a good job of managing their data.

You're trying to put a square peg into a round hole already; you shouldn't be surprised if it doesn't fit all the way, or leaves gaps at the sides.

OpenGL is, and must always be, a graphics library, not a GPGPU library. If you're not displaying images, OpenGL does not have to support you in any way. You might still be able to make OpenGL do what you want it to, but OpenGL does not need to provide you with features that help your interests. Especially if it is at the expense of the needs of actual graphics applications.

I'm not saying that GPGPU is a bad thing. I'm just saying that OpenGL should not add features expressly for the purpose of helping the GPGPU crowd, since that is not the purpose of OpenGL.

NitroGL
01-11-2005, 12:09 PM
While I agree with the others that doing so is a bad thing, check this out: http://www.area3d.net/nitrogl/VideoMemory.zip

Uses DirectDraw... I haven't tried it within the same process/thread as an OpenGL app though.
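For reference, a minimal sketch of what such a DirectDraw query might look like (an assumption about the approach, not NitroGL's actual code); Win32 only, link against ddraw.lib and dxguid.lib:

#include <windows.h>
#include <ddraw.h>
#include <cstdio>

int main()
{
    IDirectDraw7* dd = NULL;
    if (FAILED(DirectDrawCreateEx(NULL, (void**)&dd, IID_IDirectDraw7, NULL)))
        return 1;

    DDSCAPS2 caps = {};
    caps.dwCaps = DDSCAPS_VIDEOMEMORY | DDSCAPS_LOCALVIDMEM;  // on-card memory only

    DWORD total = 0, avail = 0;
    if (SUCCEEDED(dd->GetAvailableVidMem(&caps, &total, &avail)))
        printf("local video memory: %lu MB total, %lu MB free\n",
               (unsigned long)(total >> 20), (unsigned long)(avail >> 20));

    dd->Release();
    return 0;
}

Whether this plays nicely alongside an active OpenGL context is exactly the open question above.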

speedy
01-11-2005, 01:19 PM
Things that might have clues in the direction you're looking:

* NVIDIA SDK 8.0, GetGPUAndSystemInfo example in the DirectX folder

* Entech (of Power Strip fame) GPU detection library (as advertised in 3D Mark 2005 loading screen)

I saw that Prince Of Persia 2 has a pretty thorough user configuration detection dialog, including the VRAM size; dunno which method they used.

And now for some real-world experience: NVIDIA has a certain threshold above which it reports out of memory, while ATI, on the contrary, accepts any amount of textures even if it trashes the card to death.

In the last product I worked on, we had to manually force texture LOD down a step and *reload* all textures if the loading failed on NVIDIA, and pray that the user didn't think ATI got stuck when the framerate got low because of massive texture trashing.

It would be really nice to know the amount of memory available so that we could automatically adjust the data sets and LOD settings to a sane default, achieving better quality on a broader range of currently available cards (and, of course, allow the user to manually override things if she/he wished to).
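A rough sketch of that kind of fallback, assuming the failure shows up as a GL error after glTexImage2D (the function name and structure are illustrative, not the code from that product):

#include <GL/gl.h>

// Returns true if the driver accepted the RGBA8 upload; false on failure
// (e.g. GL_OUT_OF_MEMORY), in which case the caller reloads the whole
// texture set one LOD step smaller.
bool try_upload_rgba8(GLuint tex, const unsigned char* pixels, int w, int h)
{
    while (glGetError() != GL_NO_ERROR) {}      // clear any stale error flags
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return glGetError() == GL_NO_ERROR;         // NVIDIA is the case that reports failure here
}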

Nutty
01-11-2005, 03:48 PM
Can't you just do that based off the name of the graphics card?

Korval
01-11-2005, 04:12 PM
pray that the user didn't think ATI got stuck when the framerate got low because of massive texture trashing.

BTW, it's thrashing, not trashing.


Can't you just do that based off the name of the graphics card?

Then your shiny new GeForce 6600 doesn't work on Doom 3, simply because the 6600 came out after Doom 3 shipped.

Nutty
01-11-2005, 04:42 PM
Sure it would; just look for the major version number in the string. If you don't recognise it, assume next-gen and turn the detail up.

Can't see that it's that much of an issue, tbh... I never play a game as installed; I always tweak the settings first.

yjh1982
01-11-2005, 04:46 PM
Just query the max texture size supported.

I think it's closely related to the card's memory :rolleyes:

SirKnight
01-11-2005, 05:21 PM
Originally posted by yjh1982:
Just query the max texture size supported.

I think it's closely related to the card's memory :rolleyes:

Huh? Clarify please...

-SirKnight

yjh1982
01-11-2005, 05:51 PM
Originally posted by SirKnight:

Originally posted by yjh1982:
Just query the max texture size supported.

I think it's closely related to the card's memory :rolleyes:

Huh? Clarify please...

-SirKnight

Because a texture has to be loaded into card memory when it's used, doesn't it? I don't think ATI's or NVIDIA's drivers will support a texture much larger than the card's memory :D

SirKnight
01-11-2005, 06:01 PM
Looking at your card's max texture size isn't going to tell you very much about how much VRAM you have. Consider the GeForce FX and 6800: they both support a 4096x4096 max 2D texture size. A 32-bit 4096x4096 texture only consumes 64MB of memory, yet these cards have much larger VRAM sizes. True, no card vendor is going to support a texture size that would swamp the amount of memory on the card (at least the way things are done currently), but that doesn't say anything about the amount of VRAM a given card will have. There will be 512MB cards very soon, based on current GPU architectures that still have the 4096 limit.

-SirKnight
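To make the arithmetic concrete, a small sketch of the query being discussed (assumes an active GL context):

#include <GL/gl.h>
#include <cstdio>

void print_max_texture_footprint()
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   // e.g. 4096 on GeForce FX/6800

    // Largest possible RGBA8 2D texture, in MB (no mipmaps):
    double mb = (double)maxSize * maxSize * 4.0 / (1024.0 * 1024.0);
    printf("max 2D texture: %dx%d (~%.0f MB)\n", maxSize, maxSize, mb);
    // 4096 x 4096 x 4 bytes is only 64MB whether the card has 128, 256 or
    // 512MB of VRAM, so this says nothing useful about total video memory.
}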

yjh1982
01-11-2005, 06:17 PM
But the internalformat can be GL_RGBA12 or GL_RGBA16.
Can't those be used? :eek:

yjh1982
01-11-2005, 06:19 PM
Excuse me... GL_RGB16 would still only be 128MB...

SirKnight
01-11-2005, 06:57 PM
Though you might be able to use other formats to consume more memory, you will never find a format that takes ALL of the VRAM, because you MUST have room for at least the framebuffer. So creating the largest memory-consuming texture you can still won't give much of a clue to the amount of VRAM a user's card has. :)

-SirKnight

991060
01-11-2005, 07:31 PM
OK, I'm back from bed; thanks for all the responses.

Basically, what we want to do with the detected number is just make some very basic assumptions and adjust texture/geometry LOD based on it. I don't think we'll depend on the number so heavily that every bit of the VRAM is used.

Speedy, do you know where I can find info about the Entech library? It looks like it's a commercial product, right?

Korval
01-11-2005, 10:23 PM
Basically, what we want to do with the detected number is just make some very basic assumptions and adjust texture/geometry LOD based on it. I don't think we'll depend on the number so heavily that every bit of the VRAM is used.

Yes, but, as I pointed out, making any assumptions based on it is not wise. Or, at least, not future-proof.

You could just ask the user to select an appropriate memory size. NWN did it like that, and I prefer it that way. You don't confuse a user by asking about "detail levels"; you just ask him what his card has in it.

Korrigan
01-12-2005, 01:40 AM
Knowing the amount of video memory can be useful for setting sensible defaults; I don't know why some guys here are so strictly against it. Even Doom 3 actually detects the amount of memory on your graphics card for its defaults. I agree that it should only be used as a clue and not as divine truth, though ;) and that the user should be able to alter the settings at will.

Just use DirectX to query the amount of video memory if you need it at the start of your application.

kansler
01-15-2005, 04:15 AM
Why not upload 1MB textures one at a time, using glPrioritizeTextures and checking glAreTexturesResident after each upload? Or am I missing something here?
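A rough sketch of that probing idea (function name and loop bound are illustrative, and as harsman notes below, drivers are free to report every texture as resident, so treat the result as a guess at best):

#include <GL/gl.h>
#include <vector>

// Upload 1MB RGBA8 textures one at a time until the driver reports one as
// non-resident, and return how many megabytes went in before that happened.
int estimate_vram_mb(int maxMB = 512)
{
    std::vector<GLuint> texs;
    std::vector<unsigned char> pixels(512 * 512 * 4, 0);   // one 1MB block

    int mb = 0;
    for (; mb < maxMB; ++mb)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
        GLclampf priority = 1.0f;
        glPrioritizeTextures(1, &tex, &priority);           // hint: keep it resident
        texs.push_back(tex);

        GLboolean resident = GL_FALSE;
        if (glAreTexturesResident(1, &tex, &resident) == GL_FALSE)
            break;                                          // first eviction: stop counting
    }
    glDeleteTextures((GLsizei)texs.size(), &texs[0]);
    return mb;
}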

harsman
01-15-2005, 06:23 AM
There's no guarantee that glAreTexturesResident returns useful info; it might always return true. Besides, 3Dlabs chipsets have a form of virtual memory where only the texels needed for rendering are paged in, so on their cards the results would be meaningless as well.

zed
01-15-2005, 10:47 AM
If you're using SDL you can use the following:

/* SDL 1.2; call SDL_Init(SDL_INIT_VIDEO) before this */
const SDL_VideoInfo* info = SDL_GetVideoInfo();
printf("video_mem = %u\n", info->video_mem);

I'm not 100% sure what it returns (does anyone know? Who's delved into the SDL code?)

JD
01-15-2005, 03:09 PM
Well, most games I know of put all resources into VRAM for speed reasons. I would just target one memory size, e.g. 64MB, and design my world maps to fit into that limit. It's much easier to deal with all things related to game dev when you target just a small set of capabilities; otherwise the system complexity can get out of hand. This goes for system memory as well.

knackered
01-16-2005, 06:00 AM
Err, does that apply to PS2 games too?!
I think you're mixing up video memory with system memory - most games target a specific system memory quantity, not video memory. A minimum video memory would be specified in order to hold the amount of the world visible in a single frame, not the whole world itself.

JD
01-16-2005, 10:52 AM
The world is divided into maps or levels. At the beginning of a level, all textures for that level are loaded into VRAM. When a level change occurs, the existing textures are offloaded from VRAM and the new set of textures is loaded in. This prevents the card from sourcing textures from AGP or, worse, system RAM. System RAM is another issue, but closely related to graphics. You don't want to go to disk to load objects into the game; you want them loaded into system RAM first and made available quickly for the particular level the gamer is playing, of course. You don't load the whole world, i.e. all the maps, into memory, since you don't play all maps all the time, just some maps or one map.

For non-PC games the architecture is built differently, so the latency you see in a PC game isn't visible. That's why you can stream data off the CD-ROM easily on a PlayStation, for example; if you did the same on a PC, the CD-ROM not only doesn't spin all the time (which would further reduce latency), but there is also a major problem with ATA speeds and trying to keep the CPU from starving. Two different architectures. But who cares about console games around here? I don't, and most don't either.

knackered
01-16-2005, 12:09 PM
Originally posted by JD:
This prevents the card from sourcing textures from AGP or, worse, system RAM.

What do you think AGP was invented for? To speed up load times between levels in games?! It is, of course, perfectly acceptable to page textures into video memory from AGP memory during play. What you don't want is to be doing this several times while rendering a single frame. I haven't a clue where you got your information about common practice from.


System RAM is another issue, but closely related to graphics. You don't want to go to disk to load objects into the game; you want them loaded into system RAM first and made available quickly for the particular level the gamer is playing, of course.

That's awfully considerate of you - I'm sure the player appreciates that, while the graphics are crap, at least there are no load times. Worth every penny of the 30. You do, of course, want to go to disk to load objects into your game, between things called 'levels'.



You don't load the whole world, i.e. all the maps, into memory, since you don't play all maps all the time, just some maps or one map.

Which game are you talking about? I get the impression you're talking about one specific game you've played, and not the wide variety of game genres there are sloshing around these days.


For non-PC games the architecture is built differently, so the latency you see in a PC game isn't visible.

A unified memory model does make things easier, yes. Texture updates no longer have to be considered so carefully, as there's virtually no cost... but that's mainly compared to system memory-to-video memory uploads, not so much AGP to video memory.