gl texture memory

02-24-2003, 07:05 AM
just a quicky, this one: any (standard) way of getting the amount of free texture memory through opengl? i've got too many high-res textures to hope they'll all upload, and knowing how much tex ram is available would help my mem manager no end... ta :)

02-24-2003, 07:44 PM
There is no way, as N megs may be gone because of frame buffers, other apps, different screen resolution, etc etc etc.

The user knows what the appropriate performance/quality trade-off is, so provide a slider for him/her.

02-24-2003, 08:37 PM
Sure there is. Just keep on loading a few textures (properly coded) and eventually a call to glTexImagexD will fail.

That will get you the amount in video mem + AGP mem.

If the driver doesn't flag an error, then your loop may run forever so watch out.

02-24-2003, 09:45 PM
Originally posted by V-man:
Sure there is. Just keep on loading a few textures (properly coded) and eventually a call to glTexImagexD will fail.

Err... No, won't it just start swapping the textures to disk (or to ordinary RAM - which in Windoze means to the slowest, most fragmented piece of disk available - or floppy ;) )...

I think the standard way this is handled is to simply provide the user with a detail setting. If your app performs poorly on their system, they turn the detail level down a bit to improve performance...

02-24-2003, 11:39 PM
bummer, well thanks anyway, folks... how's this for an idea though: upload textures one by one and measure how long each upload takes - then cut off when it starts slowing down? would this require a timer more accurate than QueryPerformanceCounter (1.193 MHz == 0.838 microsecond resolution)? are there enough trees in the world? ;)

edit: btw, i have no choice but to use 1024x1024 textures (eek) that's why i'm trying to come up with some sort of tex mem manager, to cache stuff as intelligently as possible...

[This message has been edited by mattc (edited 02-25-2003).]

02-25-2003, 02:14 AM
OpenGL manages the textures for you...

and you can use the texture proxy to query whether a texture fits resident on the card.

scenario: you have a manager that uses the proxy and so on to fit the maximal amount of textures on the card, and then the user decides to start another program that uses video mem... then the textures you nicely put there can be removed, and you have a lot less memory to play with. how should your manager solve that? the opengl driver solves it nicely, because it knows when a texture is thrown away, and knows that it needs to reload it if it shows up on screen.

02-25-2003, 02:37 AM
it's not about that... the nature of the stuff i'm working on is such that all textures are gonna be the same high res and dynamically generated - the current conservative estimate is easily over 128 meg (without compression), and over a gig without any optimisations.

seeing as i'm gonna be rendering to texture and uploading all the time, i figured the best approach would be to use the texture memory as a cache of sorts cos there's no hope of all the stuff being resident - and there's no need anyhow...

sorry if all this is too obscure, got an nda so i can't tell you exactly why i got so much high res texturing to do :)

going back to my question: would it be viable to try and time texture uploads and quit when it starts growing or something? current (minimum) texture size is 1024x512x4 (2 meg), would QueryPerformanceCounter() be accurate enough for measuring such uploads?

02-25-2003, 03:03 AM
try it

Tom Nuydens
02-25-2003, 03:53 AM
What about glAreTexturesResident()?

-- Tom

02-25-2003, 03:59 AM
i was gonna say "brilliant", but then found this in msdn on glAreTexturesResident()...

"...If textures reside in virtual memory (there is no texture memory), they are considered always resident."


edit: what i mean is, this prolly applies to textures held in agp memory as well...

[This message has been edited by mattc (edited 02-25-2003).]

Tom Nuydens
02-25-2003, 04:26 AM
On my 64MB GeForce3, I can have 55MB of resident textures, so that sounds perfect. Granted, however, the function's behaviour isn't well-specified enough to guarantee useful results on all cards/drivers. Has anyone tested it on a Radeon?

-- Tom

[This message has been edited by Tom Nuydens (edited 02-25-2003).]

02-25-2003, 04:31 AM

are you sure that there is no way?
i believe there is a small program on the nVidia webpage; i don't know its exact name - unfortunately, this program comes without source - i think it shows the memory;

02-25-2003, 04:32 AM
Originally posted by mattc:
edit: what i mean is, this prolly applies to textures held in agp memory as well...

I used it and it only applied to textures held in the memory on the graphics card (I tested it on an nVidia card).


Tom Nuydens
02-25-2003, 04:39 AM
Hmm. I just checked on a Parhelia and it always returns GL_FALSE :(

-- Tom

02-25-2003, 06:10 AM
cheers everyone :) going by my past experience with matrox drivers, they could easily have small bugs in surprising places - sounds to me like it's worth bringing this one up with the driver dev team...

02-25-2003, 06:33 AM
here is the URL to the above-mentioned program:

02-25-2003, 06:42 AM
djsnow, i appreciate the link but there's no source and anyhow, it's not agp memory that i'm interested in, it's the texture memory on the card...

02-25-2003, 07:25 AM
You could use DirectX to query the amount of memory available.
If you have the DirectX SDK installed, you have a program called CapsViewer, which shows under DirectDraw Devices the amount of available local, non-local and texture memory. This should give you at least a rough estimate (16 MB, 32 MB, 64 MB or 128 MB card).

Of course this isn't possible for non-Windows applications, but there are other ways of getting the amount of available memory.


02-25-2003, 07:27 AM
i could do that on the pc, but generally it's not a good idea to have d3d and gl running at the same time, plus there are portability issues (ie how to do it on other platforms)...

02-25-2003, 08:24 AM
>>You could use DirectX to query the amount of memory available<<

whether the result you get back is anywhere near accurate is another question :)
ie don't use it

02-25-2003, 09:04 AM
You can also try not to fixate too much on the amount of available card memory. Just do some test runs with different setups (5 MB, 10 MB, 15 MB and so on), where you render a couple of hidden frames (of a complete scene). Then measure the performance you get with these and take the one that fits best. I mean, would it be a problem if you get the performance you want even when the card uses non-local memory?
Depending on your application you could do this more dynamically and decrease the size of your texture cache when performance drops.


02-25-2003, 02:37 PM
Originally posted by mattc:
edit: btw, i have no choice but to use 1024x1024 textures (eek) that's why i'm trying to come up with some sort of tex mem manager, to cache stuff as intelligently as possible...

So you MUST have a screen resolution of about 7168 by around 5000 in order to display your 128Meg of 1024x1024 pixels. That must be one special monitor...

Or is the app running at 800x600?

02-25-2003, 11:31 PM
lars: that's pretty much my plan of action for now ;)

rgpc: what exactly does screen res have to do with anything? anyway, the app can run in any res/window size, if you think that matters... just tell me why ;)

02-25-2003, 11:40 PM
rgpc is just confused. You see, some engineers do back-of-the-envelope calculations about texture memory requirements based on near-100% efficient paging, load management, and memory management (including mip maps, for example) and figure that you don't need many more texels than you have pixels.

It seems incredible to anyone with practical notions about rendering in any kind of large complex environment but there you go.

02-25-2003, 11:54 PM
fair enough :) it's just that all textures have to retain detail when seen close up, individually... believe me, i wish i could do everything with 1x1 textures ;)

02-26-2003, 12:11 AM
Well, a sophisticated paging scheme might do better than storing all textures at full resolution even when they are distant. But practical limits make this difficult and never perfect; paging load management becomes a significant issue, among others, like figuring out what you need when, and avoiding large on-demand paging requirements. You should still try and do better than a completely naive scheme, but it depends on what you're after and your target hardware.

02-26-2003, 12:18 AM
there's a distinct possibility that all textures will be dynamically generated, in which case upload speed is going to be the limiting factor... i plan on maintaining a "potentially visible texture set", if you see what i mean, though through optimisation this may turn to be always below 50 meg or so, in which case i'll just throw everything in there and concentrate on getting the fastest upload...

02-26-2003, 09:05 AM
If you can swing it, keeping track of near vs far and choosing between, say, two levels (1024x1024 vs 256x256) is possible, and will help with the footprint scenario. Once you have that working, you can, conceivably, extend the system to a third (or more) levels for incrementally more savings.

This can be done, and is done in reality (such as the product I'm working on) but it does trade off a little system memory (for the multiple texture copies) and/or CPU (for generating new texture images). You also can't hope for 100% efficiency -- even getting 25% efficiency (four texels per screen pixel) is Really Hard (tm).

02-27-2003, 12:01 AM
loooooool :) now that's a good trademark ;)

cpu is going to be busy generating textures all the time, and the texture res is a fixed size (1024x512, hopefully)... resampling things down for distant objects wouldn't be too tricky but it's a further cpu hit (and there are background threads, so the cpu's really gonna get it)... then the upload... boo hoo ;)

however, the specs are very flexible at this stage (aren't they always ;)) - if i can lobby the non-techies, i might be able to use texture objects as placeholders so that the card never needs to use agp memory (though this requires that the fogging distance is really quite severe, but they don't seem to mind)...

somewhat ot: i had a look at gl2.0 pdf (3dlabs whitepaper), looks like texture memory and other such niceties will be accessible - or am i dreaming again?