gl texture memory

just a quicky, this one: any (standard) way of getting the amount of free texture memory through opengl? i’ve got too many high res textures to hope they’ll all upload - knowing how much tex ram is available would help my mem manager no end… ta

There is no way, as N megs may be gone because of frame buffers, other apps, different screen resolution, etc etc etc.

The user knows what the appropriate performance/quality trade-off is, so provide a slider for him/her.

Sure there is. Just keep loading textures (properly coded) and eventually a call to glTexImagexD will fail.

That will get you the amount in video mem + AGP mem.

If the driver doesn’t flag an error, then your loop may run forever, so watch out.
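Something along these lines, maybe - just a rough sketch that assumes a current GL context and 1024x1024 RGBA8 textures; TEX_DIM, MAX_PROBE and probe_texture_memory are my own names:

/* rough sketch of the "upload until it fails" probe */
#include <GL/gl.h>
#include <stddef.h>

#define TEX_DIM   1024
#define MAX_PROBE 256   /* 256 x 4MB = 1GB cap, so the loop can't run forever */

size_t probe_texture_memory(void)
{
    static GLubyte pixels[TEX_DIM * TEX_DIM * 4];   /* dummy image data */
    GLuint ids[MAX_PROBE];
    int    count = 0;

    while (glGetError() != GL_NO_ERROR)
        ;                             /* clear any stale errors first */

    glGenTextures(MAX_PROBE, ids);

    while (count < MAX_PROBE)
    {
        glBindTexture(GL_TEXTURE_2D, ids[count]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEX_DIM, TEX_DIM, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        if (glGetError() != GL_NO_ERROR)
            break;                    /* out of memory (or some other error) */
        ++count;
    }

    glDeleteTextures(MAX_PROBE, ids); /* give it all back */
    return (size_t)count * TEX_DIM * TEX_DIM * 4;   /* bytes that went in */
}

The “(properly coded)” bit probably matters: some drivers won’t actually commit the texture until you draw with it, so you may need to render a small quad with each one before checking for the error.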

Originally posted by V-man:
Sure there is. Just keep loading textures (properly coded) and eventually a call to glTexImagexD will fail.

Err… No, won’t it just start swapping the textures to disk (or to ordinary RAM - which in Windoze means to the slowest, most fragmented piece of disk available - or floppy)…

I think the standard way this is handled is to simply provide the user with a detail setting. If your app performs poorly on their system, they turn the detail level down a bit to improve performance…

bummer, well thanks anyway, folks… how’s this for an idea though: upload textures one by one and measure how long each upload takes - then cut off when it starts slowing down? would this require a timer more accurate than QueryPerformanceCounter (1.193 MHz == 0.838 microsecond resolution)? are there enough trees in the world?

edit: btw, i have no choice but to use 1024x1024 textures (eek) - that’s why i’m trying to come up with some sort of tex mem manager, to cache stuff as intelligently as possible…

[This message has been edited by mattc (edited 02-25-2003).]

OpenGL manages the textures for you…

and you can use the texture proxy to query whether a texture will fit on the card.
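A proxy check looks roughly like this (texture_would_fit is my own helper name, not a GL call):

#include <GL/gl.h>

int texture_would_fit(GLsizei w, GLsizei h)
{
    GLint width = 0;

    /* describe the texture to the proxy target; no data is uploaded */
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* if the driver can't support it, the proxy's width reads back as 0 */
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);

    return width != 0;
}

Bear in mind the spec only obliges the proxy to check the texture description (size, format and so on), so whether it reflects the memory actually left over is up to the driver.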

scenario: you have a manager that uses the proxy and all to fit the maximal amount of textures on the card, and then the user decides to start another program that uses video mem… then the textures you nicely put there can be removed… and you have a lot less memory to play with - how should your manager solve that? the opengl driver solves that nicely, because it knows when a texture is thrown away, and knows that it needs to reload it if it shows up on screen.

it’s not about that… the nature of the stuff i’m working on is such that all textures are gonna be the same high res and dynamically generated - the current conservative estimate is easily over 128 meg (without compression), and over a gig without any optimisations.

seeing as i’m gonna be rendering to texture and uploading all the time, i figured the best approach would be to use the texture memory as a cache of sorts cos there’s no hope of all the stuff being resident - and there’s no need anyhow…

sorry if all this is too obscure, got an nda so i can’t tell you exactly why i got so much high res texturing to do

going back to my question: would it be viable to try and time texture uploads and cut off when the times start growing or something? current (minimum) texture size is 1024x512x4 (2 meg) - would QueryPerformanceCounter() be accurate enough for measuring such uploads?

try it
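something like this, off the top of my head - rough sketch only, time_upload_ms is my name, and the glFinish() is there because the driver may return before the transfer has actually happened:

#include <windows.h>
#include <GL/gl.h>

/* time one 1024x512 RGBA re-upload; tex must already have storage
   created with glTexImage2D at that size */
double time_upload_ms(GLuint tex, const void *pixels)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    glBindTexture(GL_TEXTURE_2D, tex);

    QueryPerformanceCounter(&t0);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 512,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glFinish();            /* wait for the transfer to actually complete */
    QueryPerformanceCounter(&t1);

    return (t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;
}

a 2 meg transfer should take on the order of milliseconds even over AGP, so 0.838 microsecond ticks ought to be plenty of resolution.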

glAreTexturesResident()?
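Something like this would tell you how much of your working set the driver currently considers resident (count_resident is my name; ids/flags are whatever your manager keeps around):

#include <GL/gl.h>

int count_resident(GLsizei n, const GLuint *ids, GLboolean *flags)
{
    int i, resident = 0;

    /* GL_TRUE means *all* n textures are resident and flags[] may be
       left untouched; on GL_FALSE each flags[i] holds the per-texture answer */
    if (glAreTexturesResident(n, ids, flags) == GL_TRUE)
        return (int)n;

    for (i = 0; i < n; ++i)
        if (flags[i])
            ++resident;

    return resident;
}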

– Tom

i was gonna say “brilliant”, but then found this in msdn on glAreTexturesResident()…

“…If textures reside in virtual memory (there is no texture memory), they are considered always resident.”

damn…

edit: what i mean is, this prolly applies to textures held in agp memory as well…

[This message has been edited by mattc (edited 02-25-2003).]

On my 64MB GeForce3, I can have 55MB of resident textures, so that sounds perfect. Granted, however, the function’s behaviour isn’t well-specified enough to guarantee useful results on all cards/drivers. Has anyone tested it on a Radeon?

– Tom

[This message has been edited by Tom Nuydens (edited 02-25-2003).]

@all:

are you sure that there is no way?
i believe there is a small program on the nVidia webpage - i don’t know its exact name, and unfortunately it comes without source - but i think it shows the memory

Originally posted by mattc:
edit: what i mean is, this prolly applies to textures held in agp memory as well…

I used it and it only applied to textures held in the memory on the graphics card (I tested it on an nVidia card).

Tim.

Hmm. I just checked on a Parhelia and it always returns GL_FALSE.

– Tom

cheers everyone - going by my past experience with matrox drivers, they could easily have small bugs in surprising places - sounds to me like it’s worth bringing this one up with the driver dev team…

here is the URL to the above mentioned program:
http://developer.nvidia.com/view.asp?IO=agp_memoryapp

djsnow, i appreciate the link but there’s no source and anyhow, it’s not agp memory that i’m interested in, it’s the texture memory on the card…

You could use DirectX to query the amount of memory available.
If you have the DirectX SDK installed, you have a program called CapsViewer which shows, under DirectDraw Devices, the amount of available local, non-local and texture memory. That should give you at least a rough estimate (16MB, 32MB, 64MB or 128MB card).

Of course this isn’t possible for non-Windows applications, but there are other ways of getting the amount of available memory.
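If you want to query it programmatically rather than reading it off CapsViewer, something roughly like this should work (C, Windows only, DirectDraw 7, error handling stripped - query_local_texture_memory is my own name):

/* link with ddraw.lib and dxguid.lib */
#include <windows.h>
#include <ddraw.h>

DWORD query_local_texture_memory(void)
{
    LPDIRECTDRAW7 dd = NULL;
    DDSCAPS2      caps;
    DWORD         total = 0, free_now = 0;

    if (FAILED(DirectDrawCreateEx(NULL, (void **)&dd,
                                  &IID_IDirectDraw7, NULL)))
        return 0;

    ZeroMemory(&caps, sizeof(caps));
    caps.dwCaps = DDSCAPS_TEXTURE | DDSCAPS_LOCALVIDMEM;

    IDirectDraw7_GetAvailableVidMem(dd, &caps, &total, &free_now);
    IDirectDraw7_Release(dd);

    return total;   /* bytes of card-local memory that can hold textures */
}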

Lars

i could do that on the pc, but generally it’s not a good idea to have d3d and gl running at the same time, plus there are portability issues (ie how to do it on other platforms)…

>>You could use DirectX to query the amount of memory available<<

whether the result u get back is anywhere near accurate is another question
ie dont use it