View Full Version : Texture memory management



amitbd
09-25-2004, 08:58 PM
Hello,

Maybe one of you has encountered this problem with NVIDIA cards: with each texture load, the driver (I assume) allocates roughly double the texture's size in CPU RAM. I suppose this is some virtual-memory mechanism that allows using more textures than fit in GPU-resident memory, but why twice the size? One copy is more than enough, in my opinion.

I am using the Sentiris card (64 MB of video memory) with 40 MB of textures, which means roughly 80 MB is also allocated from my system RAM. I am working on very memory-limited hardware, so every allocation is critical.

If any of you know how to disable this virtual-memory behaviour, or how to reduce the per-texture allocation, I would be grateful.

Amit :confused:

Robert Osfield
09-26-2004, 02:24 AM
Hi Amit,

I believe most OpenGL implementations will result in 3 copies of your imagery: 2 in main memory, and 1 on the graphics card once it has been downloaded.

The two copies in main memory are: the copy your application itself holds and uses to set up the OpenGL texture, and the copy the OpenGL driver keeps in the texture objects it creates. The driver needs its copy in main memory so that it can swap textures in and out of graphics-card memory on demand, without needing a callback mechanism into your application to re-fetch the source imagery.

To cut down on memory consumption, you can delete your application's original images once they have been passed down to OpenGL. If you do this delete right after each call that sets up the texture object, you keep the peak overhead down.
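A minimal sketch of this delete-after-upload pattern in C. The GL entry points are stubbed here so the fragment stands alone; a real program would include &lt;GL/gl.h&gt; and have a current context instead:

```c
#include <stdlib.h>

/* Stand-in GL declarations so this sketch compiles without a driver;
   a real program would include <GL/gl.h> instead. */
typedef unsigned int GLuint;
enum { GL_TEXTURE_2D = 0x0DE1, GL_RGB = 0x1907, GL_UNSIGNED_BYTE = 0x1401 };

static void glGenTextures(int n, GLuint *ids)
{
    static GLuint next = 1;
    while (n-- > 0) *ids++ = next++;
}
static void glBindTexture(int target, GLuint id) { (void)target; (void)id; }
static void glTexImage2D(int target, int level, int internalfmt, int w, int h,
                         int border, int fmt, int type, const void *pixels)
{
    (void)target; (void)level; (void)internalfmt; (void)w; (void)h;
    (void)border; (void)fmt; (void)type; (void)pixels;
}

/* Upload an image and immediately release the application's copy,
   so only the driver's copy (and eventually the VRAM copy) remain. */
GLuint upload_and_release(unsigned char **pixels, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, *pixels);
    free(*pixels);   /* the driver has made its own copy of the texels */
    *pixels = NULL;  /* guard against accidental reuse */
    return tex;
}
```

Freeing inside the upload helper, rather than in a later cleanup pass, keeps the peak at two copies instead of three.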

Another thing you could do is dynamically manage textures in your application, so that you maintain a current working set of textures that you populate from your application's source imagery.

A further extension of this approach is to page from disk on demand; this way you can start browsing gigabyte datasets even with a small footprint in main memory.
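A pure-C sketch of such a working set, assuming a fixed-size least-recently-used slot table; upload() and evict() are stand-ins for the real glTexImage2D/glDeleteTextures calls:

```c
/* At most CAPACITY textures are "resident" at a time; touching a texture
   that is not resident loads it and evicts the least-recently-used one. */
#define CAPACITY 4

typedef struct { int id; long stamp; } Slot;

static Slot slots[CAPACITY];     /* id 0 means "empty" */
static long clock_now = 0;
static int  evictions = 0;

static void upload(int id) { (void)id; /* glTexImage2D(...) would go here */ }
static void evict(int id)  { (void)id; evictions++; /* glDeleteTextures(...) */ }

/* Touch texture `id`: mark it used, loading (and possibly evicting) as needed. */
void use_texture(int id)
{
    int lru = 0;
    for (int i = 0; i < CAPACITY; i++) {
        if (slots[i].id == id) { slots[i].stamp = ++clock_now; return; }
        if (slots[i].stamp < slots[lru].stamp) lru = i;
    }
    if (slots[lru].id != 0) evict(slots[lru].id);  /* drop the LRU texture */
    upload(id);                                    /* page in from disk/source */
    slots[lru].id = id;
    slots[lru].stamp = ++clock_now;
}

int resident_count(void)
{
    int n = 0;
    for (int i = 0; i < CAPACITY; i++) if (slots[i].id != 0) n++;
    return n;
}
```

The same structure works for disk paging: upload() becomes "read from disk, then glTexImage2D".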

Robert.



arekkusu
09-26-2004, 02:56 AM
Some GL implementations have an extension (http://oss.sgi.com/projects/ogl-sample/registry/APPLE/client_storage.txt) which lets you avoid the second copy in system memory, or an extension (http://developer.apple.com/graphicsimaging/opengl/extensions/apple_texture_range.html) to map the first copy into AGP and optionally texture directly from it, avoiding the copy to VRAM.
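For illustration, a sketch of how the client-storage extension is typically used. The GL calls and the GL_UNPACK_CLIENT_STORAGE_APPLE enum are stubbed so the fragment stands alone; on a real implementation they come from &lt;GL/gl.h&gt; and glext.h. The key contract is that the application's buffer must outlive the texture:

```c
typedef unsigned int GLuint;
enum {
    GL_TEXTURE_2D = 0x0DE1, GL_RGB = 0x1907, GL_UNSIGNED_BYTE = 0x1401,
    GL_UNPACK_CLIENT_STORAGE_APPLE = 0x85B2  /* from glext.h */
};

/* Stand-ins so the sketch compiles without a GL driver. */
static void glGenTextures(int n, GLuint *ids)
{
    static GLuint next = 1;
    while (n-- > 0) *ids++ = next++;
}
static void glBindTexture(int target, GLuint id) { (void)target; (void)id; }
static void glPixelStorei(int pname, int value) { (void)pname; (void)value; }
static void glTexImage2D(int target, int level, int internalfmt, int w, int h,
                         int border, int fmt, int type, const void *pixels)
{
    (void)target; (void)level; (void)internalfmt; (void)w; (void)h;
    (void)border; (void)fmt; (void)type; (void)pixels;
}

/* With client storage enabled the driver references the caller's buffer
   instead of making its own system-memory copy, so `pixels` must stay
   allocated and unmodified for the whole lifetime of the texture. */
GLuint upload_client_storage(const unsigned char *pixels, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, 1 /* GL_TRUE */);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    /* Do NOT free(pixels) here -- the driver is using it in place. */
    return tex;
}
```

This trades the driver's copy for a lifetime obligation on the application's copy, so the net count drops from three copies to two.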

amitbd
09-27-2004, 10:17 AM
Thanks to both of you for the quick replies,

As you said, Robert, the OpenGL virtual-texture mechanism should allocate at most one copy of the texture in main memory, not counting the original memory allocated by me. However, when checking the memory allocated to the process/task during texture loading, you can see twice the size of the texture being allocated, which I cannot explain. You are welcome to check it yourself.

I already have a texture-management mechanism that keeps only a handful of textures resident, and another cache that manages paging from disk. These are helpful, but they could be used to better effect if OpenGL made fewer memory allocations.

Arekkusu,
About directing the texture to AGP: I only have PCI (PMC) on a PPC VME board (tough life, programming OpenGL for RT ;) ).

About the extension you mentioned that disables the main-memory copy: I would be glad if you could direct me to it.

Any other suggestions are welcome.

Regards,
Amit. :cool:

arekkusu
09-27-2004, 01:35 PM
My above post links to the spec for Apple's extension implemented on Mac OS X. If you're on custom hardware you'll have to see if a similar extension is available. Maybe not, but such a feature is possible.

You can also look at reducing colour depth (packed 565 or 332 RGB formats) or using compressed texture formats like s3tc, obviously at some cost in quality.
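To make the trade-off concrete, here is the footprint of a single mip level in each of these formats (DXT1/s3tc stores each 4x4 texel block in 8 bytes); the 2048x2048 sizes match the textures discussed later in the thread:

```c
/* Bytes for mip level 0 of a w x h texture in a few formats.
   For 2048x2048: RGB8 = 12 MB, 565 = 8 MB, 332 = 4 MB, DXT1 = 2 MB. */
long rgb888_bytes(int w, int h) { return (long)w * h * 3; }
long rgb565_bytes(int w, int h) { return (long)w * h * 2; }
long rgb332_bytes(int w, int h) { return (long)w * h * 1; }
long dxt1_bytes(int w, int h)   { return (long)((w + 3) / 4) * ((h + 3) / 4) * 8; }
```

Note that every system-memory copy the driver keeps shrinks by the same factor, so the savings apply twice.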

jwatte
09-27-2004, 01:42 PM
It's not clear what that data is. Perhaps the driver allocates as much as it can out of AGP memory, and then also system memory, and will keep going with system memory alone once AGP is exhausted? Perhaps you're uploading 16-bit textures and it's converting them to 32-bit? Perhaps it's management overhead, if you have lots of small textures?

If you allocate only a single texture, does the 2x observation still hold? What if you allocate each texture 4 times?

amitbd
09-27-2004, 09:43 PM
Hi,

Arekkusu, sorry, I missed the link to the extension in your first reply :eek: . I will look for a similar extension on my hardware. However, to my understanding, this won't work together with sub-copying into an already-loaded texture.
I am already using the 332 internal format.

jwatte, I am using 8-bit data, so there is no need for packing to 32 bits.
I have also tried working with the RGB internal format, and my observation was still x2.
My textures are mostly 2k x 2k at 8 bits, and for every allocation the observation is x2.

Thanks again,
Amit
:rolleyes:

arekkusu
09-28-2004, 01:17 AM
If you're using 332, it's possible that your video hardware doesn't support that internal format and is upsampling it to 565 behind your back, which would explain the 2x. Last I checked, ATI cards support 332 and NVIDIA cards don't. However, NVIDIA cards do support an 8-bit paletted format (ATI cards don't), which you could use to accomplish the same packing.
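A sketch of that paletted route, assuming the goal is to keep the 1-byte-per-texel packing: texels stay as 8-bit indices, and the 3-3-2 expansion happens on the palette side. The actual GL_EXT_paletted_texture calls appear only in a comment, since they need a driver that exposes the extension:

```c
typedef unsigned char u8;

/* Build a 256-entry RGB palette where index i is interpreted as
   RRRGGGBB (3-3-2): each component is scaled up to the full 0..255 range. */
void build_332_palette(u8 palette[256][3])
{
    for (int i = 0; i < 256; i++) {
        palette[i][0] = (u8)(((i >> 5) & 7) * 255 / 7);  /* R: top 3 bits */
        palette[i][1] = (u8)(((i >> 2) & 7) * 255 / 7);  /* G: mid 3 bits */
        palette[i][2] = (u8)((i & 3) * 255 / 3);         /* B: low 2 bits */
    }
    /* On hardware with GL_EXT_paletted_texture the upload would then be:
       glColorTableEXT(GL_TEXTURE_2D, GL_RGB8, 256, GL_RGB,
                       GL_UNSIGNED_BYTE, palette);
       glTexImage2D(GL_TEXTURE_2D, 0, GL_COLOR_INDEX8_EXT, w, h, 0,
                    GL_COLOR_INDEX, GL_UNSIGNED_BYTE, indices); */
}
```

The texel data itself is unchanged 8-bit indices, so this keeps the same 1 byte per texel as 332 while sidestepping any behind-the-back format promotion.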

Also, client_storage doesn't interfere with sub copying, FWIW.