Free video memory efficiently

I have a multi-process application in which each process owns a very large 3D texture. Only one process is active at a time; the others run in the background and don't need the graphics resources. Since video memory is very limited, I want to free the texture memory owned by the current process when switching to another process. The only way I know of to free a texture is “glDeleteTextures”, but that seems inefficient because I switch processes very frequently. I also considered a global texture manager shared among all processes, but according to Sharing context between processes - OpenGL: Advanced Coding - Khronos Forums that is almost impossible. Is there any other way to do this?

There is the GL_ARB_invalidate_subdata extension that was introduced this past summer. As it’s very new it will only be supported by newer drivers. You should be able to invalidate a 3D texture with that. However, you’d still likely have to upload the texture again when the context switches back to your app.

http://www.opengl.org/registry/specs/ARB/invalidate_subdata.txt
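A minimal sketch of how that might look, assuming a driver that exposes the extension (the texture name tex3d and the re-upload parameters are hypothetical):

[code]
// Hint to the driver that the texture's contents are no longer needed.
// This does not guarantee the VRAM is released; see the spec linked above.
glInvalidateTexImage(tex3d, 0);            // invalidate mip level 0

// ...later, when this process becomes active again, re-upload the data:
glBindTexture(GL_TEXTURE_3D, tex3d);
glTexSubImage3D(GL_TEXTURE_3D, 0,          // target, mip level
                0, 0, 0,                   // x/y/z offset
                width, height, depth,      // extents
                GL_RED, GL_UNSIGNED_BYTE,  // format, type
                data);                     // CPU-side copy
[/code]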

Out of curiosity, how big is “a very big 3D texture”? The GPU should swap out the other texture (or parts of it, perhaps) when it needs the space. Is there a specific reason you feel you need to manually intervene?

Using GL_ARB_invalidate_subdata doesn't necessarily free the memory used by the texture. It is just meant to invalidate its contents.

My thought was that by invalidating the texture, it would be a more likely candidate to be thrown out of VRAM if the GPU ran into a memory shortage. Completely implementation dependent of course, but it seems like something reasonable to try.

Doesn't orphaning the texture with size == 0 also release whatever memory was previously allocated? At least I would expect implementations to behave like that, though I've never read anything to that effect in the spec.
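For reference, a hypothetical "orphan" of a 3D texture would look something like this; whether the driver actually releases the old storage is, as noted, not specified:

[code]
// Respecify the texture with zero extents and hope the driver frees the
// old storage. Not guaranteed by the spec; purely implementation behavior.
glBindTexture(GL_TEXTURE_3D, tex3d);
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8,
             0, 0, 0,                      // width, height, depth all zero
             0, GL_RED, GL_UNSIGNED_BYTE, NULL);
[/code]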

While implementations can potentially throw out the storage of a texture/buffer when you call the invalidating functions, or when you "orphan" it by passing a size of zero, none of that is required by the spec. Personally, as a developer I would not use techniques that rely on the potential or even actual behavior of particular implementations if it is not guaranteed to be the same on all of them.

A perfect example is buffer orphaning (or renaming, or whatever you want to call it), which can easily be done explicitly in the application by using multiple buffer objects, or a single buffer object with unsynchronized maps and sync objects, as in the sketch below.
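As a sketch of the explicit multi-buffer approach (buffer names and sizes are made up for illustration):

[code]
// Round-robin pool of buffer objects: instead of relying on the driver to
// rename an orphaned buffer, the application cycles through its own pool,
// so the GPU can still read buffer N while the CPU fills buffer N+1.
#define POOL_SIZE 3
GLuint pool[POOL_SIZE];
int current = 0;

void init_pool(GLsizeiptr size)
{
    glGenBuffers(POOL_SIZE, pool);
    for (int i = 0; i < POOL_SIZE; ++i) {
        glBindBuffer(GL_ARRAY_BUFFER, pool[i]);
        glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);
    }
}

void upload_next(const void *data, GLsizeiptr size)
{
    current = (current + 1) % POOL_SIZE;   // advance to the next buffer
    glBindBuffer(GL_ARRAY_BUFFER, pool[current]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);
}
[/code]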

[QUOTE=malexander;1247233]Out of curiosity, how big is “a very big 3D texture”? The GPU should swap out the other texture (or parts of it, perhaps) when it needs the space. Is there a specific reason you feel you need to manually intervene?[/QUOTE]

“Very big” means hundreds of megabytes; if two processes exist simultaneously, video memory cannot hold both. I don't know how OpenGL handles this. Does it have a cache or something? Since the data is so big, CPU memory cannot hold many processes at the same time either, so I have a CPU file mapping. When switching to another process, the current process deletes its memory and keeps the data in the mapped files. Since I have my own memory management, I don't know whether OpenGL's caching is necessary or efficient enough. The sketch below shows the swap-out path I have in mind.
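[code]
// On switching away: drop the GL storage, keep the data in the mapped file.
glDeleteTextures(1, &tex3d);

// On switching back: recreate the texture straight from the mapping
// (tex3d, extents, and mapped_ptr are placeholders for my actual setup).
glGenTextures(1, &tex3d);
glBindTexture(GL_TEXTURE_3D, tex3d);
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, width, height, depth,
             0, GL_RED, GL_UNSIGNED_BYTE, mapped_ptr);
[/code]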

Have you looked at buffer object streaming, or AMD's memory pinning (http://www.opengl.org/registry/specs/AMD/pinned_memory.txt)? If I remember my reading correctly, NVIDIA has something similar.
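A rough sketch of how the pinned-memory path works, assuming the GL_AMD_pinned_memory extension is present (the buffer name and size are illustrative):

[code]
// Back a buffer object with application-owned, page-aligned memory.
// The driver can then DMA directly from this memory without a copy,
// which pairs well with an existing CPU-side memory manager.
#include <stdlib.h>

void *mem = NULL;
posix_memalign(&mem, 4096, size);          // must be page-aligned

GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD, buf);
glBufferData(GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD, size, mem, GL_STREAM_DRAW);
[/code]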