Trying to find the max texture size is not that hard:
glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &params);
Checking whether the texture can actually be created with GL_PROXY_TEXTURE_3D is not that hard either, using params from the previous call as startCheckSize.
But when I actually try to use the texture, I get an out-of-memory error. Which is correct, since I am trying to allocate 512 MB of texture memory on a video card with only 256 MB.
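For reference, here is the arithmetic behind that failure (a minimal sketch; the 512^3 RGBA8 dimensions are an assumption, but any uncompressed texture of that total size behaves the same):

```c
/* Rough size of an uncompressed 3D texture in bytes.
 * Assumes a fixed number of bytes per texel (4 for e.g. GL_RGBA8);
 * a full mipmap chain would add roughly another 1/7 on top. */
static unsigned long long texture3DBytes(unsigned w, unsigned h, unsigned d,
                                         unsigned bytesPerTexel)
{
    return (unsigned long long)w * h * d * bytesPerTexel;
}
```

A 512 x 512 x 512 RGBA8 volume already comes to 512 x 512 x 512 x 4 = 536,870,912 bytes, i.e. 512 MiB, twice what a 256 MB card has on board.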
So I would like to know in advance whether I can actually allocate the texture on the video card before using it. Is this possible?
No, because OpenGL is a hardware abstraction in which the actual hardware in use does not matter. At least, that is the argument the ARB has used to shoot down any requests for that kind of information.
But some vendor(s) actually seem to have some brains:
I’ve posted the same problem as you in the past, and we concluded that you cannot trust proxy textures. Specifically, they do not take the internal format into consideration (on some implementations). They are therefore effectively useless for this purpose.
However, depending on your platform there are ways to expose hooks to query the available VRAM. (If you are on OS X I can send you the API info. I am not sure on others.)
That is about the only way to do this reliably, other than actually keeping your own running total. And even that last method is flawed, because you may not be taking into account all the things the driver is.
@ScottManDeath & Nicolai de Haan Brøgger. Thanks for that info. @scratt I’m running Windows Vista64.
I have come to the conclusion to create the texture and test it by drawing a triangle with the texture on it, then check glGetError(). If that fails, use a smaller texture. Retry recursively until it works. Well, you get the drift.
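That fallback loop could be sketched like this (tryCreate is a hypothetical stand-in for the real create-texture, draw-triangle, check-glGetError() probe; the halving strategy is the point):

```c
/* Caller-supplied probe: in the real application it would create a
 * texture of the given edge length, draw a textured triangle, and
 * return nonzero only if glGetError() reports no error. */
typedef int (*TryCreateFn)(unsigned size);

/* Returns the largest edge length (halving down from maxSize) for
 * which the probe succeeds, or 0 if even minSize fails. */
static unsigned largestWorkingSize(unsigned maxSize, unsigned minSize,
                                   TryCreateFn tryCreate)
{
    unsigned size;
    for (size = maxSize; size >= minSize; size /= 2) {
        if (tryCreate(size))
            return size;
    }
    return 0;
}

/* Illustration-only probe: pretends sizes up to 256 succeed. */
static int acceptUpTo256(unsigned size) { return size <= 256; }
```

With the illustration probe, largestWorkingSize(2048, 16, acceptUpTo256) walks 2048, 1024, 512, 256 and settles on 256.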
Which is not the whole truth, because AGP and PCI-Express boards also have additional GPU-accessible memory available.
On old AGP boards, that is what you set in the system BIOS as the AGP aperture, up to 256 MB extra. That memory is no longer accessible by the host (you can see the reduced size in the Task Manager).
On current PCI-Express boards there can be another 1 GB or even more made accessible to the GPU.
That means it should theoretically be possible to download a 512 MB texture on a system with only 256 MB of video RAM if you get a chunk of contiguous PCI-Express memory. And that is what normally fails, due to memory fragmentation.
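As an upper-bound check only, the budget described above could be sketched as follows (a hypothetical helper; it deliberately ignores fragmentation, which is exactly why "may fit" can still fail in practice):

```c
/* Optimistic estimate: can a texture of `bytes` fit in on-board VRAM
 * plus the extra GPU-accessible system memory (AGP aperture or
 * PCI-Express mapped memory)? Ignores fragmentation entirely, so a
 * nonzero result is necessary but not sufficient for success. */
static int mayFit(unsigned long long bytes,
                  unsigned long long vramBytes,
                  unsigned long long extraGpuBytes)
{
    return bytes <= vramBytes + extraGpuBytes;
}
```

By this estimate a 512 MB texture "may fit" on a 256 MB card with a 256 MB aperture, yet the contiguous allocation can still fail.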
nVidia don’t have anything - so ATI are way ahead here.
With standard GL you do have the ability to use proxy textures, which could be used to upload, say, 64 x 1 MB 'proxy' textures to test for a 64 MB texture allocation. It's clunky, I know, and something I have stayed away from, since GL has historically always tried to abstract the hardware.
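The chunked probing described above could look like this (probeChunk is a hypothetical stand-in for creating one 1 MB texture via the proxy target and checking the result; the counting logic is the point):

```c
/* Caller-supplied probe: in the real application it would attempt one
 * chunk-sized texture (e.g. via a GL_PROXY_TEXTURE_2D query) and
 * return nonzero on success. */
typedef int (*ProbeChunkFn)(unsigned chunkIndex);

/* Probes in fixed 1 MB chunks until one fails, and returns how many
 * megabytes' worth of chunks succeeded (capped at maxChunks). */
static unsigned probeAvailableMB(unsigned maxChunks, ProbeChunkFn probeChunk)
{
    unsigned i;
    for (i = 0; i < maxChunks; ++i) {
        if (!probeChunk(i))
            break;
    }
    return i; /* chunks are 1 MB each, so this count is in MB */
}

/* Illustration-only probe: pretends exactly 48 chunks fit. */
static int fakeProbe(unsigned i) { return i < 48; }
```

With the illustration probe, probeAvailableMB(64, fakeProbe) reports 48 MB available.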
I could use an OpenGL call yielding the total video RAM a graphics card has on board. That would allow me to estimate whether my application needs too much texture memory, and have it reduce texture sizes instead of causing the driver to swap textures.
Surely you can get the total VRAM from one of the OpenGL strings… Or at least estimate it based on the GPU’s ID. Better than nothing.
I am also pretty sure that NVidia must have some kind of API access that allows interrogation of the RAM situation, as both gDEBugger and Apple's own OpenGL utilities offer the same functionality for both ATI and NVidia, which includes a lot of memory-breakdown info.