max 3d texture size & texture mem

Hi all,

I’m using large 3D textures in my application.

Trying to find the max texture size is not that hard:

glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &params);

Checking whether the texture can actually be created via GL_PROXY_TEXTURE_3D is not that hard either, using the value from the previous call as startCheckSize:

int TextureManager::GetRealMax3DTextureSize(int startCheckSize)
{
	static int g_max3DTexSize = -1;

	//	return the cached result if we already determined it
	if(g_max3DTexSize != -1)
	{
		return g_max3DTexSize;
	}

	//	guard against endless recursion if every size fails
	if(startCheckSize < 1)
	{
		return 0;
	}

	//	checking for:
	//	m_internalFormat = GL_LUMINANCE16_ALPHA16;
	//	m_format = GL_LUMINANCE_ALPHA;
	//	m_type = GL_UNSIGNED_SHORT;

	GLint level = 0;
	GLint internalFormat = GL_LUMINANCE16_ALPHA16;
	int width = startCheckSize;
	int height = startCheckSize;
	int depth = startCheckSize;
	GLint border = 0;
	GLenum format = GL_LUMINANCE_ALPHA;
	GLenum type = GL_UNSIGNED_SHORT;

	//	a proxy texture is never actually allocated; the driver only records
	//	whether a texture with these parameters would be accepted
	glTexImage3D(GL_PROXY_TEXTURE_3D, level, internalFormat, width, height, depth, border, format, type, NULL);

	GLint testWidth = 0;
	GLint testHeight = 0;
	GLint testDepth = 0;

	//	if the proxy check failed, all dimensions read back as 0
	glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &testWidth);
	glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_HEIGHT, &testHeight);
	glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_DEPTH, &testDepth);

	if(testWidth == 0 || testHeight == 0 || testDepth == 0)
	{
		//	retry with half the size until the proxy check succeeds
		testWidth = GetRealMax3DTextureSize(startCheckSize / 2);
	}

	g_max3DTexSize = testWidth;
	return g_max3DTexSize;
}
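To tie the two calls together, usage looks roughly like this (the texMgr instance name is just a placeholder):

GLint maxSize = 0;
glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &maxSize);	// driver’s advertised limit

TextureManager texMgr;	// placeholder instance
int usableSize = texMgr.GetRealMax3DTextureSize(maxSize);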

But when I actually try to use the texture, I get an out-of-memory error. Which makes sense, since I am trying to allocate 512 MB of texture memory on a video card with only 256 MB.
So I would like to know in advance whether I can actually allocate the texture on the video card before using it. Is this possible?

Regards,

Ronald

No, because OpenGL is a hardware abstraction in which the actual hardware being used does not matter. At least, that argument has been used by the ARB to shoot down any request for that kind of information.

But some vendor(s) actually seem to have some brains:

http://www.opengl.org/registry/specs/ATI/meminfo.txt
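Querying it is just a glGetIntegerv with a token from that spec. A minimal sketch, assuming you have already checked the extension string for GL_ATI_meminfo:

//	GL_ATI_meminfo: each query returns four values, all in KB:
//	[0] total free memory in the pool, [1] largest free block,
//	[2] total auxiliary free memory, [3] largest auxiliary free block
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
#endif

GLint texFree[4] = { 0, 0, 0, 0 };
glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, texFree);
//	texFree[0] now holds the total free texture memory in KB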

I’ve posted the same problem as you in the past, and we concluded that you cannot trust proxy textures. Specifically, they do not take the internal format into consideration (on some implementations). They are therefore effectively useless for this purpose.

What the others have said…

However, depending on your platform there are ways to expose hooks to query the available VRAM. (If you are on OS X I can send you the API info. I am not sure about others.)

That is about the only way to do this reliably, other than actually keeping your own running total. And even that last method is flawed, because you may not be taking into account all the things the driver is.
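For anyone else on OS X: the query goes through the CGL renderer info API. A minimal sketch, with error handling omitted (check the CGL headers for the exact units kCGLRPVideoMemory reports):

#include <OpenGL/OpenGL.h>
#include <stdio.h>

static void PrintRendererVideoMemory(void)
{
	CGLRendererInfoObj info;
	GLint rendererCount = 0;

	//	0xFFFFFFFF = match renderers on all displays
	if(CGLQueryRendererInfo(0xFFFFFFFF, &info, &rendererCount) != kCGLNoError)
		return;

	for(GLint i = 0; i < rendererCount; ++i)
	{
		GLint vram = 0;
		if(CGLDescribeRenderer(info, i, kCGLRPVideoMemory, &vram) == kCGLNoError)
			printf("renderer %d: kCGLRPVideoMemory = %d\n", i, vram);
	}

	CGLDestroyRendererInfo(info);
}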

@ScottManDeath & Nicolai de Haan Brøgger. Thanks for that info.
@scratt I’m running Windows Vista64.

I have come to the conclusion to just create the texture, test it by drawing a triangle with the texture on it, and then check glGetError(). If that fails, call it again with a smaller texture until it works. Well, you get the drift. :slight_smile:
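Roughly like this (only a sketch: it assumes a GL_TEXTURE_3D object is already bound, uses the same formats as above, and leaves out the actual triangle drawing, which you may still need since some drivers only report the failure once the texture is really used):

int FindWorkable3DTextureSize(int startSize)
{
	for(int size = startSize; size >= 64; size /= 2)
	{
		//	clear any stale error state first
		while(glGetError() != GL_NO_ERROR) {}

		glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16_ALPHA16,
		             size, size, size, 0,
		             GL_LUMINANCE_ALPHA, GL_UNSIGNED_SHORT, NULL);

		//	... draw a small textured triangle here, then check again ...

		if(glGetError() == GL_NO_ERROR)
		{
			return size;	// this size (apparently) fits
		}
	}

	return 0;	// nothing worked
}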

Thanks all

The proxy texture seemed like a good idea; too bad it does not work as advertised.
What is your video card, by the way?

7600 GT with 256 MB

Which is not the whole truth, because AGP and PCI-Express boards also have additional GPU-accessible memory available.
On old AGP boards that is what you set in the system BIOS with the AGP aperture, up to 256 MB extra. That memory is no longer accessible by the host (you can see the reduced size in the Task Manager).
On current PCI-Express boards another 1 GB or even more could be made accessible to the GPU.

That means it should theoretically be possible to upload a 512 MB texture on a system with only 256 MB of video RAM, if you get a chunk of contiguous PCI-Express memory. And that is what normally fails, due to memory fragmentation.

OK, so in a way they (the OpenGL board) are right not to check memory on the video card.
Anyway, I got it working by creating the texture and checking glGetError().

Or you can use the D3D/DXGI APIs, as demonstrated here:

http://developer.download.nvidia.com/SDK/9.5/Samples/gpgpu_samples.html#GetGPUAndSystemInfo
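On Vista that boils down to querying the adapter description through DXGI. A rough sketch (link against dxgi.lib, error handling kept minimal):

#include <dxgi.h>
#include <stdio.h>

void PrintAdapterMemory()
{
	IDXGIFactory* factory = NULL;
	if(FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
		return;

	IDXGIAdapter* adapter = NULL;
	if(factory->EnumAdapters(0, &adapter) == S_OK)
	{
		DXGI_ADAPTER_DESC desc;
		adapter->GetDesc(&desc);

		printf("dedicated video memory: %u MB\n",
		       (unsigned)(desc.DedicatedVideoMemory / (1024 * 1024)));
		printf("shared system memory:   %u MB\n",
		       (unsigned)(desc.SharedSystemMemory / (1024 * 1024)));

		adapter->Release();
	}

	factory->Release();
}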

http://www.opengl.org/registry/specs/ATI/meminfo.txt

Is something similar available for NVIDIA?

nVidia don’t have anything, so ATI are way ahead here.
With standard GL you do have the ability to use the texture proxy mechanism, which could be used to upload, say, 64 × 1 MB ‘proxy’ textures to test for, say, a 64 MB texture allocation. It’s clunky, I know, and something I have stayed away from, since GL has historically always tried to abstract the h/w.

Have a look at NVAPI.

I could use an OpenGL call yielding the total video RAM a graphics card has on board. That would allow me to estimate whether my application would need too much texture memory and have it reduce texture sizes instead of causing the driver to swap textures.

Surely you can get the total VRAM from one of the OpenGL strings… Or at least estimate it based on the GPU’s ID. Better than nothing.

I am also pretty sure that NVidia must have some kind of API access that allows interrogation of the RAM situation, as both gDEBugger and Apple’s own OpenGL utilities offer the same functionality for both ATI and NVidia, including a lot of memory breakdown info.

Which string might that be, of all the many?

There is no string to query this in GL.

There are implementation-specific ways to query it, though. For example, at the window-system level, Apple provides a CGL API for this.

Thanks. I am not going to code a lot of OS- and hardware-vendor-specific video RAM detection though; that’s too much effort for my app.