Texture proxy vs. just allocating and checking for an OOM condition?

I’m curious to hear if anyone has an opinion on this.

I’m used to C/C++ programming, where when you allocate RAM you can check for failure (either check the malloc return or catch std::bad_alloc) and handle it if it occurs. OpenGL has the texture proxy, which tells you whether allocating a texture of a given size would succeed. Given that modern GPUs have gigabytes of RAM, how often does a texture allocation actually fail because of an out-of-memory (OOM) condition? I’d think that nowadays it’s rare, and you’d be better off skipping the proxy texture query and simply allocating the texture, then checking glGetError after glTexImage2D for the OOM condition. Thoughts? Opinions? Comments?
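For concreteness, here’s roughly what I mean (just a sketch, assuming a current GL context; GL_RGBA8 and the lack of any other error handling are placeholders):

/* Minimal sketch of the "just allocate and check" approach.
 * Assumes a current GL context; GL_RGBA8 is an arbitrary format choice.
 * Returns the texture name, or 0 if the allocation hit GL_OUT_OF_MEMORY. */
GLuint allocTextureOrZero(GLsizei width, GLsizei height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Drain any stale errors so the check below isn't polluted. */
    while (glGetError() != GL_NO_ERROR) { }

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    if (glGetError() == GL_OUT_OF_MEMORY) {
        /* Allocation failed: clean up and let the caller fall back,
         * e.g. to a smaller size or a compressed format. */
        glDeleteTextures(1, &tex);
        return 0;
    }
    return tex;
}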

Thanks!
B

The texture proxy mechanism is next to useless in modern GL. It tells you if a single texture object can be created. It does not tell you if that texture object can be successfully used in conjunction with every other resource required for a given draw call (i.e. 79 other bound textures, the bound framebuffer and its attachments, the bound vertex buffers, uniform buffers, shaders, etc etc.)

arekkusu already addressed the first part of your question. As to the last part (above), you won’t get a GL error on a “GPU” OOM condition. The reason is that OpenGL virtualizes the memory on the GPU. When it runs out, it just spills over into CPU memory, and the driver furiously tries to keep what you need on the board. While it’s shuffling texture data back-and-forth between the CPU and GPU to try and satisfy your batches’ needs, your performance drops.

You are better off detecting the amount of memory on your GPU, and limiting what you allocate on the GPU so that it should fit (assuming there’s not some other GL application that’s eating a bunch of GPU memory too, of course). NVX_gpu_memory_info and ATI_meminfo can be useful here.
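If it helps, here’s a rough sketch of how those queries look. The token values are copied from the two extension specs, hasExtension() is a hypothetical helper that scans the GL extension string, and you should only issue the query the driver actually advertises:

/* Token values as defined in the NVX_gpu_memory_info and ATI_meminfo
 * extension specs; defined here in case the GL headers don't have them. */
#ifndef GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
#endif

/* Returns an estimate of currently available video memory in KB,
 * or -1 if neither extension is exposed. */
GLint availableVideoMemoryKB(void)
{
    if (hasExtension("GL_NVX_gpu_memory_info")) {
        GLint kb = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &kb);
        return kb;
    }
    if (hasExtension("GL_ATI_meminfo")) {
        /* ATI_meminfo returns four values per pool: total free KB, largest
         * free block KB, total auxiliary free KB, largest auxiliary block KB. */
        GLint info[4] = { 0, 0, 0, 0 };
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, info);
        return info[0];
    }
    return -1;
}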

The texture proxy will not tell you whether there’s enough memory to allocate a texture of that size; it only checks that the driver is generally capable of working with a texture of the given size and format. So you have to check for OOM errors anyway.

Thanks to all the posters so far! You’ve cleared up some of my confusion about the proxy query and its purpose. Very helpful.

From the TexImage doc:

GL_INVALID_VALUE is generated if width is less than 0 or greater than GL_MAX_TEXTURE_SIZE.

It sounds like TexImage could succeed even if the texture can’t actually be handled by the GPU! And thus the purpose of the proxy texture… what a pain in the neck…

Certainly. For example:

a) Make a 256MB texture. Draw with it.
b) Make 50 textures like that. Bind and draw one at a time.
c) Bind all 50 simultaneously (e.g. 10 in the vertex shader and 10 each in the control / eval / geometry / fragment shaders) and draw.

The simple case a) can throw OUT_OF_MEMORY if either the CPU or the GPU can’t allocate or address 256 MB. That’s a rare condition today (but think back to a Rage128 with 8 MB of VRAM, or, perhaps more relevantly, the first iPhone).

b) needs about 12.5 GB of memory. In a 32-bit application, this will always fail. But in a 64-bit application / OS, it will generally work if a) worked, because memory is virtualized and only the hot “working set” needs to be paged in for each draw call. It will be slow due to paging, but it will probably work.

c) will generally fail on today’s devices (unless you happen to have more than 12.5 GB of VRAM…). To work, the GPU would have to have fully robust virtual memory, including talking to the OS VM to page from disk on a page fault.

The same example can be made with buffer objects: one giant array of vertex attributes can work, but then bind all 16 at once (plus an element array, plus transform feedback buffers, plus an indirect buffer, etc.). And there’s no proxy mechanism for buffers; all you can do is check for OUT_OF_MEMORY.
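For the buffer case that really is all there is to it. A minimal sketch (assuming a current context and a loader that exposes the GL 1.5 buffer entry points):

/* No proxy mechanism for buffers: allocate, then check for GL_OUT_OF_MEMORY.
 * Returns the buffer name, or 0 if the allocation failed. */
GLuint allocVertexBufferOrZero(GLsizeiptr bytes)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    while (glGetError() != GL_NO_ERROR) { }   /* clear any stale errors */

    glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_STATIC_DRAW);

    if (glGetError() == GL_OUT_OF_MEMORY) {
        glDeleteBuffers(1, &vbo);
        return 0;   /* caller can retry with a smaller allocation */
    }
    return vbo;
}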

Texture proxies might have made sense in 1992, when there was only one texture unit. They don’t make any sense today.

Also consider that other GL object types don’t have proxies, which should drop a hint that the proxy mechanism is of quite limited value.

The correct way to do this is, of course, to figure out what target hardware you want to aim for (and don’t even think about saying “all hardware”, as that’s completely unrealistic), do some research to find out how much video RAM is commonly available on it, and specify your art resources to fit within that budget.
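One rough way to enforce a budget like that is to keep a running estimate of what you’ve uploaded and compare it against the target’s VRAM (or the meminfo queries mentioned above). A sketch of the kind of estimate I mean; drivers may pad, tile, or compress, so treat it as a planning number, not ground truth:

/* Approximate footprint of a 2D texture: width * height * bytes per texel,
 * plus roughly one third extra for a full mipmap chain. */
size_t estimateTextureBytes(size_t width, size_t height,
                            size_t bytesPerTexel, int mipmapped)
{
    size_t base = width * height * bytesPerTexel;
    return mipmapped ? base + base / 3 : base;
}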

glTexImage() with a proxy target will succeed unless the parameters are clearly invalid. To determine whether the described texture is actually supported, you need to query the texture dimensions with glGetTexLevelParameter(); if the texture isn’t supported, the dimensions (and all other image state) will be zero. From the docs:

If target is GL_PROXY_TEXTURE_2D or GL_PROXY_TEXTURE_CUBE_MAP, no data is read from data, but all of the texture image state is recalculated, checked for consistency, and checked against the implementation’s capabilities. If the implementation cannot handle a texture of the requested texture size, it sets all of the image state to 0, but does not generate an error (see glGetError). To query for an entire mipmap array, use an image array level greater than or equal to 1.
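In code, the proxy check described above looks roughly like this (a sketch; GL_RGBA8 is just an example format):

/* Returns nonzero if the implementation claims it can handle a 2D RGBA8
 * texture of the given size, zero otherwise. */
int proxyCheck2D(GLsizei width, GLsizei height)
{
    /* Describe the texture to the proxy target: no storage is allocated and
     * no error is generated on failure; the proxy image state is zeroed. */
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    GLint w = 0;
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);

    /* Zero width means the proxy rejected it; nonzero means the size/format
     * is supported in principle -- which, as discussed above, still says
     * nothing about whether the real allocation will succeed, so you check
     * glGetError after the non-proxy glTexImage2D anyway. */
    return w != 0;
}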