glTexImage2D doesn’t report an error

Hello,
I work with OpenGL 3.3.
I am trying to create an OpenGL texture with a size defined by the user. After calling glTexImage2D I call glGetError; if there is an error, I show a message asking the user to create a smaller texture.
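Simplified, my code looks roughly like this (the format, size variables, and pixel data here are illustrative, not my exact code):

```c
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* userWidth, userHeight, pixels stand in for whatever the user supplied. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, userWidth, userHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* Check immediately whether the allocation was rejected. */
if (glGetError() != GL_NO_ERROR) {
    /* Tell the user to pick a smaller size. */
}
```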

It works fine on Linux (32-bit) with an Nvidia GeForce GTX 580 video card.
When I try this on a Mac (OS X 10.9.2) with an ATI Radeon HD 5670 (512 MB), the error catching doesn’t work:
If the texture is small, it works: I can create the texture and the render is correct.
But if the texture is too big, glGetError returns no error and my render is grey or indeterminate. Sometimes GL_OUT_OF_MEMORY is detected during the buffer swap (long after the glTexImage2D call, and at random).
After creation, the width of the texture returned by glGetTexLevelParameteriv is correct.
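The query I use afterwards is roughly:

```c
GLint w = 0;
/* Query mip level 0 of the currently bound texture. */
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
/* w matches the requested width even when the render is broken. */
```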

Have you got any ideas?

Thanks in advance.

I don’t think you can rely on getting this error. Section 2.5 (GL Errors) in the OpenGL 3.3 spec says: “If memory is exhausted as a side effect of the execution of a command, the error OUT_OF_MEMORY may be generated.” Notice the “may” in that language. Also, the spec for glTexImage2D (https://www.opengl.org/sdk/docs/man3/xhtml/glTexImage2D.xml) does not specifically mention GL_OUT_OF_MEMORY as a possible error.

Dealing with out-of-memory conditions in OpenGL is a tricky topic. Memory associated with the texture may be allocated long after your glTexImage2D call returns. Picture a system with VRAM: the VRAM copy of the texture may not be allocated until the texture is used for rendering. If not enough VRAM can be made available, and the system does not fall back to texturing from system memory when VRAM runs out, your “texture allocation” could fail while the draw call is processed (which, due to the asynchronous nature of OpenGL, can be long after your glDraw* call returns). The texture could also work initially, get evicted from VRAM later, and then fail to be re-allocated when it is next used. In other words, allocations related to your texture can fail almost any time the texture is used.
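As a consequence, checking glGetError right after the upload proves little. One sketch of a workaround, with no guarantee from the spec that the error shows up even then, is to actually draw with the texture and drain the pipeline before polling:

```c
/* Force the driver to really allocate and use the texture.
   drawSomethingWithTexture is a placeholder for your own draw call. */
drawSomethingWithTexture();
glFinish();  /* block until all prior GL commands have completed */
if (glGetError() == GL_OUT_OF_MEMORY) {
    /* The allocation most likely failed; fall back to a smaller texture. */
}
```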

How best to deal with this as an application programmer is an interesting discussion topic. IMHO, the only really safe approach is to test your application on specific configurations, list those as supported, and warn your users that the app is not guaranteed to work on anything untested. But I would be happy to hear other points of view.

Thank you for your answer.
Indeed, I hadn’t noticed that the OUT_OF_MEMORY error isn’t mandatory.
I understand that the error can appear at any time, and I could deal with that: I could check for OpenGL errors every frame and inform the user if one occurs, along the lines of the sketch below.
But generally there is no error at all: the render is invalid, yet OpenGL doesn’t raise any error. In that case I don’t know what I can do.
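For reference, the per-frame check I have in mind is just a loop like this (reportToUser is a placeholder for my own notification code):

```c
void checkGlErrors(void)
{
    GLenum err;
    /* glGetError returns one queued error per call, so drain them all. */
    while ((err = glGetError()) != GL_NO_ERROR)
        reportToUser(err);
}
```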

Testing configurations isn’t a safe option for my application, because it can be used on many different computers. The capacity differs from machine to machine and I can’t test every configuration, so I would have to choose a small size that runs everywhere.
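The only portable limit I know how to query is GL_MAX_TEXTURE_SIZE, and as far as I understand it only bounds the dimensions; it says nothing about available memory:

```c
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
/* requestedSize is the user's choice. Clamping it helps, but a
   maxSize x maxSize texture may still not fit in memory, so this
   is not a real solution. */
if (requestedSize > maxSize)
    requestedSize = maxSize;
```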

I find it strange that OpenGL can’t tell me that it can’t draw. Do you think I have misunderstood something?
