Our application's UI hangs on exit, and the call stack we obtained points to wglDeleteContext.
The issue occurs when using hardware rendering (NVIDIA driver, OpenGL 4.1.0).
When software rendering (OpenGL 1.1.0) is used, the application exits cleanly.
What could cause a hang when deleting the rendering context?
Well, the first thing I notice is that you're calling glFinish() after you drop your GL context (wglMakeCurrent(0,0)). You need to swap those two calls.
The second thing I see is that your NVIDIA GL driver is trying to present buffers (e.g. process a swap buffers) inside the context deletion. It could be that when you delete a context, it tries to "catch up" on all the queued work for that context that hasn't been executed yet. Or it could be that the glFinish() was actually queued (undefined behavior, since no context was current) and is triggering that. Not sure.
In any case, what I'd suggest to fix that is to call glFinish() before dropping your context (wglMakeCurrent(0,0)), which is what I suggested above.
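To make the ordering concrete, a shutdown sequence along those lines might look like the sketch below. This is just an illustration, not your actual code; hdc and hglrc are assumed to be the device context and rendering context from your setup.

```cpp
// Sketch: drain all pending GL work while the context is still
// current, and only then release and delete it.
wglMakeCurrent(hdc, hglrc);   // context must be current for glFinish()
glFinish();                   // block until all queued commands complete
wglMakeCurrent(NULL, NULL);   // now drop the context...
wglDeleteContext(hglrc);      // ...and delete it
```

The point is that glFinish() is only meaningful while a context is current; once you've dropped it, there is nothing for the call to synchronize against.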
A probable cause is cleaning up GPU-allocated objects such as textures, buffers, etc. Since you say you’re also using the software GL 1.1 implementation I’m going to assume that you haven’t created any buffer objects, but I would be interested in knowing how many textures you have created, and if you’ve created any display lists (and if so, how many).
I know you say that you have a "delete the texture" operation in there, but glDeleteTextures is not actually specified to free memory: all it is required to do is make the texture name(s) available for reuse by a subsequent call to glGenTextures. The last time I saw similar behaviour, it was caused by erroneously creating new textures (or other objects) every frame and never destroying them, resulting in a huge number of objects that actually had to be destroyed during context shutdown.
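A minimal sketch of the leak pattern described above, alongside a safer alternative. The function names and texture dimensions are illustrative, not from the original post:

```cpp
// BUGGY: a new texture name is generated every frame and never deleted,
// so thousands of live textures pile up for the driver to tear down
// at context shutdown.
void RenderFrameLeaky(const void* pixels) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // ... draw ...
    // missing: glDeleteTextures(1, &tex);
}

// BETTER: create the texture once, update its contents each frame,
// and delete it exactly once during cleanup.
GLuint g_tex = 0;
void RenderFrameReuse(const void* pixels) {
    if (g_tex == 0) {
        glGenTextures(1, &g_tex);
        glBindTexture(GL_TEXTURE_2D, g_tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);  // allocate storage
    }
    glBindTexture(GL_TEXTURE_2D, g_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);    // update only
    // ... draw ...
}
```

A quick sanity check is to log how many glGenTextures calls your app makes over its lifetime; if that number grows with runtime, shutdown cost will too.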
The other thing that occurs to me - you mentioned this happens in a destructor, so I am assuming that you’re using C++ - are you absolutely certain about when your destructors run (and the order they run in)?
Also, you might check for GL errors while your GL context is current, and also check for failure from your wglMakeCurrent() calls. That is, make sure everything looks ok up until your wglDeleteContext call.
sleep won't guarantee that you'll get the same behavior on all the machines your program runs on. It might also stop working on that machine for some other reason.
I had a similar issue. It was related to misused FBOs (read and write buffers were still bound to deleted FBOs). So I'd lean toward something like what mhagain said. It might be time for you to tell us what your program actually does.
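For reference, the fix in that kind of situation is to make sure nothing is still bound to an FBO before deleting it. A sketch, assuming fbo is a framebuffer object created earlier with glGenFramebuffers:

```cpp
// Unbind the FBO from both the read and draw binding points before
// deleting it, so no stale read/draw buffer references remain.
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
fbo = 0;  // avoid accidental reuse of the stale name
```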
Silence already caught this. What I was suggesting was something like this:
#include <assert.h>
...
BOOL res = wglMakeCurrent( hdc, hglrc );
assert( res );
glDeleteTextures( 1, &texture );  // "delete the texture"
assert( glGetError() == GL_NO_ERROR );
glFinish();
assert( glGetError() == GL_NO_ERROR );
res = wglMakeCurrent( 0, 0 );
assert( res );
res = wglDeleteContext( hglrc );
assert( res );
…in the meantime, wglDeleteContext does the unfinished jobs. …
… When sleep is added before context deletion, issue does not occur.
::glFinish();
Sleep(1000);
This, in combination with your statement that glFinish() alone doesn't cut it, suggests to me that the driver may not be waiting for pending work to complete before returning from glFinish(). That's concerning.
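While you debug this, a sync object is a more deterministic way to wait for the GPU than Sleep(). This is only a sketch, and it assumes your context exposes sync objects (core in GL 3.2, or via ARB_sync):

```cpp
// Insert a fence after all rendering commands, then block (with a
// timeout) until the GPU has actually executed everything before it.
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                 GLuint64(1000000000));  // 1 s timeout, in ns
glDeleteSync(fence);
// status is GL_ALREADY_SIGNALED or GL_CONDITION_SATISFIED on success;
// GL_TIMEOUT_EXPIRED means the GPU still wasn't done after 1 second,
// which would itself be useful diagnostic information here.
```

Unlike Sleep(1000), this waits exactly as long as needed and tells you whether the wait actually succeeded.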
However, why are you making your GL context current right before deleting a texture? Why was it not already current? Are you swapping GL contexts on the same thread, or sharing the same context across multiple threads? If so, it's more likely that something you're doing with GL across multiple threads is causing this problem.