i recently got some user crash reports. The crashes occurred when users loaded images, but only with “Threaded Optimizations” set to “On” or “Auto”.
Is it possible that these “threaded” optimizations tend to crash if the application releases the current render context by calling
wglMakeCurrent(MXW_NULL, MXW_NULL)?
No. The Threaded Optimizations are still consistent with the OpenGL specification. I can run my own OpenGL code, and that of others (Quake series, C4 engine, etc) with those optimizations turned on.
If you can create a small test case that causes the crashing, then NVIDIA developer support would probably want to get that, so they can reproduce and fix the problem. However, it’s probably more likely that you’re doing something that is not actually to spec, and that’s causing the crash.
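For context, the pattern the question refers to is simply releasing the context on the calling thread, which is legal WGL on its own. A minimal sketch (MXW_NULL in the post presumably wraps NULL, so plain NULL is used here; hDC and hGLRC are assumed to come from the application’s own setup):

```c
#include <windows.h>

/* After this call no render context is current on the calling thread;
 * the thread must not issue GL calls until a context is made current
 * again. */
void ReleaseCurrentContext(void)
{
    wglMakeCurrent(NULL, NULL);
}

/* Re-acquire the context before rendering from this thread again. */
BOOL AcquireContext(HDC hDC, HGLRC hGLRC)
{
    return wglMakeCurrent(hDC, hGLRC);
}
```

On its own this is spec-conformant, which is the point of the answer above: the release itself should not be the cause of the crash.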
that’s my problem: condensing it down to what causes the crash. it’s 100% repeatable but really weird, because …
it’s too easy to be slightly off-spec in gl =) - for this reason i regularly run 3(!) independent opengl error checkers (glintercept, gdebugger, glexpert), and they all stay calm. plus my own error-checking macros. nevertheless it crashes. sigh.
all i see is
a) a crash in a thread i didn’t start (it gets created when i create a second gl window sharing the main context) and which calls lots of gl:
With GLIntercept, are you setting the “ThreadChecking” option to True?
Also, if it is not crashing with GLIntercept, you can “daisy-chain” GLIntercept against itself so you can see what OpenGL calls it is making internally.
To do this, create a directory and copy the GLIntercept DLLs into it (e.g. c:\Temp\Testing\). Set that copy’s gliConfig.ini to text logging, with logging enabled from startup.
Next, in the main gliConfig.ini in your project, set:
GLSystemLib = "c:\Temp\Testing\OpenGL32.dll"
This way the GLIntercept in the “Testing” folder will record all the OpenGL calls made by the first GLIntercept dll.
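Putting the steps above together, the chained setup might look roughly like this (a sketch: the GLSystemLib path matches the example above; the logging option names follow a typical gliConfig.ini, so check the one shipped with your GLIntercept version for the exact syntax):

```ini
; --- c:\Temp\Testing\gliConfig.ini (the second, "outer" GLIntercept) ---
; Plain-text function logging from startup, so every call made by the
; first GLIntercept DLL is recorded.
FunctionLog
{
  LogEnabled = True;
  LogFormat  = "Text";
}

; --- gliConfig.ini next to your application (the first GLIntercept) ---
; Forward to the second GLIntercept copy instead of the real system DLL.
GLSystemLib = "c:\Temp\Testing\OpenGL32.dll"
```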
(If you are dumping OpenGL calls in the main app, you can open the output logs in a “diff” program like windiff to see what the differences are.)
Then, once you know what additional calls are being made, you can start inserting them into your app until it stops crashing (hopefully - unless it is a thread sync bug).
i create a second gl window sharing the main context
So, that’s a clue! If you instead create a second context, not sharing lists, does it crash? Do the two windows share the same window message queue/thread? I assume the crash goes away when you un-check the “Threaded Optimization” option in the NVIDIA control panel?
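The experiment suggested above could be as small as this. A sketch, with two caveats: hDC2 is assumed to be the second window’s device context and mainRC the existing main context, and “sharing the main context” in the original post may mean either reusing the main HGLRC in the second window or sharing via wglShareLists — the wglShareLists variant is shown here:

```c
#include <windows.h>

/* Variant A (the crashing path): a second context sharing objects
 * (display lists, textures) with the main one. */
HGLRC CreateSharedContext(HDC hDC2, HGLRC mainRC)
{
    HGLRC rc2 = wglCreateContext(hDC2);
    wglShareLists(mainRC, rc2);
    return rc2;
}

/* Variant B (the test): an independent, non-sharing context. If the
 * crash disappears with this variant, sharing is implicated. */
HGLRC CreateIndependentContext(HDC hDC2)
{
    return wglCreateContext(hDC2);
}
```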
If your project can be packed up in a zip file and installed somewhere else, you might want to send it as-is (without simplification) to the NVIDIA devrel people. They’ve seen lots of weird test graphics things and half-baked renderers (including mine), so don’t feel ashamed. If they can reproduce the crash, they can fix it – or get back to you and tell you what you’re doing wrong.
after some testing and searching i found this advice:
You can tame the nvidia driver thread by restricting the process affinity to 1 CPU when you create the window & opengl context. Restore it afterwards (use the system mask).
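Sketched in Win32 terms, that workaround reads roughly like this (a sketch, not a drop-in fix; CreateWindowAndGLContext() is a hypothetical placeholder for your own window + context setup):

```c
#include <windows.h>

/* Hypothetical placeholder for the app's own window creation plus
 * wglCreateContext/wglMakeCurrent setup. */
void CreateWindowAndGLContext(void);

void CreateGLWindowPinnedToOneCPU(void)
{
    DWORD_PTR processMask, systemMask;

    /* Remember the current process affinity and the system mask. */
    GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask);

    /* Pin the process to CPU 0 while the window and context are created,
     * so the driver's worker thread is confined to the same core. */
    SetProcessAffinityMask(GetCurrentProcess(), 1);

    CreateWindowAndGLContext();

    /* Restore full affinity using the system mask, as suggested above. */
    SetProcessAffinityMask(GetCurrentProcess(), systemMask);
}
```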
Yes, the issue in the quoted thread is a driver bug that appears on dual-core systems with GeForce graphics. But there it’s “only” a weird performance thing. If your application crashes, that seems to be a new issue.
Anyway, if you can reproduce it, please send it to nVidia! Maybe the two problems are related to each other.