iobus
02-09-2011, 08:42 AM
I am seeing an issue where GL calls that happen in parallel cause a noticeable performance hit in my application. I have two threads, each with its own GL context, rendering to two different windows. Under older NVIDIA drivers, these two renderers could run at full tilt with no problems. Now it seems that if the CPU attempts to execute two GL calls simultaneously, the application takes a huge performance hit.

I can confirm this by putting a static mutex around every GL call, which prevents the CPU from making concurrent GL calls. With these mutexes in place, the application runs fine. I really don't want to have to wrap every GL call with a mutex; I feel the driver should be taking care of concurrency and scheduling of GL calls. It seems odd that something this simple would all of a sudden be implemented wrong in the driver.
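For reference, here is roughly what the workaround looks like. This is a minimal sketch, not my actual code: the renderFrame function, the window handle, and the context setup are placeholder assumptions, and only the locking pattern is the point.

```cpp
#include <mutex>
#include <windows.h>
#include <GL/gl.h>

// One process-wide lock shared by both render threads.
static std::mutex g_glMutex;

// Wrap a single GL call so only one thread is inside the driver at a time.
#define GL_CALL(expr)                                  \
    do {                                               \
        std::lock_guard<std::mutex> lock(g_glMutex);   \
        expr;                                          \
    } while (0)

// Hypothetical per-thread render function; each thread has already made
// its own context current on its own window (setup elided).
void renderFrame(HDC hdc)
{
    GL_CALL(glClearColor(0.0f, 0.0f, 0.0f, 1.0f));
    GL_CALL(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));
    // ... remaining draw calls, each wrapped the same way ...
    SwapBuffers(hdc);
}
```

With the lock in place the two threads never execute GL calls concurrently and performance is fine; without it, the newer drivers slow to a crawl.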
This problem arose when I switched from driver version 181.22 to the latest drivers (260.99 at the time), and it is present in every version I have tried going back to the 190/191 series.
I am running Windows XP with a GeForce 9500 GT GPU.
Does anyone have any insight into this problem, or is anyone else experiencing it?
Thanks,
-Kyle