Weird NVIDIA performance problem

Hi,

I've got the weirdest thing ever. Roughly every third time I run my app, it goes into slowdown mode and every 10th-40th frame takes 150-600 ms to render (compared to the usual 30-60). The rest of the time, it runs just fine.

This is with a GeForce 7800 GTX, driver 93.71, on a dual dual-core machine.

One context, five FBOs (depth peeling), dual-view.

Has anyone seen anything like this?

I've noticed something similar. When I run my app for the first time, the FPS is around 70-80; when I shut it down and start it again, it's at 150-160. Same card (a GT, though), same drivers.

Strange. I know the guy in this post has some odd driver problems as well. His test app apparently behaves well when another GL program is running in the background.

http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic;f=3;t=015093

Also, this post seems to describe something similar to mine, only much worse. It's a bit old, though, so that problem may have been fixed since:

http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic;f=3;t=013830;p=1#000015

Similar behavior here on a GeForce 6600GT 128MB; the same application doesn't have this problem on a 7800GT 256MB.
Single context, several FBOs (small, large, with/without depth renderbuffer, RGBA8, RGB16F), single display.

It looks like memory swapping, but my game should fit into 128 MB without any problems, and it uses nearly all of its textures, VBOs and FBOs during a single frame render, so even if it were memory swapping it should occur more often than once per second.
I first noticed this on an old version of my game. That version attached a depth renderbuffer to every FBO (even those used for pure 2D rendering). It also had a bug that caused renderbuffers not to be destroyed when their FBO was deleted. This also caused some other side effects on both GPUs (screenshot: http://ks-media.ehost.pl/images/bug/bug01.png - 1.1 MB).
After cleaning everything up I have no problems on either machine, but that could simply be because the cleanup decreased memory requirements.
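In case anyone hits the same thing: deleting an FBO does not delete the renderbuffers attached to it, so they have to be deleted explicitly. A rough sketch of the fix, assuming the EXT_framebuffer_object entry points are already loaded (e.g. via GLEW) and that fbo/depthRb are just the IDs the application created earlier:

    #include <GL/glew.h>

    // Destroy an FBO together with its depth renderbuffer. Deleting the FBO
    // alone leaks the renderbuffer - that was the bug described above.
    void DestroyFbo(GLuint fbo, GLuint depthRb)
    {
        // Detach the depth renderbuffer first (not strictly required, but tidy).
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                     GL_RENDERBUFFER_EXT, 0);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

        // Delete both objects explicitly.
        glDeleteRenderbuffersEXT(1, &depthRb);
        glDeleteFramebuffersEXT(1, &fbo);
    }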

Ok. I don't think I'm running out of memory. It's a 256 MB card and we're not exactly taxing it heavily.

The PerfSDK-driver didn’t reveal much either.

I wonder if this is related to the thread that the new NVIDIA drivers create. I've tried to disable it (using registry hacks, etc.), but it's still created (visible in the debugger).

Also, when running under VTune, performance is always good, so I can't drill down and figure out what's going on in detail (except by sprinkling profilers around all the GL calls in the scene-graph lib that we're using).
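For what it's worth, the kind of profiler I mean is something like this minimal sketch: an RAII timer based on QueryPerformanceCounter that only logs suspiciously slow calls. The 1 ms threshold and the label are arbitrary.

    #include <windows.h>
    #include <cstdio>

    // Times a scope and prints a line if it took longer than ~1 ms.
    struct ScopedTimer
    {
        const char*   label;
        LARGE_INTEGER start;

        explicit ScopedTimer(const char* name) : label(name)
        {
            QueryPerformanceCounter(&start);
        }

        ~ScopedTimer()
        {
            LARGE_INTEGER end, freq;
            QueryPerformanceCounter(&end);
            QueryPerformanceFrequency(&freq);

            double ms = 1000.0 * (end.QuadPart - start.QuadPart) / freq.QuadPart;
            if (ms > 1.0)   // only report suspiciously slow calls
                printf("%s took %.1f ms\n", label, ms);
        }
    };

    // Usage: { ScopedTimer t("glDrawElements"); glDrawElements(...); }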

Could this have something to do with nVidia’s dual core problems?

http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic;f=3;t=015036

You might want to try setting the affinity and see if that helps.

Somewhat, but not completely. At least I never get the completely horrible case. It's a bit hard to test because the physics loop runs in a high-priority thread. However, setting the affinity does make the NVIDIA thread go away.

I actually end up in three different performance categories:

  • smooth (35-50 Hz)
  • semi-smooth (15-30 Hz)
  • horrible (30 Hz with occasional 300-600 ms frames)

Could this have something to do with nVidia’s dual core problems?
I had this problem on Athlon XP (32-bit), so it’s not related to dual-core processors.

My application uses just one thread plus the PortAudio library (so that's a second thread).

I seem to get quite good performance by starting the process with single-core affinity (thus disabling the NVIDIA thread) and then enabling full affinity for all threads after window/GL-context creation.

It could be a workaround.

It does seem to work, so I'll stick with that until further notice. It was quite easy to add, since I dynamically load the DLL that eventually loads nvoglnt.dll; that lets me keep the affinity restricted to a single CPU during that time using SetProcessAffinityMask().
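For the record, the workaround boils down to something like this rough sketch, where CreateWindowAndGLContext() is just a stand-in for whatever code loads the DLL and creates the GL context:

    #include <windows.h>

    extern void CreateWindowAndGLContext();   // placeholder for our actual init code

    void InitGraphicsWithSingleCoreAffinity()
    {
        HANDLE process = GetCurrentProcess();

        // Remember the original affinity so it can be restored afterwards.
        DWORD_PTR processMask = 0, systemMask = 0;
        GetProcessAffinityMask(process, &processMask, &systemMask);

        // Pin the process to one core while the driver DLL gets loaded and the
        // context is created, so the driver's worker thread isn't spawned.
        SetProcessAffinityMask(process, 1);

        CreateWindowAndGLContext();

        // Restore full affinity for all threads once the context exists.
        SetProcessAffinityMask(process, processMask);
    }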

Take a look at this link:
http://support.microsoft.com/?kbid=896256