A suggestion that may solve both your problem and mine:
It would be helpful if there were a cross-platform way via OpenGL to just tell the GPU to give you max performance (i.e. disable power management).
I think this would solve your problem too, without having to expose specific clock frequencies, power states, and so on. Just a hint (GL_MAX_PERFORMANCE_HINT) that says “stop screwing with my clocks and give me full GPU power right now!”.
…
I say this, having just spent (wasted?) a few hours at work over the last 2 days trying to figure out a wacky frame-rate hiccup. Every 45 seconds the app would break frame for ~0.5 seconds, then go back to making frame for the next 45 seconds. The eyepoint was fixed, same frustum every frame, no loading, nothing really changing at all that might cause this. Odd. This was NOT on a laptop, but on a rack-mount server system running GeForce 9800 GTs.
Naturally, I presumed it was something we were doing: culling, drawing, message processing, fighting with some background process for the single-core CPU (yeah, old system), a buggy ethernet driver locking up strangely in the kernel, or even the NVidia driver doing housekeeping every 45 seconds.
After ruling out our app and ruling out background processes, GPU dynamic clock tweaking occurred to me. Yep, that’s it. Every 45 seconds the NVidia driver was throttling the clocks back from 550 MHz to 300 MHz, then realizing after ~0.5 sec that it couldn’t get away with it, and throttling them back up! NVidia calls this PowerMizer, and it is enabled by default. Of course that had to stop. This ain’t no laptop! We can’t break frame.
Now, how to disable it? Well, that comes down to OS-specific, vendor-specific hacks to tell the driver to knock it off and give you full perf. Annoying.
In this case, you can hack the following cryptic NVidia-specific and OS-specific directive into the kernel module configuration file (/etc/modprobe.d/options), then bring down the X display server, unload/reload the nvidia kernel module, and restart the X server (or just reboot):
options nvidia NVreg_RegistryDwords="PerfLevelSrc=0x2222"
or arrange to have this NVidia-specific command run every time after bringing up the X server:
nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
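One place to arrange that on a typical Linux setup is the X session startup script. This is just one illustrative arrangement, assuming an ~/.xinitrc-style session; display managers have their own session hooks, and the window-manager line below is a placeholder:

```shell
# ~/.xinitrc (illustrative) -- force PowerMizer to "Prefer Maximum
# Performance" on the first GPU every time the X server comes up.
nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1" &

exec your-window-manager   # placeholder for whatever normally starts your session
```

Unlike the modprobe approach, this one takes effect without unloading the kernel module, but it has to be re-run for every X server start.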
These work, but both are vendor-specific and OS-specific.
So for OpenGL applications that either always want max performance or selectively want it (for profiling, say), it sure would be nice to have a cross-vendor, cross-platform way via OpenGL to say “GIVE ME MAX GPU PERFORMANCE!”.