NVIDIA 280.x+ and GeForce 4xx+

Anyone with a GeForce 4xx or 5xx (Fermi) card tried 280.x+? How about on Linux?

Getting odd tiled corruption and X server lockup, merely from starting the X server. Don’t even get to running my app. Can’t even switch to a vtty to kill it – need to log in remotely.

I have a Quadro 4000 that’s running fine with 280.13 on Ubuntu 9.10 (kernel 2.6.32-33). I’m running Metacity without desktop effects. Perhaps it’s a bad driver install, or just a GeForce issue?

Sounds similar to a problem I had on Windows 7 with this driver rev. NV changed the dynamic GPU performance modes in the 280 drivers and that (in so far as it seems to have been the cause – see the next paragraph) resulted in lots of black screens after the “Starting Windows” logo animation; I could disable the driver and it would run fine, and safe mode obviously worked. Rolling back to 275 prevented it from happening too.

This was reverted to the 275 behaviour in 285, and so far all seems well; no repeats of the behaviour, although I’m keeping an eye on it.

Relevant extract from the 285 release notes:

With the Release 280 drivers, NVIDIA GPU clock speeds will increase more quickly in response to increased graphics demands. Conversely, with lower graphics use the GPU clock speed slows down more quickly, conserving as much power as possible.

In the Release 280 drivers, some users reported a noticeable fluctuation in clock speeds while engaging in various tasks on the PC. With the Release 285 drivers, adjustments have been made to reduce the sensitivity to levels similar to the R275 driver.

OK, thanks for the feedback, guys. Sounds like it might be worth trying with power management disabled. Failing that, I may just hold off a version and see if this gets fixed.

Totally disappointed with R285.62 on Win7 64-bit with GTX470! :frowning:

“Dynamic GPU Performance Mode” means a 50-100% faster transition to a lower performance state. Nothing spectacular. Instead of 15 s, the transition now takes from about 10 s (for the first transition, from the highest P-state) down to about 8 s.

None of my “advanced” applications work with R285.62. The applications always block on glClear(). :frowning:

Also, I get a very strange message:
Source: OpenGL
Msg. Type: Performance
Msg. ID: 131218
Severity: Medium
Message: Program/shader state performance warning: Fragment Shader is going to be recompiled because the shader key based on GL state mismatches.

What does this mean?

I have found that debug_output synchronous mode actually blocks execution. If it is disabled, a console window opens with the following messages:

GL_CLAMP_VERTEX_COLOR_ARB is clamped.
GL_CLAMP_FRAGMENT_COLOR_ARB is clamped.
There are 0 constants bound by this program.
Texture 0 is inconsistent between its mipmap parameters and specified mipmap levels.

Also:
GL_CLAMP_VERTEX_COLOR_ARB is clamped.
GL_CLAMP_FRAGMENT_COLOR_ARB is clamped.
Texture 0 uses an 32 bit floating point format.
Texture 0 is bound to texture target GL_TEXTURE_2D.

It is the first time I have seen a console window with debugging information (something like GLExpert mode). The applications work correctly now, but with the debugging console. And the debug_output message stays (“Fragment Shader is going to be recompiled because the shader key based on GL state mismatches.”).
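In case it helps anyone hitting the same warnings, here is my rough sketch of what should silence the texture and clamping ones (tex is a placeholder texture name; glClampColorARB comes from ARB_color_buffer_float):

#include <GL/glew.h>

void silence_texture_warnings(GLuint tex)  /* tex: hypothetical texture name */
{
    /* The mipmap warning usually means the min filter expects mipmap
       levels that were never specified; either upload them or declare
       that only level 0 exists. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* The clamp warnings just report that colour clamping is on; with
       32-bit float targets it is usually wanted off. */
    glClampColorARB(GL_CLAMP_VERTEX_COLOR_ARB, GL_FALSE);
    glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);
}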

Those are simple performance warnings. If you don’t want them, ignore them based on the message type or severity. Not all calls to the ARB_debug_output callback will be about errors.
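For example, a minimal sketch of such filtering (assuming GLEW for extension loading; the callback name and filtering policy are just placeholders):

#include <stdio.h>
#include <GL/glew.h>

static void GLAPIENTRY my_debug_callback(GLenum source, GLenum type, GLuint id,
                                         GLenum severity, GLsizei length,
                                         const GLchar *message,
                                         const GLvoid *userParam)
{
    (void)source; (void)severity; (void)length; (void)userParam;
    /* Drop the performance chatter; report everything else. */
    if (type == GL_DEBUG_TYPE_PERFORMANCE_ARB)
        return;
    fprintf(stderr, "GL debug [id %u]: %s\n", id, message);
}

void install_debug_callback(void)
{
    /* The exact GLDEBUGPROCARB typedef varies between header versions,
       hence the cast. */
    glDebugMessageCallbackARB((GLDEBUGPROCARB)my_debug_callback, NULL);
    /* Alternatively, tell the driver itself not to deliver performance
       messages at all: */
    glDebugMessageControlARB(GL_DONT_CARE, GL_DEBUG_TYPE_PERFORMANCE_ARB,
                             GL_DONT_CARE, 0, NULL, GL_FALSE);
}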

Program/shader state performance warning: Fragment Shader is going to be recompiled because the shader key based on GL state mismatches.
This means that you compiled the shader (or linked it) with a different OpenGL state than the one it was actually used with.
NVIDIA like to optimise their shaders to run as fast as possible under the current OpenGL state (e.g. eliminating code because a uniform is set to 0 or 1).
Therefore the shader has to be recompiled if the state changes in a way that would cause the optimised shader to give incorrect results.
If you set up the OpenGL state you need to run the shader in before you compile/link it, then you won’t get this message.
I wouldn’t worry about it though, unless the recompilation is causing an obvious glitch in your animation that you want to get rid of.
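To make that concrete, a small sketch (compile_and_link_program and draw_scene are hypothetical helpers):

#include <GL/glew.h>

GLuint compile_and_link_program(void);  /* hypothetical: compiles VS/FS, links */
void draw_scene(void);                  /* hypothetical */

void setup(void)
{
    /* Bind the state the shader will actually run under *first*... */
    glEnable(GL_DEPTH_TEST);
    glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);

    /* ...then compile/link, so the driver's state-specialised version
       already matches and no recompile is needed at draw time. */
    GLuint prog = compile_and_link_program();
    glUseProgram(prog);
    draw_scene();
}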

Thank you Simon for clarifying things!

Is there any “confession” in the form of a technical report or whitepaper from NVIDIA that they really do that? It seems too inefficient to recompile shaders on every state change. I’ll try brutally forcing uniform changes to see what messages I get.
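Something like this is what I have in mind – a sketch, where prog, u_factor and draw_scene are placeholders – flipping a uniform between the two “special” values across draws and watching the debug output:

glUseProgram(prog);
GLint loc = glGetUniformLocation(prog, "u_factor");  /* hypothetical uniform */
for (int i = 0; i < 8; ++i) {
    /* Alternate between 0 and 1, the values the driver reportedly
       specialises on, and watch for recompile warnings. */
    glUniform1f(loc, (i & 1) ? 1.0f : 0.0f);
    draw_scene();  /* hypothetical */
}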

As a developer, I like to have a debug console, but it would be nice to have a way to switch it off on the users’ machines.

I’m not sure that I understand this hint correctly. How can I run the shader before being compiled/linked?

I’m not sure that I understand this hint correctly. How can I run the shader before being compiled/linked?

He’s saying to set up the state that the program will be run in before compiling and linking. That is, if you’re going to run a program when depth tests are enabled and the depth range is a certain value, then you set those before compiling/linking the program.

Now granted, I’d prefer to know exactly which state matters for this sort of thing, as setting all state beforehand can be a fairly significant burden if you want to load your programs early in the program’s execution.
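One way to ease that burden – a sketch, assuming plain GL 2.x and a made-up LazyProgram type – is to compile the shaders at load time but defer linking until the first draw, when the real state is already bound:

#include <GL/glew.h>

typedef struct {
    GLuint vs, fs;   /* shader objects, compiled at load time */
    GLuint prog;     /* 0 until first use */
} LazyProgram;

GLuint lazy_get_program(LazyProgram *p)
{
    if (p->prog == 0) {
        /* First use: the actual draw state is bound now, so the driver's
           state-specialised compile happens once, against the right state. */
        p->prog = glCreateProgram();
        glAttachShader(p->prog, p->vs);
        glAttachShader(p->prog, p->fs);
        glLinkProgram(p->prog);
    }
    return p->prog;
}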

I didn’t succeed in getting rid of that performance warning. :frowning:

I moved the shader initialization code to the end of the setup (PrepareScene) procedure, but without success. What actually generates the message is the first call to glCreateProgram(); at that moment no shaders have been loaded or compiled, so the driver has no idea what the shaders look like. I guess this is some driver error, or I’ve missed something. I see the same behavior on all 285.xx drivers, on both XP 32-bit and Win7 64-bit, with GTX470 and 9600GT cards.
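A minimal repro sketch, assuming a debug callback like the one above is already installed:

/* No shader objects exist yet at this point. */
GLuint p = glCreateProgram();  /* with ARB_debug_output enabled, this call
                                  alone already raises the "Fragment Shader
                                  is going to be recompiled" warning for me */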

I have another question, unrelated to the previous one (but prompted by the same experience): how can we control GLExpert? The console window seems to belong to GLExpert, since it doesn’t appear on Windows XP.

On some previous drivers (e.g. R266 on Vista), I tried to get GLExpert’s output, but without success. The calls that set the global GLExpert mode succeed, but when I try to pass a callback function I get the error NVAPI_INSTRUMENTATION_DISABLED.

Any suggestion?
