nv: debug output performance messages

Hi,
Since the ARB_debug_output extension finally reports something on NVIDIA drivers (r285.38), I took a closer look.

In my current project I get several of the following messages:


<source: api, type: performance, severity: medium> Program/shader state performance warning: Fragment Shader is going to be recompiled because the shader key based on GL state mismatches.
        GL_CLAMP_VERTEX_COLOR_ARB is clamped.
        GL_CLAMP_FRAGMENT_COLOR_ARB is clamped.
        There are 0 constants bound by this program.
        Program references wpos.

This was generated on an OpenGL 4.2 core context, where the constants GL_CLAMP_VERTEX_COLOR_ARB and GL_CLAMP_FRAGMENT_COLOR_ARB are deprecated. I am wondering why this message is generated, why the fragment shader is recompiled, and which OpenGL state influences this behavior.
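
For context, this is roughly how the debug output is hooked up on my side (a minimal sketch; it assumes the ARB_debug_output entry points are loaded, e.g. via GLEW, that a debug context was requested at creation, and the constness of the callback's last parameter differs slightly between header versions):

#include <stdio.h>
#include <GL/glew.h>

/* Callback invoked by the driver for every debug message. */
static void GLAPIENTRY DebugCallback(GLenum source, GLenum type, GLuint id,
                                     GLenum severity, GLsizei length,
                                     const GLchar *message, const GLvoid *userParam)
{
    (void)id; (void)length; (void)userParam;
    printf("<source: 0x%x, type: 0x%x, severity: 0x%x> %s\n",
           source, type, severity, message);
}

void SetupDebugOutput(void)
{
    if (GLEW_ARB_debug_output) {
        /* Deliver messages synchronously so they appear at the offending call. */
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
        glDebugMessageCallbackARB(DebugCallback, NULL);
        /* Report everything, including the performance warnings quoted above. */
        glDebugMessageControlARB(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE,
                                 0, NULL, GL_TRUE);
    }
}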

Regards
-chris

CLAMP_VERTEX_COLOR + CLAMP_FRAGMENT_COLOR are deprecated, but if GL_ARB_color_buffer_float is available, then it will still be valid to use GL_CLAMP_VERTEX_COLOR_ARB + GL_CLAMP_FRAGMENT_COLOR_ARB.

You can control whether colors are clamped to the range 0.0 to 1.0 using:


glClampColorARB(GL_CLAMP_VERTEX_COLOR_ARB/GL_CLAMP_FRAGMENT_COLOR_ARB, GL_TRUE/GL_FALSE/GL_FIXED_ONLY_ARB);

The initial values are GL_TRUE for GL_CLAMP_VERTEX_COLOR_ARB and GL_FIXED_ONLY_ARB for GL_CLAMP_FRAGMENT_COLOR_ARB. GL_FIXED_ONLY_ARB applies clamping only to fixed-point color buffers.
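
To see which values the driver actually has in effect, the clamp state can be queried directly (a small sketch; assumes a context where GL_ARB_color_buffer_float is exposed):

#include <stdio.h>
#include <GL/glew.h>

/* Print the current color-clamp state; each query returns
   GL_TRUE, GL_FALSE or GL_FIXED_ONLY_ARB. */
void PrintClampState(void)
{
    GLint vertexClamp = 0, fragmentClamp = 0;
    glGetIntegerv(GL_CLAMP_VERTEX_COLOR_ARB, &vertexClamp);
    glGetIntegerv(GL_CLAMP_FRAGMENT_COLOR_ARB, &fragmentClamp);
    printf("vertex clamp: 0x%x, fragment clamp: 0x%x\n",
           vertexClamp, fragmentClamp);
}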

Why you get the performance warning I'm not sure. Are you writing to a fixed-point or a floating-point color buffer? Maybe try:


glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);

or, if the driver has the wrong initial value for GL_CLAMP_FRAGMENT_COLOR_ARB, this could work:


glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FIXED_ONLY_ARB);

If the warning told you how to fix it, or which value it expected, it would be more helpful.

Thanks for the answer, Dan.

This might be a case of the extension interfering with core profile context behavior, even if it hinders performance rather than functionality. Why are the shaders compiled against state that should only exist in compatibility profile contexts (glClampColor)?

Just because it’s removed from the core profile doesn’t mean that an extension can’t add it back to the core profile - which it does if GL_ARB_color_buffer_float is supported.
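
A quick way to confirm whether GL_ARB_color_buffer_float is actually exposed on the core context is to walk the core-profile extension list (a small sketch using glGetStringi; requires a GL 3.0+ context):

#include <string.h>
#include <GL/glew.h>

/* Returns 1 if the named extension appears in the core-profile
   extension list, 0 otherwise. */
int HasExtension(const char *name)
{
    GLint count = 0, i;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Usage: if (HasExtension("GL_ARB_color_buffer_float")) { ... } */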

Maybe there should be a way to tell OpenGL at context creation time which extensions are wanted and which are not. It could perhaps boost performance too, if the implementation knows that half the functionality (and enums) provided by the various extensions is never going to be used.