Uniform float multiplied by 2^x bug on ATI Radeon X1900XT / FBO / RenderToTexture

Hi,
can somebody help me verify a bug in the ATI driver?

The problem is that a uniform float value set in a GLSL shader produces wrong values if you multiply this float by a 2^x constant and assign the result to a vec.

Examples:

The uniform float is set to 1.0 (in C code or in RenderMonkey as a uniform variable; see the links at the end for the code and the RenderMonkey project).

// This code produces a black image (it should be red): test evaluates to (0.0, 0.0)!

  
uniform float testFloat;
void main(void)
{
   vec2 test = vec2(2.0*testFloat, 0.0);
   gl_FragColor.rg = test;
   gl_FragColor.ba = vec2(0.0, 0.0);
}

// This also produces a black image (4.0 is a power of two)

 
uniform float testFloat;
void main(void)
{
   vec2 test = vec2(4.0*testFloat, 0.0);
   gl_FragColor.rg = test;
   gl_FragColor.ba = vec2(0.0, 0.0);
}

// This code produces a red image (3.0 is not a power of two)

 
uniform float testFloat;
void main(void)
{
   vec2 test = vec2(3.0*testFloat, 0.0);
   gl_FragColor.rg = test;
   gl_FragColor.ba = vec2(0.0, 0.0);
}

// This produces a red image too; OK

 
uniform float testFloat;
void main(void)
{
   gl_FragColor.r = 2.0 * testFloat;
   gl_FragColor.gba = vec3(0.0, 0.0, 0.0);
}

Tested with a Radeon X1900, a 9800 Pro, and a 9600.
But only the X1900XT shows this bug.

Here is the ATI RenderMonkey project:
http://www.fbihome.de/~dacri/UniformAtiGLSLBug.rfx

Here is a Visual Studio 6.0 project (GLUT and GLEW included, 2.7 MB):
http://fbihome.de/~dacri/UniformAtiGLSLBug.rar

And here is just the .cpp file:
http://fbihome.de/~dacri/FBO_MRTTest.cpp

ATI X1800 XL also demonstrates this bug (I’ve tested it with your RenderMonkey project). It seems to be a common bug for the entire ATI X1K series…

Thanks for testing!

This seems to be a really serious bug.
It has nothing to do with FBO or RenderToTexture.
Here is a very simple RenderMonkey project without textures or the render-to-texture feature.
The simplest GLSL project possible :smiley:

http://www.fbihome.de/~dacri/AtiBugEasy.rfx

Funny, this code works:

 
uniform float fTest;
void main(void)
{
   vec2 test = vec2(2.0*fTest+0.001, 0.0);
   gl_FragColor.rg = test;

   gl_FragColor.ba = vec2(0.0, 0.0);

}
vec2 test = vec2(2.0*fTest, 0.0); <--- without the offset, this evaluates to vec2(0.0, 0.0)

but this produces a black image:

  
vec2 test = vec2(2.0*fTest+0.0001, 0.0);  <--- should be vec2(2.0001, 0.0)

Workaround: add a small fractional value to every uniform float before using it :smiley:

Funny too: an if statement sees the correct value in the vector.

This produces a red ball:

  
uniform float fTest;
void main(void)
{
      
   vec2 test = vec2(2.0*fTest, 0.0);
   if(test.x>1.0)
      gl_FragColor = vec4(1.0, 0.0, 0.0, 0.0);
   else
      gl_FragColor = vec4(0.0, 1.0, 0.0, 0.0);

}

The native instructions can multiply their outputs by 2, 4, 8, 1/2, 1/4, or 1/8 for free. I’ll bet the compiler screwed something up with that in your program because you were writing directly to the result color with such a multiply. (Maybe check that using these values is exactly when the problem occurs.) There are separate write masks for writing to a temp register or the final output, and it may be the case that the free multiply can’t be used for the final output, but the compiler used it anyway.
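To see which multipliers that theory covers, here is a small host-side Python sketch (illustrative only; `is_free_multiply` is a made-up helper, not ATI's actual compiler logic). The "free" output scales named above are exactly the powers of two 2^k with k in ±1, ±2, ±3, which `math.frexp` makes easy to detect:

```python
import math

def is_free_multiply(c: float) -> bool:
    """True if constant c is one of the 'free' hardware output scales:
    2, 4, 8, 1/2, 1/4, or 1/8 (a power of two 2^k with 1 <= |k| <= 3)."""
    m, e = math.frexp(c)      # c == m * 2**e, with 0.5 <= m < 1 for c > 0
    k = e - 1                 # if m == 0.5, then c == 2**k exactly
    return m == 0.5 and k != 0 and abs(k) <= 3

# The shader variants in this thread:
for c in (2.0, 3.0, 4.0):
    print(c, is_free_multiply(c))   # 2.0 True, 3.0 False, 4.0 True
```

This matches the reports above: the 2.0* and 4.0* shaders misrender, while the 3.0* shader (not a free scale) is fine.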

In my real program I used the multiplied value to calculate the address in a texture.

  
uniform sampler2D texUnit0;
uniform float pixWidth;

void main( void )
{
   vec4 texel0 = texture2D(texUnit0, gl_TexCoord[0].xy + vec2(0.0 * pixWidth, 0.0));
   vec4 texel1 = texture2D(texUnit0, gl_TexCoord[0].xy + vec2(1.0 * pixWidth, 0.0));
   vec4 texel2 = texture2D(texUnit0, gl_TexCoord[0].xy + vec2(2.0 * pixWidth, 0.0));
   vec4 texel3 = texture2D(texUnit0, gl_TexCoord[0].xy + vec2(3.0 * pixWidth, 0.0));
   gl_FragColor = vec4(texel0.r, texel1.r, texel2.r, texel3.r);
}

but the color of texel2 was completely wrong (0 in most cases).

Sadly I never learned native GPU programming, but this really sounds like a problem with the free bit shift.
Fortunately ATI solved this problem with the new Catalyst 6.4 released today.
Now all examples run as they should!

Originally posted by ChiiChan:

Sadly I never learned native GPU programming, but this really sounds like a problem with the free bit shift.
Fortunately ATI solved this problem with the new Catalyst 6.4 released today.

You can only learn the native instructions by working for ATI or reverse-engineering the hell out of their chips. Since everything’s stored in floating-point, it’s really adding or subtracting 1, 2, or 3 from the exponent field.
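That last point is easy to check on the host side with a short Python sketch (the helper names `float_bits` and `add_to_exponent` are invented for illustration; this is plain IEEE-754, nothing ATI-specific): for a normal single-precision float, multiplying by 2^k is exactly adding k to the 8-bit exponent field.

```python
import struct

def float_bits(x: float) -> int:
    """Reinterpret a 32-bit float as its raw IEEE-754 bit pattern."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def add_to_exponent(x: float, k: int) -> float:
    """Add k to the 8-bit exponent field of a 32-bit float
    (valid for normal numbers that stay in range)."""
    bits = float_bits(x)
    exp = (bits >> 23) & 0xFF
    bits = (bits & ~(0xFF << 23)) | (((exp + k) & 0xFF) << 23)
    return struct.unpack('<f', struct.pack('<I', bits))[0]

# The sign and mantissa bits are untouched; only the exponent changes:
print(add_to_exponent(1.0, 1))    # 2.0   == 2 * 1.0
print(add_to_exponent(1.0, 2))    # 4.0   == 4 * 1.0
print(add_to_exponent(1.0, -3))   # 0.125 == 1.0 / 8
```

So a "multiply by 2, 4, 8, 1/2, 1/4, or 1/8" output modifier costs the hardware nothing: it is just an exponent increment or decrement folded into the instruction.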