
Radeon Problem



Lehm
06-01-2005, 04:48 PM
I have a vertex/fragment shader pair that compiles fine on GeForces but will not compile on a Radeon. I've tried it on a 9600 and an X800.

There's some commented-out stuff in there, but I thought it best to leave it just as it is.



varying vec4 eyePos;

void main(void)
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_TexCoord[1] = gl_MultiTexCoord1;
    gl_TexCoord[2] = gl_MultiTexCoord2;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    //gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
    //gl_FrontColor = vec4(1.0, 0.0, 0.0, 1.0); // Hard-code red for testing purposes

    // Fog stuff
    eyePos = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FogFragCoord = abs(eyePos.z / eyePos.w);
}

uniform sampler2D bTexture,tTexture,lTexture;

varying float maxC;
varying vec4 tColor1,tColor2,lColor;

void main(void)
{
    //gl_FragColor = vec4(0.5, 1.0, 1.0, 1.0) * gl_Color;
    //gl_FragColor = vec4(inColor, 1.0);

    //Percent = texture2D(tTexture, vec2(gl_TexCoord[1])).a;
    tColor1 = texture2D(bTexture, vec2(gl_TexCoord[0]));
    tColor2 = texture2D(tTexture, vec2(gl_TexCoord[1]));
    //lColor = gl_Color;
    //lColor = texture2D(lTexture, vec2(gl_TexCoord[2]));
    lColor = texture2D(lTexture, vec2(gl_TexCoord[2])) + (gl_Color - 0.5);
    maxC = 0;
    if (lColor.r > maxC)
        maxC = lColor.r;
    else if (lColor.g > maxC)
        maxC = lColor.g;
    else if (lColor.b > maxC)
        maxC = lColor.b;

    if (maxC > 1)
        lColor = lColor/maxC;
    gl_FragColor = vec4((vec3(tColor1) * (1.0 - tColor2.a) + vec3(tColor2) * tColor2.a) * lColor, tColor1.a);
    //gl_FragColor = gl_Color;
}

sqrt[-1]
06-01-2005, 08:17 PM
This should work (always specify the .0 when you mean a float, and you cannot write to varyings in a fragment shader...):



uniform sampler2D bTexture,tTexture,lTexture;

float maxC;
vec4 tColor1,tColor2,lColor;

void main(void)
{
    //gl_FragColor = vec4(0.5, 1.0, 1.0, 1.0) * gl_Color;
    //gl_FragColor = vec4(inColor, 1.0);

    //Percent = texture2D(tTexture, vec2(gl_TexCoord[1])).a;
    tColor1 = texture2D(bTexture, vec2(gl_TexCoord[0]));
    tColor2 = texture2D(tTexture, vec2(gl_TexCoord[1]));
    //lColor = gl_Color;
    //lColor = texture2D(lTexture, vec2(gl_TexCoord[2]));
    lColor = texture2D(lTexture, vec2(gl_TexCoord[2])) + (gl_Color - 0.5);
    maxC = 0.0;
    if (lColor.r > maxC)
        maxC = lColor.r;
    else if (lColor.g > maxC)
        maxC = lColor.g;
    else if (lColor.b > maxC)
        maxC = lColor.b;

    if (maxC > 1.0)
        lColor = lColor/maxC;
    gl_FragColor = vec4((vec3(tColor1) * (1.0 - tColor2.a) +
                         vec3(tColor2) * tColor2.a) * lColor.rgb,
                        tColor1.a);
    //gl_FragColor = gl_Color;
}

kingjosh
06-02-2005, 07:55 AM
To elaborate a bit on the previous post.

In the OpenGL Shading Language, you cannot write to a varying variable in a fragment shader. You may write to a varying variable in the vertex shader; its value is interpolated across the primitive, and the interpolated value is passed to the fragment shader.
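For example, a minimal pair (illustrative only; "interpColor" is a made-up name, not from the shaders above):

// Vertex shader: writing a varying is legal here.
varying vec4 interpColor;

void main(void)
{
    interpColor = gl_Color;                                  // written per vertex
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: the same varying is read-only here.
varying vec4 interpColor;

void main(void)
{
    gl_FragColor = interpColor;   // fine: reading the interpolated value
    //interpColor = vec4(1.0);    // error on a strict compiler: cannot assign to a varying
}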

The OpenGL Shading Language specification also requires a decimal point when assigning a literal value to a float. For example, "1" is an integer, while "1.0" is a float. If cross-platform shaders are your goal, ATI's compiler is much more strict and will give you a more portable shader. From what I see, you don't have a Radeon problem but a GeForce problem, as it is accepting ill-formed, out-of-spec code.
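For instance, under the GLSL 1.10 rules (an illustrative snippet):

float good = 1.0;   // correct: float literal
float bad  = 1;     // rejected by a strict compiler: "1" is an int, and
                    // GLSL 1.10 has no implicit int-to-float conversion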

If you wish to continue with an nVIDIA card and compiler, you may want to double-check your code with GLSL Validate (http://developer.3dlabs.com/downloads/index.htm), a simple parser that verifies shader code conforms to the spec and is therefore more likely to compile on different video cards.

Lehm
06-02-2005, 09:18 AM
Yeah, not having a Radeon kind of slows down debugging for it. So obviously I misinterpreted how to use varying.

Is the debugger in Shader Designer as good as GLSL Validate? I used that to fix my shaders; before, I was just doing it in Notepad.

ScottManDeath
06-02-2005, 09:37 AM
You can also use nvemulate (http://developer.nvidia.com/object/nvemulate.html) to enable strict shader portability warnings; this way I found some non-portable statements in my shaders. You can also have the driver save the shader info logs, the combined shader source, and the generated low-level ARB_vp/fp shaders into text files.

kingjosh
06-02-2005, 10:46 AM
GLSL Validate checks your shader syntax; it is not a debugger. It takes very little time to use, as it is quite simple. Even if you use nvemulate with the strict flag, it is still a good idea to run the shader through GLSL Validate, as the warning messages are often lacking in information.

Lehm
06-15-2005, 10:37 AM
I've got a new problem with Radeons. The code works properly now, but the performance is terrible. On my GeForce 6600 I get around 150 fps; on a Radeon X800 I get around 20. Clearly there is an issue, and I'm not even sure where to start looking for it.

olmeca
06-20-2005, 01:41 PM
The problem seems to be the conditionals. Try computing just one path; if the time is OK, try checking each path in turn (again without the conditionals).

Conditionals are poorly supported (or not supported at all) on ATI's cards.
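As a general pattern (an illustrative sketch, not taken from the shaders above; assumes x is a float and a, b, color are vec4), a branch like

if (x > 0.5)
    color = a;
else
    color = b;

can usually be rewritten with built-ins instead:

color = mix(b, a, step(0.5, x));   // step() yields 0.0 or 1.0, so mix() selects b or a
                                   // (note: picks a at x == 0.5 exactly, unlike the branch)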

olmeca
06-20-2005, 01:46 PM
if (lColor.r > maxC)
    maxC = lColor.r;
else if (lColor.g > maxC)
    maxC = lColor.g;
else if (lColor.b > maxC)
    maxC = lColor.b;

can be done more quickly this way:

maxC = max( lColor.r, lColor.g );
maxC = max( maxC, lColor.b );
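Or, equivalently, as a single expression:

maxC = max( max( lColor.r, lColor.g ), lColor.b );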

olmeca
06-20-2005, 01:48 PM
if (maxC > 1.0)
    lColor = lColor/maxC;

is quicker this way:

maxC = max( maxC, 1.0 );
lColor = lColor/maxC;
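Or folded into a single line:

lColor = lColor / max( maxC, 1.0 );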

olmeca
06-20-2005, 01:50 PM
Avoid using conditionals. Always prefer built-in functions like max, min, etc., as I showed above.

happy coding!

Lehm
06-27-2005, 10:22 AM
tColor1 = texture2D(bTexture, vec2(gl_TexCoord[0]));
tColor2 = texture2D(tTexture, vec2(gl_TexCoord[1]));

lColor = texture2D(lTexture, vec2(gl_TexCoord[2])) + (gl_Color - 0.5);

These lines appear to be the source of the problem. I'm not sure why just yet.