gl_LightSource[].diffuse fragment shader ATI bug?



SCOTTYC
11-12-2005, 08:41 PM
If I try to access the built-in gl_LightSource[0].diffuse parameter in a fragment shader, it comes back black. It works fine in the vertex shader, and I can pass it to the fragment shader in a varying variable. gl_LightSource[0].specular works fine in both fragment and vertex shaders. I am running the Catalyst 5.9 drivers on an ATI Radeon X800 Pro. Can anyone else confirm this?
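
For reference, the varying workaround looks like this (a minimal sketch; the name vDiffuse is mine):

// Vertex shader: gl_LightSource[0].diffuse reads correctly here,
// so copy it into a varying for the fragment shader.
varying vec4 vDiffuse;
void main()
{
    vDiffuse = gl_LightSource[0].diffuse;
    gl_Position = ftransform();
}

// Fragment shader: read the varying instead of gl_LightSource[0].diffuse.
varying vec4 vDiffuse;
void main()
{
    gl_FragColor = vDiffuse;
}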

canuckle
11-14-2005, 08:00 AM
Sounds like your bug might be related to this one:

http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=11;t=000954

SCOTTYC
11-14-2005, 08:05 AM
Ya, I thought so too, but I tried the solution proposed in the thread...
GLint active_programID = 0;
glGetIntegerv( GL_CURRENT_PROGRAM, &active_programID ); // query the currently bound program object
glUseProgram( active_programID ); // re-bind it so the driver refreshes the built-in state

...and it does not help.

canuckle
11-14-2005, 09:41 AM
Try calling glUseProgram() after every time the diffuse color is changed, as Humus suggested. If this turns out to be too costly, you might be stuck with using user-defined attributes for now =/
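
In other words, something like this after each light update (a sketch; prog stands in for your linked program object):

GLfloat diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
glLightfv( GL_LIGHT0, GL_DIFFUSE, diffuse ); // change the built-in light state
glUseProgram( prog );                        // re-bind so the shader picks up the new values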

SCOTTYC
11-14-2005, 07:24 PM
Actually, I am already doing that. In my app, the GL light positions, colors, etc. are updated every frame, and then every frame in my draw routine I call glUseProgram with my program object.

Also, I have tried using user-defined uniforms for the lights' diffuse color values. This has its own problems, which I'm not sure are more bugs or just me doing something wrong. This can get rather lengthy to describe, so please bear with me.

I started off thinking that I would pass in an 8-element uniform vec4 array that holds the colors of the 8 potential GL lights. The array in the fragment shader is called diffColor and it is declared like this:

uniform vec4 diffColor[8];

Then, in the main() function, I replaced the line:

diffuse += gl_LightSource[i].diffuse * nDotVP;

with:

diffuse += diffColor[i] * nDotVP;
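
For context, the surrounding loop presumably looks something like this (a sketch: the normal varying is assumed, and the nDotVP computation treats the lights as directional):

uniform vec4 diffColor[8];
varying vec3 normal;

void main()
{
    vec4 diffuse = vec4( 0.0 );
    for( int i = 0; i < 8; i++ )
    {
        float nDotVP = max( dot( normalize( normal ), normalize( gl_LightSource[i].position.xyz ) ), 0.0 );
        diffuse += diffColor[i] * nDotVP; // i is a run-time index into the array
    }
    gl_FragColor = diffuse;
}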

Now, this produces the correct lighting, but I went from about 500 fps to 1 fps. I couldn't believe it. So I tried something simple. I changed the line above to:

diffuse += diffColor[0] * nDotVP; // Changed i to 0

OK, now I'm back to 500 fps, and I am still getting correct lighting since there is only one light in my scene. So then I change the line to:

diffuse += diffColor[1] * nDotVP;

Now the object renders black again, as it should, since only light 0 is active. Still 500 fps, though. So I change it to:

diffuse += diffColor[2] * nDotVP;

Now I hit some bottleneck, because I go down to 1 fps. An index of 2 or anything above it makes the fps drop to 1.

One other thing I tried was not passing the array in as a uniform, but just declaring it with values in the main() body, like so:

vec4 diffColor[8];
diffColor[0] = vec4(1.0, 1.0, 1.0, 1.0);
diffColor[1] = vec4(0.0, 0.0, 0.0, 0.0);
diffColor[2] = vec4(0.0, 0.0, 0.0, 0.0);
...
diffColor[7] = vec4(0.0, 0.0, 0.0, 0.0);

This has the exact same effect as declaring it as a uniform. It appears to have something to do with accessing the array elements. Can anyone explain this to me? It seems like I should not be hitting any hardware limits with this.
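
If the slowdown really does come from indexing the array with a variable, one workaround might be to unroll the loop so every index is a compile-time constant (a sketch, reusing the assumed declarations from above; only the first two lights are written out):

uniform vec4 diffColor[8];
varying vec3 normal;

void main()
{
    vec3 n = normalize( normal );
    vec4 diffuse = vec4( 0.0 );
    // Every index into diffColor is now a compile-time constant, so the
    // fragment shader never indexes the array with a run-time value.
    diffuse += diffColor[0] * max( dot( n, normalize( gl_LightSource[0].position.xyz ) ), 0.0 );
    diffuse += diffColor[1] * max( dot( n, normalize( gl_LightSource[1].position.xyz ) ), 0.0 );
    // ...repeat with constant indices 2 through 7...
    gl_FragColor = diffuse;
}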

Thanks for reading my long explanation and any help is greatly appreciated.

SCOTTYC
11-18-2005, 09:44 AM
Can anyone help me with this one, or at least confirm what I'm seeing?
Thanks

BTW, is there a web page that lists known GL and GLSL bugs for ATI and NVIDIA drivers?

Humus
11-18-2005, 04:07 PM
Could you post the full shader? Also, try upgrading the drivers; we're at Catalyst 5.11 now. I know there were some recent changes to sparse arrays, which got rid of some problems with arrays of structs like gl_LightSource[]. Also, in the cases where you get 1 fps, are you getting software rendering? (Check the infoLog.)
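
For anyone unsure how to check that, a minimal sketch of dumping the program info log (GL 2.0 entry points; prog stands in for your linked program object):

/* assumes <stdio.h>, <stdlib.h>, and GL 2.0 headers */
GLint len = 0;
glGetProgramiv( prog, GL_INFO_LOG_LENGTH, &len );
if( len > 1 )
{
    char *log = (char *)malloc( len );
    glGetProgramInfoLog( prog, len, NULL, log );
    printf( "%s\n", log ); /* a software-fallback notice would show up here */
    free( log );
}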