Lighting too bright

What could make a scene brighter than it should be? (i.e. without shaders I get much darker results.)

I think I'm doing everything the right way, but since the result is too bright, I must be missing something.

I'm basing my code on the GLSL Orange Book for this, and I use the local viewer lighting model.

I suspected my normals or other vectors were not normalized, but I checked that they are. I also made sure the relevant values are clamped between 0 and 1.

Thanks in advance.

Do you implement the attenuation correctly? Can you post screenshots, your GLSL shaders, and your fixed-function setup?

I'm pretty sure my attenuation is fine. The code is below. I use exactly the same lights and material for both the non-shader and shader tests.

Okay, here they are. This is the directory where you can access them:

http://dagecko.free.fr/images

// vertex shader:
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // to clip-space
   vec4 ev4_pos = gl_ModelViewMatrix * gl_Vertex;
   ev_pos = vec3 (ev4_pos)/ev4_pos.w;
   v_to_l = gl_LightSource[0].position.xyz - ev_pos;
   dist = length (v_to_l);
   normalize (v_to_l);
   normal = gl_NormalMatrix * gl_Normal;
//   normal = normalize (normal);

// fragment shader:
   vec4 amb, dif, spe, amb_g;
   float pf;
   
   float dot_p = max (0., dot (normal,v_to_l));
   
   // I assume we're in a local viewer lighting model
   vec3 eye = -normalize (ev_pos);

   vec3 hv = normalize (v_to_l + eye);
   float dot_h = max (0., dot (normal,hv));
   
   if (dot_p == 0.0)
      pf = 0.0;
   else
      pf = pow (dot_h, gl_FrontMaterial.shininess);

   float att = 1./ (gl_LightSource[0].constantAttenuation +
	       gl_LightSource[0].linearAttenuation * dist +
	       gl_LightSource[0].quadraticAttenuation * dist * dist);
   
   amb = att * gl_LightSource[0].ambient * gl_FrontMaterial.ambient;
   amb_g = gl_LightModel.ambient * gl_FrontMaterial.ambient;
   dif = att * dot_p * gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse;
   spe = att * pf * gl_LightSource[0].specular * gl_FrontMaterial.specular;

// C code:
   ShaderMgr::UseShader (s_program);
   s_program.GetUniform ("depth_sampler").Uniform ((GLint)1);
   s_program.GetUniform ("tex_sampler").Uniform ((GLint)0);

   glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
   glViewport (0,0,800,600);
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity();
   gluPerspective (60, 4./3., 1.0, 100.);
   glMatrixMode (GL_MODELVIEW);
   glLoadIdentity();
   gluLookAt (4,4,4,0,0,0,0,1,0);
   glGetFloatv (GL_MODELVIEW_MATRIX, &view_matrix[0][0]);
   InvertMatrix (view_matrix, inv_view_matrix);
   
   glLightfv (GL_LIGHT0, GL_POSITION, l_pos);
   glLightfv (GL_LIGHT0, GL_AMBIENT, l_amb);
   glLightfv (GL_LIGHT0, GL_DIFFUSE, l_dif);
   glLightfv (GL_LIGHT0, GL_SPECULAR, white);

   glActiveTexture (GL_TEXTURE1);
   glBindTexture (GL_TEXTURE_2D, shadow_map_tex);
   glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE);
   glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LESS);
   glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_INTENSITY);

   glMatrixMode (GL_TEXTURE);
   glLoadIdentity();
   glTranslatef (.5,.5,.5);
   glScalef (.5,.5,.5);
   glMultMatrixf (bias_matrix);
   glMultMatrixf (light_projection_matrix);
   glMultMatrixf (light_view_matrix);
   glMultMatrixf (&inv_view_matrix[0][0]);
   glMatrixMode (GL_MODELVIEW);

   RenderScene();

   glActiveTexture (GL_TEXTURE1);
   glBindTexture (GL_TEXTURE_2D, 0);
   
   ShaderMgr::UseFixedPipeline();

From a first look at your vertex shader:
You are not correctly normalizing the v_to_l vector because you forget to store the result of the normalization:

normalize (v_to_l);

so your diffuse contribution is scaled and the half vector has an incorrect direction.
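
A minimal fix is to store the result of the normalization back into the varying:

v_to_l = normalize (v_to_l);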

While not entirely related, it would be better to normalize the vectors in the fragment program instead of the vertex program, because otherwise the interpolation between vertices may cause the vectors to shorten.

You're great! Each time I move from C++ to another language it's a pain for me (even for C)… And as is often the case, this was a dumb bug.

Yes, I did all the calculations in the fragment shader before, but I recently moved some into the vertex shader because it could speed things up a bit. Things are so slow on my GPU (GeForce FX) that I'm looking for any optimization I can do.

Could you explain your last sentence a bit more?

Note: I just uploaded a new snapshot and it still looks a bit brighter (just a bit). I'll take a deeper look at the shaders.


Things are so slow on my GPU (GeForce FX) that I'm looking for any optimization I can do.

On FX cards it will likely be advantageous to use a normalization cubemap (a cube texture set up in such a way that a lookup at coordinates (x,y,z) returns normalize(x,y,z) encoded in colors) instead of the normalize() function.
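
As a rough sketch (assuming a cube texture bound to a sampler named norm_cube that stores normalize(x,y,z) remapped from [-1,1] into [0,1] colors), the lookup replacing a normalize() call would look like:

uniform samplerCube norm_cube; // hypothetical normalization cubemap
// the lookup returns the normalized direction encoded as a color,
// so remap from [0,1] back to [-1,1] instead of calling normalize()
vec3 l = textureCube (norm_cube, v_to_l).xyz * 2.0 - 1.0;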


Could you explain your last sentence a bit more?

The vectors you output from the vertex program are interpolated linearly between vertices, so they may become unnormalized. For example: you output the normalized vector (0.707106, 0.707106, 0) from the first vertex and the normalized vector (-0.707106, 0.707106, 0) from the second one. Halfway between those vertices the interpolation results in the vector (0, 0.707106, 0), which is not normalized; its length is 0.707106. This makes the lighting darker at that point.
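
A minimal way to compensate, using the variable names from your shaders, is to re-normalize the interpolated varyings at the top of the fragment shader:

// interpolated varyings shorten between vertices, so renormalize per fragment
vec3 n = normalize (normal);
vec3 l = normalize (v_to_l);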


Note: I just uploaded a new snapshot and it still looks a bit brighter (just a bit). I'll take a deeper look at the shaders.

The image is not only a bit brighter; there is a spiral-like pattern on the teapot. You should probably display the individual components of the shader calculations and see where that pattern comes from; that may also be what causes the brighter image.

I generally never tell GL to normalize my normals because I'm certain they are all already normalized, and I do my best to avoid scaling in the transformations I use. But that might not hold under all conditions, so I'll have a look at normalization cubemaps.

I understand the second point now. This problem didn't show up with the fixed pipeline simply because the lighting calculations were done per vertex, not per fragment. Is that right? So I'll quickly move the calculations back to the fragment shader.

Sorry for the spiral; that's because I added a texture to the teapot. But I don't think that's why the scene is brighter.

Thanks, you've been a great help.


I understand the second point now. This problem didn't show up with the fixed pipeline simply because the lighting calculations were done per vertex, not per fragment.

Exactly.


Sorry for the spiral; that's because I added a texture to the teapot. But I don't think that's why the scene is brighter.

I currently cannot see any additional problem with the lighting part of the shader you posted here, assuming you are calculating a point light (before the shadow map is applied). Are the calculated factors (dif, spe, amb) combined together correctly?
You should probably try enabling the individual contributions separately in both the fixed-function setup and the shader and see which ones differ.
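
For example, a quick way to do that on the shader side (a debugging sketch, not something to keep in the final shader) is to output a single term and compare it against the fixed-function image with the other contributions disabled:

// debugging sketch: visualize only the diffuse term
gl_FragColor = dif;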

While maybe not completely related: I once had a problem on an nVidia card where part of the fixed-function state was not updated in the fragment shader until the next time a shader was bound.

While maybe not completely related: I once had a problem on an nVidia card where part of the fixed-function state was not updated in the fragment shader until the next time a shader was bound.
Yes, this problem still exists.
This should go into the wiki.

I had a deeper look at the shaders but I still don't see why my scene is brighter.

Here is the whole fragment shader:

uniform sampler2DShadow depth_sampler;
uniform sampler2D tex_sampler;

varying vec4 tex_coord;
varying vec3 ev_pos;	   // eye-view vertex position
varying vec3 normal;	   // transformed normal

void main (void)
{
   vec4 amb, dif, spe, amb_g;
   float pf;
   
   vec3 v_to_l = gl_LightSource[0].position.xyz - ev_pos;
   float dist = length (v_to_l);
   v_to_l = normalize (v_to_l);
 
   float dot_p = max (0., dot (normal,v_to_l));
   
   // I assume we're in a local viewer lighting model
   vec3 eye = -normalize (ev_pos);

   vec3 hv = normalize (v_to_l + eye);
   float dot_h = max (0., dot (normal,hv));
   
   if (dot_p == 0.0)
      pf = 0.0;
   else
      pf = pow (dot_h, gl_FrontMaterial.shininess);

   float att = 1./ (gl_LightSource[0].constantAttenuation +
	       gl_LightSource[0].linearAttenuation * dist +
	       gl_LightSource[0].quadraticAttenuation * dist * dist);
   
   amb_g = gl_LightModel.ambient * gl_FrontMaterial.ambient;
   amb = att * gl_LightSource[0].ambient * gl_FrontMaterial.ambient;
   dif = att * dot_p * gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse;
   spe = att * pf * gl_LightSource[0].specular * gl_FrontMaterial.specular;

   if (tex_coord.z > gl_DepthRange.near){
      vec4 s = shadow2DProj (depth_sampler, tex_coord);
      s = clamp (s, vec4 (0,0,0,0), vec4 (1,1,1,1));
      gl_FragColor = amb_g + (amb + dif + spe) * s;
   }
   else{
      gl_FragColor = amb_g + amb + dif + spe;
   }
}
gl_FragColor = amb_g + (amb + dif + spe) * s;

This is probably not what is calculated in the fixed-function pipeline with shadow maps, unless you do some multipassing. In the fixed-function pipeline you will get something like

gl_FragColor = (amb_g + amb + dif + spe) * s ;

or, if separate specular color is enabled

gl_FragColor = (amb_g + amb + dif ) * s + spe ;

This is because everything in the brackets is provided as the fragment color at the input to the texture environments, and with separate specular color, the specular addition is done after the texture applications.

Is there any difference in lighting intensity if you disable the application of the shadow map in both the FF and shader versions?

Some tips:
You do not need to clamp the s value into the [0,1] range; shadow2DProj should not return values outside that range.

If you simultaneously need the length of a vector and its normalized form, the following code may be faster than using separate length() and normalize() operations:

vec3 normalized_v_to_l = normalize (v_to_l);
float dist = dot (normalized_v_to_l, v_to_l);

I don't enable separate specular color for the moment. From what I know, I would need the secondary color to be enabled and set.

For the fragment color, I was pretty sure the global ambient has nothing to do with the shadow color. In the absence of light, the global ambient is still present, isn't it? (I need a good book about lighting; this is really getting confusing for me.)

Yes, there's still a difference, the same one, when I disable the shadow in both of them. So there's still something wrong with the lighting.

Thanks for the tips.

EDIT: Since you mentioned clamping, I just thought of something I'm going to try. I'm going to clamp each lighting component between 0 and 1 separately, and then the final result, to see if that could be the problem. That's actually the only 'problem' I can think of that would render a brighter scene.


For the fragment color, I was pretty sure the global ambient has nothing to do with the shadow color. In the absence of light, the global ambient is still present, isn't it?

In standard OpenGL lighting the global ambient color is added to the diffuse and ambient lighting from all enabled lights to form a single fragment color that is sent to the texture environment, so it is subject to the operations you do there. If you modulate that color with the result of the shadow map sampling, the global ambient is modulated too.

There is one perfectly correct situation in which lighting calculated per pixel can be brighter than lighting calculated per vertex: when your surface has big triangles and the light is close to the center of a triangle. In that case the lighting at the vertices is the same in both situations, but the lighting at the center of the triangle will be much brighter in the per-pixel case.

Ah, I don't know if this is the reason, but I'm in that situation: the floor is made only of big triangles, and the light is not far from its center. I'll try changing it to see what happens.

When using only the ambient term, the scenes are lit the same way, with no difference except that the shader version displays a shadow whereas the FP one doesn't. That's strange to me…

When I add the diffuse term, things go bad. The shader version is brighter.
So I tried to move the light further away, but that changed nothing. I made sure I use the same diffuse value in both.

Also, I would like to add that if I put the global ambient term inside the shadow-modulated lighting calculation, as you suggested, I get a black shadow, whereas if I do it as before (leaving it outside the calculation), the shadow looks more like in the FP version (more grey). This makes me guess I should leave it outside the calculation. It changes nothing about the difference in lighting between the two versions.

Do you apply the ambient color in another pass in the FF variant? Are you using the GL_ARB_shadow_ambient extension?

Can you send me your test program?

In the FF version I render in two passes: first, the ambient pass, then the shadow pass.

I don't use that extension; I only just now looked at its specs. I don't use texture combiners either. So I guess one of the errors comes from there.

I'll mail you the test code.

In the FF version I render in two passes: first, the ambient pass, then the shadow pass.

In that case the ambient term should stay outside the shadow calculation, like you had it.
