Positional light help

I’m trying to implement positional lighting for my cel-shading shader. Here is my code:

uniform vec3 light;
varying float intensity;

void main()
{
   vec3 norm = normalize(gl_NormalMatrix * gl_Normal);

   intensity = dot(light,norm);

   gl_FrontColor = gl_Color;

   gl_Position = ftransform();

}

This code works if the object and light are static; however, I need to apply it to moving objects as well. Right now the light moves with the object, so it’s always shading the same place. What’s the best way to write it so the light works with dynamic objects?

It’s very important that you keep the object and the light in the same coordinate system. Mixing them up is usually what causes that “light following the object” effect.

So, I assume the light is in world space, but the normal is not; it comes in in object space. Could that be it?
I see that you multiply the incoming normal by something I didn’t know about, gl_NormalMatrix. Looking it up on Google, I find:

/* first transform the normal into eye space and normalize the result */
normal = normalize(gl_NormalMatrix * gl_Normal);

So what we have here is a normal in eye space and a light in world space.
You could do several things to correct this; the easiest is to transform the light into eye space too (by multiplying it by the modelview matrix).
If, as I assume, the light is a direction rather than a point, then you only need to multiply it by the rotation part of the modelview.
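
In GLSL that could look like the sketch below. Note this assumes the light direction is given in the same space as the incoming vertices, so the modelview puts it into eye space along with the normal; if your light is defined in world space and the object has its own transform baked into the modelview, you would multiply by the camera (view) transform only, which you would have to pass in yourself.

uniform vec3 light;          // light direction, assumed to be in the same space as gl_Vertex
varying float intensity;

void main()
{
   vec3 norm = normalize(gl_NormalMatrix * gl_Normal);

   // w = 0.0 drops the translation, so only the rotation/scale part of the
   // modelview is applied and the direction ends up in eye space
   vec3 lightDir = normalize(vec3(gl_ModelViewMatrix * vec4(light, 0.0)));

   intensity = dot(lightDir, norm);

   gl_FrontColor = gl_Color;
   gl_Position = ftransform();
}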

Toni

I tried multiplying the light by the modelview matrix, but it didn’t work out very well. Here is my code from before I switched to the shader; basically, I need to do exactly this in the shader to achieve the same positional lighting effect.

 M3DMat44f mat = fme.GetMatrix();

	M3DVec3f vecToLight;
	for (int i = 0; i < model->m_nNumVertices; i++)
	{
		M3DVec3f temp;
		temp.x = model->m_pVertices[i].data[0];
		temp.y = model->m_pVertices[i].data[1];
		temp.z = model->m_pVertices[i].data[2];

		// transform vertex by the frame's matrix
		M3DVec3f finalVec;
		m3dTransformPt(finalVec, mat, temp);

		// get vector to light from the vertex
		m3dVecSub(vecToLight, lightAngle, finalVec);
		m3dVecNormalize(vecToLight);

		M3DVec3f norm;
		norm.x = model->m_pNormals[i].v[0];
		norm.y = model->m_pNormals[i].v[1];
		norm.z = model->m_pNormals[i].v[2];

		// transform normal by the frame matrix
		M3DVec3f finalNorm;
		m3dTransformVec(finalNorm, mat, norm);

		TmpShade = m3dVecDotProduct(vecToLight, finalNorm);

		// Clamp The Value to 0 If Negative
		if (TmpShade < 0.0f)
			TmpShade = 0.0f;	

		*(shade_value + index) = TmpShade;
		index++;
	} 

Try to do all your calculations in light view coordinates.

I think I’m not understanding the math correctly. I assume that in order to transform the light position, all I need to do is multiply it by gl_ModelViewMatrix? Right now I either get an effect where the light position stays with the object, or, as in the code below, the light stays with the camera.

uniform vec3 light;
varying float intensity;

void main()
{

   vec3 lightDir = light;

   //lightDir = gl_ModelViewMatrix  * (vec4(lightDir,0.0));
   //vec3 lightDir3 = (vec3(lightDir));
   
   vec4 ecPos = gl_ModelViewMatrix * gl_Vertex;
   vec3 ecPos3 = (vec3(ecPos));

   vec3 norm = normalize(gl_NormalMatrix * gl_Normal);

   vec3 vToLight = light - ecPos3;

   normalize(vToLight);

   intensity = dot(vToLight,norm);

   gl_FrontColor = gl_Color;


   gl_Position = ftransform();
  
}

This code is pretty much the same as the C++ code above, so I think there is some GLSL-specific stuff I’m not getting.

Hi,

It looks to me like you’re mixing your spaces. You’ve got a uniform for your light position, which I presume is in world space, and you’re transforming your normal with gl_NormalMatrix, which is the inverse transpose of the (upper 3x3 of the) modelview matrix. The normal matrix is designed for use with lights in eye space, not world space, so that may be (near) the root of your trouble.

In the fixed-function pipe, when you set the light position with the OpenGL API, the position you supply is transformed by the current modelview matrix, just like points, so it ends up in eye space along with everything else. With the programmable pipe, you can use whatever space you like; just be careful not to mix them up.

I don’t think there’s a clear advantage of one space over the other. I just use whichever makes sense for what I’m doing - whichever happens to be more convenient and/or efficient.
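
For example, a positional light handled entirely in eye space could look like the sketch below, where eyeLightPos is a hypothetical uniform that the application fills with the light position already multiplied by the camera (view) transform, so the light, the vertex and the normal are all in eye space before you compare them. (Also note that GLSL’s normalize() returns its result rather than modifying its argument.)

uniform vec3 eyeLightPos;   // hypothetical uniform: light position pre-transformed to eye space by the application
varying float intensity;

void main()
{
   // vertex position and normal, both in eye space
   vec3 ecPos3 = vec3(gl_ModelViewMatrix * gl_Vertex);
   vec3 norm   = normalize(gl_NormalMatrix * gl_Normal);

   // normalize() returns the normalized vector; it does not modify its argument in place
   vec3 vToLight = normalize(eyeLightPos - ecPos3);

   intensity = dot(vToLight, norm);

   gl_FrontColor = gl_Color;
   gl_Position = ftransform();
}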

Sean
