Light in object space

Hello everyone,
I wrote a simple GLSL diffuse vertex lighting shader to test my rendering framework, and for some reason either the light position gets transformed into object space or glMultMatrixf affects the normals in a weird way, because the entire lighting is always in object space. If I put the point light at ( 10, 0, 0 ), then no matter where the object is, it always receives the light from its +x axis. Likewise, if the light is at ( 0, 0, 0 ), it is treated as if it were at the origin of the object, so the dot product is always negative even though the object may be at, say, ( 0, 10, 0 ) and should be receiving the light from below. I tested it with a GLUT sphere to rule out my normal calculations, but I got the same problem.

I’d appreciate it if anyone can help me out here.

Thanks.

Vertex Program


varying float Diffuse;
uniform vec3 lightPosition;

void main(void)
{
	gl_Position = ftransform();
	vec3 Normal = gl_NormalMatrix * gl_Normal;
	vec3 vPosition = ( gl_ModelViewMatrix * gl_Vertex ).xyz;
	vec3 Light 	= normalize( lightPosition - vPosition );
	Diffuse 	= max( dot(Normal, Light), 0.0 );
}

Fragment Program


varying float Diffuse;

void main(void)
{
	gl_FragColor = Diffuse * vec4(0,0,1,1);
}

=================


void
Renderer::_drawEntity( Entity* entity )
{
	//Apply transformation
	glPushMatrix();
	glMultMatrixf( reinterpret_cast<float*>( entity->getTransform() ) );

		//Apply shader
		uInt shader = entity->getShader();
		if( shader != NO_SHADER )
		{
			//Don't bother calling OpenGL if we're gonna use the previous shader
			if( shader != activeShader_ )
			{
				glUseProgram( shader );
				activeShader_ = shader;
			}

			Map<String, ParameterInfo> parameters = entity->getShaderParameters();
			Map<String, ParameterInfo>::iterator iter;

			for( iter = parameters.begin(); iter != parameters.end(); iter++ )
			{
				switch( iter->second.type )
				{
				case PT_3FV:
					{
					float values[3] = { iter->second.value[0], 
										iter->second.value[1], 
										iter->second.value[2] };

					glUniform3fv( glGetUniformLocation( shader, 
														iter->first.c_str() ), 
														1, 
														values );
					}
					break;
				case PT_F:
					{
					float values[1] = {iter->second.value[0]};
					glUniform1fv( glGetUniformLocation( shader,
														iter->first.c_str() ),
														1,
														values );
					}
					break;
				}
			}
		}

		else
		{
			if( activeShader_ != 0 )
			{
				activeShader_ = 0;
				glUseProgram(0);
			}
		}

		//Clear and apply texture
		uInt diffuse = entity->getDiffuseMap();
		glBindTexture( GL_TEXTURE_2D, diffuse );
		if( diffuse != NO_TEXTURE && shader != NO_SHADER )
		{
			int dUniformLocation = glGetUniformLocation( shader, "texture0");
			glUniform1i( dUniformLocation, 0);
		}

	if( entity->isGLUTSphere() )
	{
		glutSolidSphere( 1, 10, 10 );
	}

	else
	{
		//For each mesh...
		Mesh* mesh = entity->getMesh();
		Vector<Polygon>* polygons = mesh->getPolygonList();
		Vector<Vertex>*  vertices = mesh->getVertexList();

		//For each polygon of the mesh...
		for( uInt j = 0; j < polygons->size(); j++ )
		{
			Polygon polygon		 = (*polygons)[j];
			Vector<int>* indices = polygon.getIndices();
			Vec4D faceNormal = polygon.getNormal();

			glBegin( GL_QUADS );

			//For each vertex in each polygon of the mesh...
			for( unsigned int k = 0; k < indices->size(); k++ )
			{
				Vertex v = (*vertices)[(*indices)[k]];

				//A ternary with an int and a void operand does not compile; use a plain if
				if( !outline )
				{
					glColor3f( v.color.x, v.color.y, v.color.z );
					glTexCoord2f( v.texCoord.x, v.texCoord.y );
				}
				glVertex3f( v.position.x, v.position.y, v.position.z );
				glNormal3f( v.normal.x, v.normal.y, v.normal.z );
			}

			glEnd();
		}
	}
	glPopMatrix();
}


Why don’t you transform the light position with the modelview matrix? You need to have vPosition and lightPosition vectors in the same space before doing any operation between them.

Yes, I obviously forgot to do that, and to normalize the normal, but that is not the problem.


varying float Diffuse;
uniform vec3 lightPosition;

void main(void)
{
	gl_Position 	= ftransform();
	vec3 Normal 	= normalize( gl_NormalMatrix * gl_Normal );
	vec3 vPosition	= ( gl_ModelViewMatrix * gl_Vertex ).xyz;
	//uniforms are read-only in GLSL, so keep the transformed position in a local
	vec3 lightPos	= ( gl_ModelViewMatrix * vec4( lightPosition, 1.0 ) ).xyz;
	vec3 Light		= normalize( lightPos - vPosition );
	Diffuse 		= max( dot(Normal, Light), 0.0 );
}

Just a remark, and maybe it will not solve your problem: when you draw your mesh, you call glNormal after glVertex. You must call glNormal before glVertex, otherwise all your vertices may end up with the wrong normal, because in immediate mode glVertex captures whatever normal is current at the time of the call.
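
For example (a sketch reusing the names from your loop), the inner loop could read:

	//For each vertex in each polygon of the mesh...
	for( unsigned int k = 0; k < indices->size(); k++ )
	{
		Vertex v = (*vertices)[(*indices)[k]];

		//Set the per-vertex state first...
		glNormal3f( v.normal.x, v.normal.y, v.normal.z );
		if( !outline )
		{
			glColor3f( v.color.x, v.color.y, v.color.z );
			glTexCoord2f( v.texCoord.x, v.texCoord.y );
		}

		//...then emit the vertex, which captures the current normal
		glVertex3f( v.position.x, v.position.y, v.position.z );
	}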

Thanks for pointing that out; that explains the problems I had with the meshes, but it doesn’t fix the lighting issue.

The ModelView matrix takes you from object space to view space, so the light position you feed through it is treated as being in object space, and you get object-space lighting. If you want view-space lighting, do not transform the light by ModelView in the shader. Instead, transform it to view space yourself, either in application code or by passing a light-to-view matrix to the shader.
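
For example, a minimal sketch of the “in code” option, assuming you keep a copy of the camera (view) matrix on the application side; transformPoint and viewMatrix are hypothetical names, not part of your framework:

	//Light position in world space
	float lightWorld[3] = { 10.0f, 0.0f, 0.0f };
	float lightEye[3];

	//lightEye = viewMatrix * lightWorld, where viewMatrix holds only the
	//camera transform, with no per-object matrix multiplied in
	transformPoint( lightEye, viewMatrix, lightWorld );

	//Upload the eye-space position; the shader then compares it against
	//vPosition, which gl_ModelViewMatrix already puts in eye space
	glUniform3fv( glGetUniformLocation( shader, "lightPosition" ), 1, lightEye );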

Oh that’s right, and setting the light using glLight() transforms it to view space as well.

dodgemir, yes, I should have been more precise. As lord crc said, you don’t have to transform the light with the same matrix that transforms the mesh if the light doesn’t call for it; it depends on what you want. Setting the light through the OpenGL API with glLight does transform the light position by the current modelview matrix (the one in effect when the light position is set in the OpenGL program) before exposing it to the shader as a built-in uniform (according to the GLSL spec).
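
In fixed-function terms (a sketch of standard OpenGL behavior, not your framework’s code), that looks like this:

	glMatrixMode( GL_MODELVIEW );
	glLoadIdentity();
	//...apply only the camera (view) transform here, e.g. with gluLookAt...

	//glLightfv multiplies the position by the modelview matrix that is
	//current at the time of this call, so the stored value is in eye space
	GLfloat lightPos[4] = { 10.0f, 0.0f, 0.0f, 1.0f };	//w = 1: point light
	glLightfv( GL_LIGHT0, GL_POSITION, lightPos );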

Doesn’t it transform with the inverse-transpose of the upper 3x3 of modelview instead?

No, that is only relevant for transforming normals (it is what gl_NormalMatrix is). The light position is transformed by the whole 4x4 modelview matrix.

So if it’s a point light at B(x, y, z), then all I need to do is multiply B(x, y, z) by the current view matrix?

If you want to keep passing the light position through your custom uniform, you have to pre-multiply it by the proper modelview matrix, the one that transforms the light position from object space to eye space.
For example, meshes are placed and oriented in the scene by the modelview matrix, which at that point holds the transform into world space. Then the camera position and orientation affect the whole scene (the view transformation): all visible objects are transformed, including lights. So after computing the camera matrix, you transform the light position (assuming it is already in world space) by the camera matrix before giving it to the shader program. You need to do this because the modelview matrix given to the vertex shader will be the one that transforms the meshes; see the sketch below.
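
In terms of your _drawEntity code, that could look something like this (a sketch; cameraMatrix and multiplyMatrixPoint are hypothetical names):

	//Once per frame: bring the world-space light into eye space using the
	//camera matrix alone, before any per-entity transform is applied
	float lightWorld[3] = { 0.0f, 10.0f, 0.0f };
	float lightEye[3];
	multiplyMatrixPoint( lightEye, cameraMatrix, lightWorld );

	//Per entity: the modelview the shader sees is camera * model, so the
	//vertices end up in eye space, matching lightEye
	glLoadMatrixf( cameraMatrix );
	glMultMatrixf( reinterpret_cast<float*>( entity->getTransform() ) );
	glUniform3fv( glGetUniformLocation( shader, "lightPosition" ), 1, lightEye );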
Hoping I am clear, do not hesitate to ask more questions! :slight_smile:

Yes, and my question is: if the light is already in world coordinates, then the modelview matrix = the view matrix, correct?

if the light is already in world coordinates, then the modelview matrix = the view matrix, correct?

Not necessarily. It seems you did not quite follow my last post: what matters is the modelview matrix at the moment you transform the light. The way your question is formulated, I cannot answer it with a yes or a no.

If a vertex coordinate is in the world coordinate system, what would I need to multiply it by to draw it properly in the viewport? Isn’t it the view matrix and then the projection matrix?

Exactly! :slight_smile:

So…

with a small addition: “…to transform the light into eye space.”
