Easier method for lights in vertex program?

When dealing with a single, static model, lights are no problem. The light vector is calculated as the difference between the untransformed light position and the untransformed vertex position. Nothing complicated at all, eh?
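In other words, roughly this (a minimal sketch, with a made-up Vec3 type):

struct Vec3 { float x, y, z; };

// Both positions are already in the same (model) space, so the light vector
// is just a per-vertex subtraction.
Vec3 LightVector (const Vec3 &lightPos, const Vec3 &vertexPos)
{
    Vec3 v = { lightPos.x - vertexPos.x,
               lightPos.y - vertexPos.y,
               lightPos.z - vertexPos.z };
    return v;
}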

However, trouble begins when one is dealing with multiple models. Since each model has its own transformation matrix to move it into position in the scene, one has to make sure to transform the position of the light to be correct relative to the model. Somehow, I’ve been having an extremely difficult time with this.

My solution was to use a program matrix that contains the position and orientation of the mesh itself, ignoring the perspective and the camera’s position and orientation. I take the inverse of that matrix in the vertex program and multiply it by the light position to get the light’s position relative to the model. Here’s some code:

When I’m rendering each model:

// Model matrix only (column-major, as glLoadMatrixf expects): identity
// rotation with the mesh position in elements 12-14.
float afMatrix[16] = {1.0f, 0.0f, 0.0f, 0.0f,
					  0.0f, 1.0f, 0.0f, 0.0f,
					  0.0f, 0.0f, 1.0f, 0.0f,
					  m_afPosition[0], m_afPosition[1], m_afPosition[2], 1.0f};

// Load it into a program matrix so the vertex program can read its inverse.
glMatrixMode (GL_MATRIX10_ARB);
glLoadMatrixf (afMatrix);

glMatrixMode (GL_MODELVIEW);

And a snippet from the vertex program:

# Transform the light position by the inverse program matrix
# (iIPM0 holds the four rows of that inverse).
DP4 tLightPos.x, iIPM0[0], iLightPos;
DP4 tLightPos.y, iIPM0[1], iLightPos;
DP4 tLightPos.z, iIPM0[2], iLightPos;
DP4 tLightPos.w, iIPM0[3], iLightPos;

# Per-vertex light vector in model space.
SUB tLightPos, tLightPos, iPos;

Where iIPM0 is the inverse of the program matrix loaded above, iLightPos is the untransformed light position, and iPos is the untransformed vertex position.

Is there a way to do this where I don’t have to create a whole new matrix just so I can reliably transform the light position?

More code available upon specific request. . .

Why not just use the inverse model matrix, which is available at no cost in a vertex program?

You are making this more difficult than it is.

Just convert your light positions and directions into model space before submitting them to the vertex program. (This saves calculating the same data for every vertex.)

Something like the following (assuming light data is in world space):

//Setup variables
Matrix4 worldToModel;
Matrix3 worldToModel3x3;

// modelview^-1 * view = model^-1, i.e. the world-to-model transform.
worldToModel = modelView->GetInverse() * view->GetCamera()->GetViewTransform();
worldToModel.GetMatrix3(worldToModel3x3);


//Load the transformed light positions
switch(...)
{

  case(RP_ModelSpace_LightDirection):
    // Directions only need the rotation part.
    vector3 = worldToModel3x3 * light.ldirection;
    vp->SetLocalConstant(loadPos, vector3);
    break;

  case(RP_ModelSpace_LightPosition):
    vector3 = worldToModel * light.lposition;
    vp->SetLocalConstant(loadPos, vector3);
    break;

  case(RP_ModelSpace_CameraPosition):
    vector3 = worldToModel * view->GetCamera()->GetPosition();
    vp->SetLocalConstant(loadPos, vector3);
    break;
}
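Without an engine math library, the same idea in raw OpenGL terms might look roughly like the sketch below. It assumes a rigid model transform (rotation + translation only, stored column-major as glLoadMatrixf expects), that the ARB_vertex_program entry points are already loaded, and the helper name and parameter index are made up.

// Hypothetical helper: bring a world-space light position into model space on
// the CPU, then hand it to the vertex program as a local parameter.
void SetModelSpaceLight (const float model[16],   // model matrix, column-major
                         const float worldLight[3],
                         GLuint paramIndex)
{
  // Inverse of a rigid transform: subtract the translation, then apply the
  // transposed 3x3 rotation.
  float p[3] = { worldLight[0] - model[12],
                 worldLight[1] - model[13],
                 worldLight[2] - model[14] };

  float objLight[3] = {
    model[0]*p[0] + model[1]*p[1] + model[2]*p[2],
    model[4]*p[0] + model[5]*p[1] + model[6]*p[2],
    model[8]*p[0] + model[9]*p[1] + model[10]*p[2] };

  glProgramLocalParameter4fARB (GL_VERTEX_PROGRAM_ARB, paramIndex,
                                objLight[0], objLight[1], objLight[2], 1.0f);
}

// In the vertex program the value is then just:
//   PARAM objLightPos = program.local[0];

This only needs to run once per model, right before drawing it, which is still far cheaper than doing the same multiply per vertex.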

[This message has been edited by sqrt[-1] (edited 10-28-2003).]

Ostol, really, I don’t understand your problem… You specify the lights in eye space and then transform them with the inverse modelview matrix, and you get your light in object space. It worked for me.
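One way to get the eye-space position is to let GL store it for you through the normal light state; a rough sketch (hypothetical helper name and made-up values):

// Specify the light while only the camera/view transform is on the modelview
// stack, so OpenGL stores the position in eye space.
GLfloat worldLightPos[4] = { 10.0f, 20.0f, 5.0f, 1.0f };   // made-up values

glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
ApplyCameraTransform ();   // hypothetical: applies the view transform only
glLightfv (GL_LIGHT0, GL_POSITION, worldLightPos);

// Then, in the vertex program, something along these lines (temporaries omitted):
//   PARAM mvinv[4] = { state.matrix.modelview.inverse };
//   PARAM eyeLight = state.light[0].position;
//   DP4 objLight.x, mvinv[0], eyeLight;
//   DP4 objLight.y, mvinv[1], eyeLight;
//   DP4 objLight.z, mvinv[2], eyeLight;
//   DP4 objLight.w, mvinv[3], eyeLight;
//   SUB lightVec, objLight, vertex.position;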

Indeed, I don’t understand my problem either.

I’ve found that using the inverse modelview matrix to transform my light position certainly does work, but only until I move or rotate the camera. Once I do that, the lighting becomes visibly wrong. It makes sense, too, since the camera’s orientation and position affect that same matrix. What I did with the program matrix was to isolate the model’s position and orientation.

sqrt[-1]'s suggestion is basically what I am already doing, though not in the vertex program. I’d have to recalculate the light position for each model, but that’s certainly better than doing it for each vertex. I guess I’ll go with it. . . Thanks. . .