lighting calculation in eye coordinates

Hello forum,

I came to know that lighting is usually calculated in eye coordinates. In that case we have to transform the light position into eye coordinates by multiplying it with the model-view matrix. What confuses me is when this multiplication has to be done.

When I send the light position from the application to the shader, I do it as follows:



glm::vec4 lightPosition = glm::vec4(....);   // light position in world space

// glUniform4fv, not glUniformMatrix4fv: we are uploading a vec4, not a matrix,
// already multiplied by the model-view matrix on the CPU
glUniform4fv((*shader)("lightPosition"), 1, glm::value_ptr(modelView * lightPosition));


But I also came across a code snippet (in a recent edition of the OpenGL SuperBible) where the light position is defined inside the shader and never goes through the model-view transformation.

I am puzzled here. What am I missing conceptually?

Thanks

Depends on what you want to achieve.

Many sample programs use hard-coded eye-space light positions in the shader code to keep the examples simple.
Sending the light position to the shader in whatever frame of reference, or even transforming it in the shader,
blows up the size of the code. Explaining eye-space transformations is unnecessary and out of place in a tutorial
chapter explaining the Blinn-Phong lighting model.
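
For illustration, here is roughly what such a tutorial shader computes, written with GLM so it mirrors the GLSL one-to-one. This is only a sketch: the light position, colors and shininess are made-up values, and P and N are assumed to be the surface position and normal already in eye space.

#include <glm/glm.hpp>

// Blinn-Phong with a hard-coded eye-space light, as tutorial shaders often do
glm::vec3 blinnPhong(const glm::vec3 &P, const glm::vec3 &N)
{
    const glm::vec3 lightPosEye(0.0f, 10.0f, 5.0f); // hard-coded, no transform needed
    const glm::vec3 diffuseColor(0.8f);
    const glm::vec3 specularColor(0.3f);
    const float shininess = 64.0f;

    glm::vec3 L = glm::normalize(lightPosEye - P); // direction to the light
    glm::vec3 V = glm::normalize(-P);              // direction to the eye (the origin in eye space)
    glm::vec3 H = glm::normalize(L + V);           // half vector

    float diff = glm::max(glm::dot(N, L), 0.0f);
    float spec = glm::pow(glm::max(glm::dot(N, H), 0.0f), shininess);
    return diff * diffuseColor + spec * specularColor;
}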

In a real application, you might want to keep track of light positions in world space in the application and transform
them to eye space when sending them to the shader; again, it depends on what you want to achieve.
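
A sketch of that approach, assuming a GLM view matrix named view and a cached uniform location named lightPosLocation (both names are illustrative), with a current GL context:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// The application keeps the light in world space...
glm::vec4 lightPosWorld(10.0f, 10.0f, 4.0f, 1.0f);

// ...and converts it to eye space only at upload time. A light has no model
// matrix of its own here, so the view matrix alone does the job.
glm::vec4 lightPosEye = view * lightPosWorld;
glUniform4fv(lightPosLocation, 1, glm::value_ptr(lightPosEye));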

That’s how fixed-function lighting works, but you don’t have to use eye coordinates. However, you do (realistically) have to perform lighting calculations in a coordinate system which is related to world space by an affine transformation. That’s why we have separate model-view and projection matrices; performing lighting calculations in a projected space will tend to produce clearly wrong results.

You could perform lighting calculations in object space, but then you’d have to transform the light position into object space (along with anything else you need, e.g. eye position for specular reflection, reference axes for environment maps, etc). If different objects have different coordinate systems, that’s at least as much work as transforming all objects into eye space (rather than transforming them directly to clip space), and often more.
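
For comparison, a sketch of the object-space route, assuming a per-object model matrix named model; note that this has to be repeated for every object with its own coordinate system:

// The inverse of this object's model matrix takes world space to object space.
// Every object needs its own transformed copy of the light (and of the eye
// position, environment-map axes, and so on).
glm::vec4 lightPosObject = glm::inverse(model) * lightPosWorld;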

Also, the fact that the eye position is always the origin in eye space helps simplify the calculations slightly.
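
Spelled out side by side (variable names are illustrative; the eye-space case is exactly what the Blinn-Phong sketch above uses):

// Eye space: the camera sits at the origin, so the view vector is just -P
glm::vec3 viewDirEye = glm::normalize(-posEye);

// World space: the camera position has to be passed in as an extra input
glm::vec3 viewDirWorld = glm::normalize(cameraPosWorld - posWorld);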

Whichever space you use, all of the vectors (vertex position, eye position, light position/direction, etc) must be in the same space. But the lighting calculations are invariant under translation and rotation (and can easily be adapted to handle other affine transformations), so the choice of coordinate space tends to be dictated by convenience.
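
One concrete instance of that adaptation: if the model-view matrix contains non-uniform scaling, normals have to be transformed with the inverse-transpose (the so-called normal matrix) rather than with the matrix itself. A GLM sketch:

// For pure rotation + translation, mat3(modelView) works for normals as-is;
// with non-uniform scaling, the inverse-transpose keeps normals perpendicular
// to the surface.
glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelView)));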