Part of the Khronos Group
OpenGL.org


Thread: View matrix for point light calculation should be inverse?

  1. #1 Intern Contributor (Joined Mar 2011, Israel, 56 posts)


    Hi all. I have the following question. I am coding point lights and I want to work in camera space, so I need to transform the light position by the camera (view) matrix. But I don't understand: should it be the same inverted camera matrix I use to move the camera, or should I pass the camera (view) matrix without inverting it?
    Thanks.

  2. #2 Senior Member OpenGL Pro (Joined Apr 2010, Germany, 1,135 posts)
    Why would you use the inverse? The inverse view matrix is only needed if you want to go from eye-space back to world-space. So if you want to do your lighting calculations in eye-space, simply transform every entity involved in the calculation into eye-space and give it a go.
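    As a sketch of that transform (Python/numpy; the translation-only camera and the positions are made up for illustration — real view matrices usually carry a rotation as well):

```python
import numpy as np

def make_view_matrix(eye):
    """View matrix for a camera translated to `eye` with no rotation.
    It is the inverse of the camera's translation, so it translates
    every point by -eye."""
    view = np.eye(4)
    view[:3, 3] = -np.asarray(eye, dtype=float)
    return view

view = make_view_matrix([10.0, 0.0, 0.0])     # camera sits at x = 10
light_world = np.array([4.0, 2.0, 0.0, 1.0])  # homogeneous light position
light_eye = view @ light_world                # light position in eye-space
# the light ends up 6 units to the camera's left: (-6, 2, 0)
```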

  3. #3 Intern Contributor (Joined Mar 2011, Israel, 56 posts)
    Quote Originally Posted by thokra View Post
    So if you want to do your lighting calculations in eye-space, simply transform every entity involved in the calculation into eye-space and give it a go.
    That is what I don't get. Do you mean I should take the camera position (its model matrix) and use it to transform every entity?

  4. #4 Senior Member OpenGL Pro (Joined Apr 2010, Germany, 1,135 posts)
    What you mean is to take the camera position (its model matrix) and use it to transform every entity ?
    Uhm, no. You want to perform calculations on entities which all reside in the same space; otherwise you'll get incorrect results. This has nothing to do with the world-space camera position. The result of the transformation into eye-space is that the camera is implicitly located at the origin, i.e. (0, 0, 0), and all other objects are then defined relative to the camera's coordinate system.

    Do you know how the transformation pipeline works and what the purpose of it is?


    Edit: To explain it a little better, think of a point light at world-space position L, a camera at world-space position C and some world-space point P in space being lit by the light and looked-at by the camera. Let N be the normal at P.

    In world-space the inverse light-incidence vector I is simply

    I_world = L_world - P_world

    and the inverse viewing direction V is simply

    V_world = C_world - P_world

    In a Phong shader you could now use I, V and N to determine the specular reflection at P. However, for this world-space calculation you actually need all the world-space positions. In eye-space this is no longer the case. If you transform P and L into eye-space using the view matrix, and N using the inverse transpose of the view matrix, you already know the camera position implicitly: it is simply at the origin. The I vector is still calculated as

    I_eye = L_eye - P_eye

    V, however, comes down to

    V_eye = C_eye - P_eye = (0, 0, 0) - P_eye = -P_eye

    See the difference? In any case, it doesn't matter in which space you calculate; just be consistent. In some cases you might save some data, e.g. you don't need a camera position in eye-space, but on the other hand you have three additional transformations and have to think a little differently. All spaces have their purposes, but mathematically it doesn't matter as long as all entities in the calculation are in the same space.
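    A quick numeric check of the two routes (numpy; the positions are hypothetical, and the camera is translation-only so that direction vectors come out identical in both spaces):

```python
import numpy as np

# Hypothetical world-space setup: light L, camera C, lit surface point P.
L = np.array([0.0, 5.0, 0.0])
C = np.array([3.0, 1.0, 4.0])
P = np.array([1.0, 0.0, 0.0])

# World-space route: both vectors need explicit positions.
I_world = L - P            # inverse light incidence
V_world = C - P            # inverse viewing direction

# Eye-space route, with a translation-only view matrix (no rotation,
# so direction vectors are unchanged by the transform).
P_eye = P - C              # view matrix applied to P
L_eye = L - C              # view matrix applied to L
I_eye = L_eye - P_eye      # same formula as in world-space
V_eye = -P_eye             # the camera sits at the eye-space origin

assert np.allclose(I_eye, I_world)
assert np.allclose(V_eye, V_world)
```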

    HTH.
    Last edited by thokra; 07-12-2012 at 04:42 AM.

  5. #5 Intern Contributor (Joined Mar 2011, Israel, 56 posts)
    I do understand, but it seems you don't get what I am asking. I just ran a test: I passed the camera matrix to the light shader without inverting it. Now, if I move the camera, say, to the right of the lit object in the middle of the scene, the lights appear smaller and smaller and then abruptly disappear from the object's surface once the camera reaches some offset on the X axis (in this case just 200 units), even though the camera still looks at the surface. But if I pass the inverse camera matrix, the one I usually pass into the vertex shader to build the view-projection matrix, the light positions stay correct. So what you just said doesn't look right, unless we have a misunderstanding here.

  6. #6 Senior Member OpenGL Pro (Joined Apr 2010, Germany, 1,135 posts)
    camera inverse matrix
    Ah, now I get it. What you call the camera inverse matrix is what most people call the view matrix. I thought you were asking about the inverse view matrix, which would definitely be the wrong thing to use. Yes, the view matrix basically represents the opposite of the camera's movement and rotation, so one could call it inverted. However, I've yet to see anyone besides you call the view matrix the camera inverse matrix.

    But I don't understand, should be the same inverted camera matrix I use to move the camera?
    BTW, the view matrix actually does the opposite of moving the camera: it moves everything else.

  7. #7 Intern Contributor (Joined Mar 2011, Israel, 56 posts)
    BTW, the view-matrix does actually the opposite of moving the camera, it moves everything else.
    I know that too. So your answer is that I am doing it the right way?

  8. #8 Senior Member OpenGL Pro (Joined Apr 2010, Germany, 1,135 posts)
    Looks like it. Are the results correct? If so, you're doing it right. I mean, verifying a trivial lighting setup isn't that hard.

  9. #9 Dark Photon, Senior Member OpenGL Guru (Joined Oct 2004, Druidia, 4,425 posts)
    Quote Originally Posted by SaSMaster
    ...using camera (view) matrix. But I don't understand, should be the same inverted camera matrix I use to move the camera?
    In more conventional terminology, the MODELING transform for the camera takes you from EYE-SPACE to WORLD-SPACE (EYE-SPACE being the OBJECT-SPACE of the camera).

    If you want the VIEWING transform (WORLD-SPACE to EYE-SPACE transform), you intuitively just invert the camera's MODELING transform.
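    A sketch of that inversion (numpy; the 90-degree rotation and the camera position (5, 0, 0) are made up for the example):

```python
import numpy as np

# Hypothetical camera MODELING transform: rotate 90 degrees about Y,
# then translate to (5, 0, 0). It maps camera OBJECT-SPACE (i.e.
# EYE-SPACE) into WORLD-SPACE.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
camera_model = np.array([
    [  c, 0.0,   s, 5.0],
    [0.0, 1.0, 0.0, 0.0],
    [ -s, 0.0,   c, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# The VIEWING transform (world-space to eye-space) is just its inverse.
view = np.linalg.inv(camera_model)

# The camera's world position lands on the eye-space origin...
assert np.allclose(view @ [5.0, 0.0, 0.0, 1.0], [0.0, 0.0, 0.0, 1.0])

# ...and a point one unit in front of the camera (the rotated camera
# looks down world -X, so that's world (4, 0, 0)) lands at eye (0, 0, -1).
assert np.allclose(view @ [4.0, 0.0, 0.0, 1.0], [0.0, 0.0, -1.0, 1.0])
```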
