Part of the Khronos Group
OpenGL.org


Thread: Understanding light positions and storing to eye space coordinates

  1. #1
    Junior Member Newbie
    Join Date
    Sep 2008
    Posts
    21

    Understanding light positions and storing to eye space coordinates

    I am writing a very simple directional light shader and I want to pass in my own light position coordinates (treated more like a directional light) instead of using gl_LightSource[0].position. I've been searching around and couldn't find any clear explanation on how to store my light position coordinate into eye space coordinates.

    I've tried multiplying my vector with gl_ModelViewMatrix in the shader, and I've tried doing it on the CPU side but still can't produce the same result as gl_LightSource[0].position.

    The light position (or direction rather) is (-1, 0, 0)

    vertex shader:

    Code :
    varying vec3 vertex_light_position;
    varying vec3 vertex_normal;
     
    uniform vec3 light_position;
     
    void main() {
        gl_FrontColor   = gl_Color;
        gl_Position     = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
        gl_TexCoord[0]  = gl_MultiTexCoord0;
     
        vertex_normal = normalize(gl_NormalMatrix * gl_Normal);
     
    // using my own light position coordinate
        vertex_light_position = normalize(light_position);
     
    // old version
        //vertex_light_position = normalize(gl_LightSource[0].position.xyz);
    }

    So basically my question is how do I properly store my vector into eye space coordinates?
    I appreciate anyone taking the time to explain this to me. Thanks

  2. #2
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    3,188
    Quote Originally Posted by SamKV View Post
    I am writing a very simple directional light shader and I want to pass in my own light position coordinates (treated more like a directional light) instead of using gl_LightSource[0].position. I've been searching around and couldn't find any clear explanation on how to store my light position coordinate into eye space coordinates.
    We need to see your frag shader here to be sure, but I think what you might be missing here is the distinction between a directional light source and a positional (point) light source.

    A positional (point) light source location is intuitively identified by a "point" position (e.g. (1,2,3,1) as a vec4, where the trailing w = 1 identifies this as a point, not a vector). A directional light source location is (as you know) identified by a vector (e.g. (-1,0,0,0) as a vec4, where the trailing w = 0 identifies this as a vector, not a point).
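    You can see that w distinction with a few lines of CPU-side math. A minimal sketch (the matrix values and helper name are mine, not from the thread): a translation in the MODELVIEW moves a point (w = 1) but leaves a direction (w = 0) untouched.

    Code :
    ```c
    #include <stdio.h>

    /* Hypothetical helper: multiply a column-major 4x4 matrix
       (OpenGL's convention) by a vec4. */
    static void mat4_mul_vec4(const float m[16], const float v[4], float out[4]) {
        for (int row = 0; row < 4; ++row)
            out[row] = m[row + 0] * v[0] + m[row + 4] * v[1] +
                       m[row + 8] * v[2] + m[row + 12] * v[3];
    }

    int main(void) {
        /* A MODELVIEW that is a pure translation by (5, 0, 0), column-major. */
        const float modelview[16] = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            5, 0, 0, 1
        };
        float point[4] = { -1, 0, 0, 1 };  /* w = 1: a position */
        float dir[4]   = { -1, 0, 0, 0 };  /* w = 0: a direction */
        float p_eye[4], d_eye[4];

        mat4_mul_vec4(modelview, point, p_eye);
        mat4_mul_vec4(modelview, dir, d_eye);

        /* The point picks up the translation; the direction does not. */
        printf("point:     (%g, %g, %g)\n", p_eye[0], p_eye[1], p_eye[2]);  /* (4, 0, 0) */
        printf("direction: (%g, %g, %g)\n", d_eye[0], d_eye[1], d_eye[2]);  /* (-1, 0, 0) */
        return 0;
    }
    ```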

    Quote Originally Posted by SamKV View Post
    I've tried multiplying my vector with gl_ModelViewMatrix in the shader, and I've tried doing it on the CPU side but still can't produce the same result as gl_LightSource[0].position.

    The light position (or direction rather) is (-1, 0, 0)
    Is this the value you are providing to OpenGL's API to set the light position? If so, this value is in the then-active object-space, not eye-space. Remember that with the old fixed-function lighting API, the light source position you provide to OpenGL is immediately transformed by the then-active MODELVIEW matrix to get an eye-space position, and it is that transformed value that ends up in gl_LightSource[#].position. To replicate that yourself: take (-1,0,0,0), multiply it by the MODELVIEW matrix, and store off the resulting vector. Then check your frag shader and verify that you are using this light vector properly (as a direction vector, not a position vector).
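    That recipe can be sketched CPU-side like so (a hypothetical helper, names are mine): since the direction has w = 0, only the upper-left 3x3 of the MODELVIEW matters, so rotate the world-space direction into eye space, normalize it, and that result is what you'd upload for your shader's light_position uniform.

    Code :
    ```c
    #include <math.h>

    /* Hypothetical helper: rotate a world-space light direction into
       eye space. `modelview` is column-major, as returned by
       glGetFloatv(GL_MODELVIEW_MATRIX, ...), and should hold only the
       camera (view) transform when you call this. */
    static void light_dir_to_eye_space(const float modelview[16],
                                       const float world_dir[3],
                                       float eye_dir[3]) {
        /* w = 0, so the translation column drops out; only the
           upper-left 3x3 rotation part applies. */
        for (int row = 0; row < 3; ++row)
            eye_dir[row] = modelview[row + 0] * world_dir[0] +
                           modelview[row + 4] * world_dir[1] +
                           modelview[row + 8] * world_dir[2];

        /* Normalize once here so the vertex shader doesn't have to. */
        float len = sqrtf(eye_dir[0] * eye_dir[0] +
                          eye_dir[1] * eye_dir[1] +
                          eye_dir[2] * eye_dir[2]);
        if (len > 0.0f) {
            eye_dir[0] /= len;
            eye_dir[1] /= len;
            eye_dir[2] /= len;
        }
    }
    ```
    Then, each frame after setting up the camera: fetch the modelview, run (-1, 0, 0) through the helper, and upload the result with glUniform3fv to the light_position uniform.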

  3. #3
    Junior Member Newbie
    Join Date
    Sep 2008
    Posts
    21
    Here's the fragment shader. I've been updating since the last time I posted this so some uniform variables were renamed. I also had to strip out a bunch of other code for the sake of this topic:

    Code :
    uniform sampler2D diffuse1;
    uniform sampler2D diffuse2;
     
    uniform vec3 light_GlobalDirection;   // renamed from vertex_light_position
    uniform vec4 light_GlobalColor;
    uniform vec4 light_GlobalAmbience;
     
    varying vec3 vertex_normal;
     
    void main() {
        // re-normalize: interpolation across the triangle can shorten the varying normal
        float diffuse_value = max(dot(normalize(vertex_normal), light_GlobalDirection), 0.0);
        vec4 frag = texture2D(diffuse1, gl_TexCoord[0].st);
        vec4 color = frag * light_GlobalAmbience;
     
        gl_FragColor = color + (frag * (light_GlobalColor * diffuse_value));
    }

    I think I was multiplying by the modelview matrix while it already contained the camera's position transform, so basically I need to multiply the direction vector by the modelview matrix before the camera position is added. Also, interesting info on the vec4 w component; I'll need to experiment with that.

  4. #4
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    3,188
    If you're only doing a directional light source, you can ignore the whole vec4/.w thing. All you need is a direction vector, always, so you can just use a vec3 and treat it like the vector it is.

    Just make sure you have all your vectors (or points for that matter) in the same space before you go performing operations on groups of them.
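    As a sanity check on that "same space" rule, here is the fragment shader's diffuse term redone as plain C (my own sketch, mirroring max(dot(vertex_normal, light_GlobalDirection), 0.0)): with both unit vectors in eye space, a normal facing the light gives full diffuse and one facing away clamps to zero. Note the convention this assumes: the light vector points from the surface toward the light.

    Code :
    ```c
    /* The fragment shader's diffuse term, sketched on the CPU. Both the
       normal n and light vector l must be unit length and expressed in
       the same (eye) space; l points from the surface toward the light. */
    static float diffuse_term(const float n[3], const float l[3]) {
        float d = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
        return d > 0.0f ? d : 0.0f;   /* max(dot(N, L), 0.0) */
    }
    ```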
