Part of the Khronos Group
OpenGL.org


Thread: Projection of the camera depth into the light space

  1. #1
Newbie
    Join Date
    Nov 2016
    Posts
    3

    Projection of the camera depth into the light space

    Hi everyone,

I'm looking for some information about a test I want to perform. First I render the depth from the camera's point of view; then I need to project this texture into the light's image space. Any clue how to do this projection?

Thanks!

  2. #2
Senior Member OpenGL Guru Dark Photon
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,425
Look for any good tutorial on shadow mapping. Its transformation chain is exactly the reverse of what you want to do: just swap the terms "camera" and "light", and you should be good.

For a good diagram of the spaces involved and the order in which you'd compose the transformations between them, see this one from Paul's Shadow Mapping Project:
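That reversed chain can be sketched in NumPy. Everything below (the projection parameters, the translation-only view matrices) is a hypothetical stand-in just to show the composition order: camera clip space, back to world space, then into light clip space.

```python
import numpy as np

def perspective(fovy_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix.
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def translation_view(eye):
    # Simplified view matrix: translation only (no rotation), for illustration.
    m = np.eye(4)
    m[:3, 3] = -np.asarray(eye, dtype=float)
    return m

# Hypothetical camera and light setups.
cam_proj   = perspective(60.0, 16 / 9, 0.1, 100.0)
cam_view   = translation_view([0.0, 2.0, 5.0])
light_proj = perspective(90.0, 1.0, 0.5, 50.0)
light_view = translation_view([10.0, 10.0, 0.0])

# Camera clip -> world -> light clip: shadow mapping with the roles swapped.
cam_to_light = light_proj @ light_view @ np.linalg.inv(cam_proj @ cam_view)

# Sanity check: pushing a world-space point through the camera and then through
# this chain matches projecting it with the light directly.
world_p   = np.array([1.0, 1.0, -2.0, 1.0])
cam_clip  = cam_proj @ cam_view @ world_p
via_chain = cam_to_light @ cam_clip
direct    = light_proj @ light_view @ world_p
print(np.allclose(via_chain, direct))  # True
```

Note that the inverse cancels the camera transform exactly, so the whole trip can be done in homogeneous coordinates with a single perspective divide at the end.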



  3. #3
    Junior Member Newbie
    Join Date
    Nov 2016
    Posts
    7
If you have a point in the camera's clip space, it was produced by the following matrices:

mat4 CameraView = RotationMatrix * TranslationMatrix;

mat4 CameraProjectionView = CameraProjection * CameraView;

First we bring the point back into world space:

vec4 WorldPoint = CameraProjectionView.Inverse * MyPoint;

Now we transform that point into the light's view-projection space:

vec4 LightVPPoint = LightProjection * LightView * WorldPoint;

You need to divide by w to get normalized device coordinates:

LightVPPoint /= LightVPPoint.w;

If you already have the world-space point, you just need to multiply it by the light's projection-view matrix.

If you want that light-space point as a texture coordinate, you can do this:

vec4 LightVPPoint = BIAS * LightCameraProjection * LightCameraView * WorldPoint;

where BIAS is a mat4 which can be constructed as:

ScaleTransformation(0.5f, 0.5f, 0.5f) * TranslationTransformation(1, 1, 1)

It simply applies the following operation to your vector: (v * 0.5f + vec4(0.5))
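The whole chain, including the bias matrix, can be sketched in NumPy. The two combined matrices below are hypothetical perspective-like stand-ins; only the composition order and the bias construction matter:

```python
import numpy as np

# Hypothetical stand-ins; in a real renderer these come from your camera and light.
CameraProjectionView = np.array([
    [1.0, 0.0,  0.0,  0.0],
    [0.0, 1.0,  0.0,  0.0],
    [0.0, 0.0, -1.2, -2.2],
    [0.0, 0.0, -1.0,  0.0],
])  # CameraProjection * CameraView
LightProjectionView = np.array([
    [0.8, 0.0,  0.0,  0.0],
    [0.0, 0.8,  0.0,  0.0],
    [0.0, 0.0, -1.1, -1.0],
    [0.0, 0.0, -1.0,  0.0],
])  # LightProjection * LightView

# BIAS = ScaleTransformation(0.5) * TranslationTransformation(1, 1, 1):
# after the perspective divide it maps [-1, 1] NDC into the [0, 1] texture range.
scale = np.diag([0.5, 0.5, 0.5, 1.0])
translate = np.eye(4)
translate[:3, 3] = 1.0
BIAS = scale @ translate

MyPoint = np.array([0.3, -0.2, 0.7, 1.0])  # a point in camera clip space

WorldPoint   = np.linalg.inv(CameraProjectionView) @ MyPoint  # back to world space
LightVPPoint = BIAS @ LightProjectionView @ WorldPoint        # into biased light clip space
LightVPPoint = LightVPPoint / LightVPPoint[3]                 # perspective divide
# LightVPPoint.xy are now texture coordinates (in [0, 1] when the point is visible)
```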
    Last edited by Harukoxd; 11-05-2016 at 06:59 PM.

  4. #4
Newbie
    Join Date
    Nov 2016
    Posts
    3
Thanks for your answers, both of you. I think I now have a clearer picture of what to do.

But I still have trouble, and I don't know why.

My depth is rendered from the camera's point of view; then I use a shader to project this texture into another texture in light space. I render with a full-screen quad.

Here is my GLSL code (very basic):

    Vertex shader
    Code :
#version 400
    layout(location = 0) in vec3 position;
    layout(location = 2) in vec2 texCoords;
     
    out vec4 TexCoords;
    uniform mat4 Tmat;
     
    void main() {
    	gl_Position = vec4(position.x, position.y, 0.0f, 1.0f);
    	TexCoords = Tmat*gl_Position;
    }

    Fragment shader
    Code :
    #version 400
     
    in vec4 TexCoords;
     
     
    uniform sampler2D inputTexture;
     
     
     
    void main()
    {
    	vec2 projCoords = TexCoords.xy / TexCoords.w;
    	// Transform to [0,1] range
    	projCoords = projCoords * 0.5 + 0.5;
     
    	float depth = texture(inputTexture, projCoords).r;
    	gl_FragDepth = depth;
    }

Where Tmat is the matrix: Tmat = CameraProj * CameraEye * (CameraProj * CameraEye).Inverse

I need to convert the light-space coordinates of the quad into texture coordinates in camera space; maybe I didn't understand the concept after all.
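Checking the matrix algebra in NumPy (with invertible stand-in matrices, since the real CameraProj and CameraEye aren't shown here) confirms that any product of the form M * M.Inverse is the identity, so a Tmat built that way leaves the quad's coordinates unchanged:

```python
import numpy as np

# Hypothetical stand-ins for the real matrices.
CameraProj = np.array([
    [1.0, 0.0,  0.0,  0.0],
    [0.0, 1.0,  0.0,  0.0],
    [0.0, 0.0, -1.2, -2.2],
    [0.0, 0.0, -1.0,  0.0],
])                                    # perspective-like projection
CameraEye = np.eye(4)
CameraEye[:3, 3] = [0.0, -2.0, -5.0]  # translation-only view matrix

M = CameraProj @ CameraEye
Tmat = M @ np.linalg.inv(M)  # CameraProj * CameraEye * (CameraProj * CameraEye).Inverse

print(np.allclose(Tmat, np.eye(4)))  # True: this Tmat does nothing
```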

  5. #5
Newbie
    Join Date
    Nov 2016
    Posts
    3
I managed to get everything working, thanks a lot for your help, folks!

    MrJack.
