Projection of the camera depth into the light space

Hi everyone,

I’m looking for some information about a test I want to run. First I render the depth from the camera’s point of view; then I need to project this texture into the light’s image space. Any clue how to do this projection?

Thanks!

Look for any good tutorial on shadow mapping. Its transformation chain is exactly the opposite of what you want to do. Just swap the terms “camera” and “light”, and you should be good.

For a good diagram of the spaces involved and the order in which you’d compose the transformations between them, see this one from Paul’s Shadow Mapping Project:

If you have a point in the camera’s projection-view space, then you have the following matrices:

mat4 CameraView = RotationMatrix * TranslationMatrix

mat4 CameraProjectionView = CameraProjection * CameraView;

We want the point in world space:
vec4 WorldPoint = CameraProjectionView.Inverse * MyPoint;

Now we want that point on light view projection space

vec4 LightVPPoint = LightProjection * LightView * WorldPoint;

You need to divide it by w (the perspective divide) to bring it into normalized device coordinates:

LightVPPoint /= LightVPPoint.w

If you already have the world point, you just need to multiply it by the light’s projection-view matrix.
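The chain above can be sketched numerically. This is a minimal numpy check, not real rendering code: the `perspective` and `translation` helpers and all matrix values are made up for illustration, standing in for whatever projection and view matrices you actually use.

```python
import numpy as np

# Illustrative stand-ins for real projection/view matrices.
def perspective(fov_y, aspect, near, far):
    f = 1.0 / np.tan(fov_y / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2.0 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

def translation(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

proj = perspective(np.radians(60.0), 1.0, 0.1, 100.0)
camera_proj_view = proj @ translation(0.0, 0.0, -5.0)   # camera 5 units back
light_proj_view  = proj @ translation(0.0, -3.0, -5.0)  # light somewhere else

# A point already in camera clip space:
world_expected = np.array([1.0, 1.0, 1.0, 1.0])
my_point = camera_proj_view @ world_expected

# Back to world space via the inverse of the camera's projection*view...
world_point = np.linalg.inv(camera_proj_view) @ my_point

# ...then forward into light clip space, followed by the perspective divide.
light_vp_point = light_proj_view @ world_point
light_ndc = light_vp_point / light_vp_point[3]
```

The round trip through the inverse recovers the original world point, which is exactly the property the chain relies on.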

If you want that light point to be a texture coord, you can do this:

vec4 LightVPPoint = BIAS * LightCameraProjection * LightCameraView * WorldPoint;

Where BIAS is a mat4 which can be constructed as:

ScaleTransformation(0.5f,0.5f,0.5f) * TranslationTransformation(1,1,1)

It will just apply the following operation to your vector: (v * 0.5f + vec4(0.5))
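To see that this composition really maps NDC in [-1, 1] to texture coordinates in [0, 1], here is a small numpy sketch; the `scale` and `translation` helpers are illustrative, not from any particular library.

```python
import numpy as np

def scale(s):
    # Uniform scale on x, y, z; w is left untouched.
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

def translation(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# BIAS = Scale(0.5) * Translate(1, 1, 1): the point is translated first,
# then scaled, i.e. v -> (v + 1) * 0.5 on x, y, z.
bias = scale(0.5) @ translation(1.0, 1.0, 1.0)

v = np.array([-1.0, 0.0, 1.0, 1.0])  # a point in NDC, w = 1
print(bias @ v)  # [0.  0.5 1.  1. ] — the [-1, 1] range remapped to [0, 1]
```

Note the order matters: composed the other way round (translate after scaling), you would get v * 0.5 + 1 instead.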

Thanks for your answers, both of you. I think I have a clearer picture of what to do.

But I still have trouble, and I don’t know why.

My depth is rendered from the camera’s point of view; next, I use a shader to project this texture into another texture in light space. I render a full-screen quad.

Here is my GLSL code (very basic):

Vertex shader


#version 400
layout(location = 0) in vec3 position;
layout(location = 2) in vec2 texCoords;

out vec4 TexCoords;
uniform mat4 Tmat;

void main() {
	gl_Position = vec4(position.x, position.y, 0.0f, 1.0f);
	TexCoords = Tmat*gl_Position;
}

Fragment shader


#version 400

in vec4 TexCoords;


uniform sampler2D inputTexture;



void main()
{
	vec2 projCoords = TexCoords.xy / TexCoords.w;
	// Transform to [0,1] range
	projCoords = projCoords * 0.5 + 0.5;

	float depth = texture(inputTexture, projCoords).r;
	gl_FragDepth = depth;
}


Where Tmat is the matrix: Tmat = CameraProj * CameraEye * (CameraProj * CameraEye).Inverse

I need to convert the quad’s light-space coordinates into texture coordinates in camera space; maybe I didn’t understand the concept after all.
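One thing worth checking numerically: a matrix multiplied by its own inverse is the identity, so a Tmat built as above leaves every input vector unchanged. A tiny numpy sketch (the matrix values are placeholders standing in for CameraProj * CameraEye, not real camera data):

```python
import numpy as np

# Any invertible 4x4 stands in for CameraProj * CameraEye here.
camera_proj_eye = np.array([[2.0, 0.0,  0.0,  0.0],
                            [0.0, 2.0,  0.0,  0.0],
                            [0.0, 0.0, -1.2, -0.4],
                            [0.0, 0.0, -1.0,  0.0]])

tmat = camera_proj_eye @ np.linalg.inv(camera_proj_eye)
print(np.allclose(tmat, np.eye(4)))  # True: Tmat is the identity matrix
```

So a product of a matrix with its own inverse cancels out; the matrix fed to the shader has to combine the light’s projection-view with the inverse of the camera’s, not the camera’s with its own inverse.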

I managed to get everything working, thanks a lot for your help, folks!

MrJack.