Deferred rendering and light position

Hello!

I’m writing a deferred renderer and have run into difficulties with the light position in the lighting pass.
The problem is that the light moves with the camera even though I multiply the light position by the camera matrix. It moves less when I multiply by the camera matrix, but it still moves. Yes, I know there are lots of threads like this, but I haven’t managed to solve it in two days.

I use MRT in my deferred renderer: color, normal and depth buffers. In the lighting shader I reconstruct the position from depth.

*** My rendering pipeline ***

  • Clear screen
  • Start rendering to MRT
  • Move camera
  • Get camera matrix
  • Draw geometry
  • Finish rendering to MRT
  • Bind lighting shader
  • Send light parameters (like the light position multiplied by the camera matrix) to the shader
  • Draw fullscreen quad
  • Unbind lighting shader
  • Swap buffers

This is how I get the camera matrix:
glGetFloatv(GL_MODELVIEW_MATRIX, m);

*** Fragment shader (for directional light) ***

uniform vec2 resolution;
uniform vec2 planes;

uniform sampler2D color_buffer;
uniform sampler2D depth_buffer;
uniform sampler2D normal_buffer;

uniform vec3 light_ambient;
uniform vec3 light_diffuse;
uniform vec3 light_specular;
uniform vec4 light_position;
uniform float attenuation;

vec3 light_vector;
float nv_dot_lv;
vec3 reflect_vector;
float rv_dot_pp;

vec3 lighting;

float linearize_depth(float z) {
	return planes.x / (planes.y - z * (planes.y - planes.x)) * planes.y;
}
void main() {
	vec3 color = texture2D(color_buffer, gl_TexCoord[0].xy).rgb;
	float depth = texture2D(depth_buffer, gl_TexCoord[0].xy).r;
	vec3 normal = texture2D(normal_buffer, gl_TexCoord[0].xy).rgb;

	vec4 position;
	position = vec4(
		((gl_FragCoord.x / resolution.x) - 0.5) * 2.0,
		((-gl_FragCoord.y / resolution.y) + 0.5) * 2.0 / (resolution.x / resolution.y),
		linearize_depth(depth),
		1
	);
	position.x *= position.z;
	position.y *= -position.z;

	light_vector = normalize(light_position.xyz - position.xyz);
	nv_dot_lv = dot(normal, light_vector);
	reflect_vector = normalize(reflect(-light_vector, normal));
	rv_dot_pp = dot(reflect_vector, normalize(-position.xyz));

	gl_FragColor = vec4(color, 1);

	if(nv_dot_lv > 0.0) {
		lighting = vec3(attenuation * light_ambient);
		lighting += vec3(attenuation * light_diffuse * nv_dot_lv);
	lighting += vec3(attenuation * light_specular * pow(rv_dot_pp, 16.0));
		gl_FragColor *= vec4(lighting, 1);
	}

	
}

Why does my light still move when I move around the scene?
Please help!

Why does my light still move when I move around the scene?

Because your position is in clip space, while your light is in camera space.

Here is some code that does this sort of thing (reconstructing the camera-space position of a fragment) correctly.
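The gist of it is something along these lines (a minimal sketch rather than the full listing; the clipToCameraMatrix and windowSize uniforms are values the application has to supply):

```
// Minimal sketch: reconstruct the camera-space (eye-space) position of a fragment.
uniform mat4 clipToCameraMatrix;  // inverse of the projection matrix
uniform vec2 windowSize;          // framebuffer size in pixels

vec3 CalcCameraSpacePosition(float depth)
{
	// Window coordinates -> normalized device coordinates in [-1, 1].
	vec3 ndcPos;
	ndcPos.xy = (gl_FragCoord.xy / windowSize) * 2.0 - 1.0;
	ndcPos.z  = depth * 2.0 - 1.0;   // assumes the default glDepthRange(0, 1)

	// Unproject to camera space and undo the perspective divide.
	vec4 eyePos = clipToCameraMatrix * vec4(ndcPos, 1.0);
	return eyePos.xyz / eyePos.w;
}
```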

Thanks for your reply.
Could you tell me what the clipToCameraMatrix is and how to get it?

Could you tell me what the clipToCameraMatrix is and how to get it?

It is the inverse of the cameraToClip matrix (the matrix that transforms positions from camera-space to clip-space). Or, using fixed-function parlance, it is the inverse of the GL_PROJECTION matrix.
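If you are using the fixed-function matrix stack, compatibility-profile GLSL also exposes this matrix directly as the built-in gl_ProjectionMatrixInverse, so a rough sketch of applying it (note the divide by w, which is easy to forget) would be:

```
// Rough sketch: unproject an NDC-space position with the built-in inverse projection matrix.
vec4 NdcToCameraSpace(vec3 ndcPos)
{
	vec4 eyePos = gl_ProjectionMatrixInverse * vec4(ndcPos, 1.0);
	return eyePos / eyePos.w;   // divide by w to get back to a w == 1 position
}
```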

What you may be finding confusing is that Alfonse isn’t using standard OpenGL terminology for spaces and transforms, as described here, for instance:

http://www.songho.ca/opengl/gl_transform.html
http://www.songho.ca/opengl/files/gl_transform02.png

or here:

http://glprogramming.com/red/chapter03.html
http://glprogramming.com/red/images/Image49.gif

or Section 2.12 (and Figure 2.9) in the latest GL spec.

OBJECT SPACE -> [[ MODELING TRANSFORM ]] -> WORLD SPACE -> [[ VIEWING TRANSFORM ]] -> EYE SPACE -> [[ PROJECTION TRANSFORM ]] -> CLIP SPACE -> [[ PERSPECTIVE DIVIDE ]] -> NDC SPACE

He’s calling “EYE SPACE” camera space. He also tends to call “OBJECT SPACE” model space.

Using Reinheart’s Example 9.9 method I get no shading (only the diffuse color) if I multiply clipPos by the inverse of the projection matrix.


glGetFloatv(GL_PROJECTION_MATRIX, m);
clipToCameraMatrix = inverse(mat4(m));
```
// Note: this inverse assumes an affine matrix (bottom row 0 0 0 1),
// so it will not correctly invert a projection matrix.
float determinant(mat4 m) {
	float ret;
	ret = m[0] * m[5] * m[10];
	ret += m[4] * m[9] * m[2];
	ret += m[8] * m[1] * m[6];
	ret -= m[8] * m[5] * m[2];
	ret -= m[4] * m[1] * m[10];
	ret -= m[0] * m[9] * m[6];
	return ret;
}

mat4 inverse(mat4 m) {
	float idet = 1.0f / determinant(m);
	mat4 ret;
	ret[0] =  (m[5] * m[10] - m[9] * m[6]) * idet;
	ret[1] = -(m[1] * m[10] - m[9] * m[2]) * idet;
	ret[2] =  (m[1] * m[6] - m[5] * m[2]) * idet;
	ret[3] = 0.0;
	ret[4] = -(m[4] * m[10] - m[8] * m[6]) * idet;
	ret[5] =  (m[0] * m[10] - m[8] * m[2]) * idet;
	ret[6] = -(m[0] * m[6] - m[4] * m[2]) * idet;
	ret[7] = 0.0;
	ret[8] =  (m[4] * m[9] - m[8] * m[5]) * idet;
	ret[9] = -(m[0] * m[9] - m[8] * m[1]) * idet;
	ret[10] = (m[0] * m[5] - m[4] * m[1]) * idet;
	ret[11] = 0.0;
	ret[12] = -(m[12] * ret[0] + m[13] * ret[4] + m[14] * ret[8]);
	ret[13] = -(m[12] * ret[1] + m[13] * ret[5] + m[14] * ret[9]);
	ret[14] = -(m[12] * ret[2] + m[13] * ret[6] + m[14] * ret[10]);
	ret[15] = 1.0;
	return ret;
}
```

Alfonse Reinheart, where is the float value of the depth texture in your code?

Dark Photon, I found a piece of your code:

vec3 PositionFromDepth_DarkPhoton(in float depth)
{
  vec2 ndc;             // Reconstructed NDC-space position
  vec3 eye;             // Reconstructed EYE-space position

  eye.z = near * far / ((depth * (far - near)) - far);

  ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
  ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;

  eye.x = ( (-ndc.x * eye.z) * (right-left)/(2*near)
            - eye.z * (right+left)/(2*near) );
  eye.y = ( (-ndc.y * eye.z) * (top-bottom)/(2*near)
            - eye.z * (top+bottom)/(2*near) );

  return eye;
}

Can this do the trick for me? What are top, bottom, left and right?

Alfonse Reinheart, where is the float value of the depth texture in your code?

There is no depth texture, since the code isn’t doing deferred rendering. It’s simply generating the camera-space position based on gl_FragCoord. Since gl_FragCoord.z is the fragment’s depth, which is what is written into the depth buffer, that should be the Z value you get back when doing your deferred rendering pass.
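As for the top, bottom, left and right in Dark Photon’s snippet: those are the view-frustum extents at the near plane, i.e. the glFrustum parameters. For a symmetric gluPerspective-style projection they could be derived roughly like this (a sketch; fovy is assumed to be in degrees and aspect = width / height):

```
// Sketch: frustum extents at the near plane for a symmetric
// gluPerspective(fovy, aspect, near, far) projection.
vec4 FrustumExtents(float fovy, float aspect, float near)
{
	float top   = near * tan(radians(fovy) * 0.5);
	float right = top * aspect;
	return vec4(-right, right, -top, top);   // (left, right, bottom, top)
}
```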

I get the same results with your code. And if I multiply the inverse of the projection matrix by the clip position, I get no shading (as if the pixel positions were all zero; nothing shows when visualizing NdotL). The lighting comes from the right direction but moves more or less with the camera.

Now the question is: how do I get the correct clipToCameraMatrix (the inverse of the projection matrix)? I have tried gl_ProjectionMatrixInverse in GLSL and passing the inverse of the projection matrix manually.

I haven’t found a tutorial or thread where somebody uses the projection matrix this way. Also, it will cause a huge framerate drop.

This is what I have got now:

	float depth = texture2D(depth_buffer, texCoord.st).r;

	vec3 position;
	position.x = ((gl_FragCoord.x / resolution.x) - 0.5) * 2.0;
	position.y = ((gl_FragCoord.y / resolution.y) - 0.5) * 2.0;
	position.z = planes.x * planes.y / ((depth * (planes.y - planes.x)) - planes.y);

	float a = resolution.y / resolution.x;
	float top =  planes.x * tan(fov * 0.5);
	float right = top * a;

	position.x *= -position.z * right / (1.0 / planes.x);
	position.y *= -position.z * top / (1.0 / planes.x);

If I go next to the object and rotate the camera, the shading changes a bit. When looking down, surfaces are brighter; when looking up, surfaces are darker (this looks right to me).

Any suggestions please! :slight_smile:

I have been trying to implement deferred rendering for several months already and still no luck :frowning:
I have run out of new things to read and download on the net.

Depth component

glGenTextures(1, &depth);
glBindTexture(GL_TEXTURE_2D, depth);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth, 0);
glBindTexture(GL_TEXTURE_2D, 0);

Shaders are in previous posts.

It’s obvious that your problem is with your matrices, not your depth. If there were a problem with the depth, the lighting would be really wrong, whereas it is simply moving relative to the camera rather than the world.

The bug must be in the lighting fragment shader, because in the application, after moving the camera, I get the modelview matrix and multiply the light position by it. This is what keeps the light ALMOST fixed.

For directional light:

float lz(float depth) {
	return (2.0 * planes.x) / (planes.y + planes.x - depth * (planes.y - planes.x));
}
void main() {
	vec3 color = texture2D(color_buffer, gl_TexCoord[0].st).rgb;
	float depth = texture2D(depth_buffer, gl_TexCoord[0].st).r;
	vec3 normal = texture2D(normal_buffer, gl_TexCoord[0].st).rgb;

	vec4 position = vec4(0,0,0,1);
	position.x = ((gl_FragCoord.x / resolution.x) - 0.5) * 2.0;
	position.y = ((-gl_FragCoord.y / resolution.y) + 0.5) * 2.0;
	position.z = -lz(depth);
	position.x *= -position.z;
	position.y *= position.z;

	// position = invproj * position; // nothing changes
	// position = gl_ProjectionMatrixInverse * position; // nothing changes

	light_vector = vec3(normalize(light_position - position));
	nv_dot_lv = dot(normal, light_vector);
	gl_FragColor = vec4(vec3(nv_dot_lv), 1);
}

This is for visualising my problem:

Surfaces go a bit brighter when looking down. If I just move around without rotating, the light stays fixed.

SOLVED!!!

My normals were buggy. Below is the correct code:

// Gbuffer vertex: transform the normal into eye space
normal_vector = (gl_NormalMatrix * gl_Normal).xyz;

// Gbuffer fragment: pack the [-1, 1] normal into [0, 1] so it fits in the color attachment
gl_FragData[1] = vec4(normalize(normal_vector) * 0.5 + 0.5, 1);

// Lighting fragment: unpack back to [-1, 1] before lighting
vec3 normal = texture2D(normal_buffer, gl_TexCoord[0].st).rgb * 2.0 - 1.0;

Thanks to all who tried to help me! Lesson learned :slight_smile: