point light - shadow mapping

I use forward rendering and I’m trying to implement shadow mapping for a point light. I have one FBO and a cube map to store the 6 depth textures. I’m not sure how to render depth to the textures when using a cube map (for a directional light and a single depth texture everything works fine in my program). The initialization code looks like this:


glGenFramebuffers(1, &fbo);
glGenTextures(1, &cubeTex);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_DEPTH_COMPONENT16, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_DEPTH_COMPONENT16, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, GL_DEPTH_COMPONENT16, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, GL_DEPTH_COMPONENT16, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, GL_DEPTH_COMPONENT16, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_DEPTH_COMPONENT16, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
 
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAX_LEVEL, 0);
 
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, cubeTex, 0);

GLenum drawBuffers[] = {GL_NONE};
glDrawBuffers(1, drawBuffers);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

Then when rendering I bind the fbo and render the scene 6 times. I set a perspective matrix and the appropriate view matrix. I assume that when I render the scene the first time, the depth values are written to the positive_x face of the cube map (the second time to negative_x, and so on). When I render the scene the first time, the view matrix looks like this:


glm::lookAt(glm::vec3(0.0f, 1.0f, -27.0f), glm::vec3(1.0f, 1.0f, -27.0f), glm::vec3(0.0f, 1.0f, 0.0f));

where vec3(0.0f, 1.0f, -27.0f) is the point light position.

Next I unbind the fbo and try to calculate which fragments are in shadow, but it looks like I did something wrong when recording the depth. Is something wrong with my depth recording?

[QUOTE=Triangle;1243435]I’m not sure how to render depth to the textures when using a cube map…

The initialization code looks like this:[/QUOTE]
That generally looks reasonable, except for this:


glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, cubeTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, cubeTex, 0);


Calling these back to back with no rendering in between has no effect, AFAIK: a framebuffer has only one depth attachment, so each call simply replaces the previous attachment and only the last one sticks.

Then when rendering I bind the fbo and render the scene 6 times. … I assume that when I render the scene the first time, the depth values are written to the positive_x face of the cube map (the second time to negative_x, and so on).

This sounds suspect.

See the working code I posted a link to for how this is done:

You do what you were doing above (attach a single face), but put the rendering for each face between the attachment calls.
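
In outline, the depth pass ends up looking something like this (renderSceneDepthOnly, lightProjectionMatrix and faceViewMatrix are placeholder names, not code copied from that link):


glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 1024, 1024);
for (int face = 0; face < 6; ++face)
{
    // Attach one cube map face as the depth attachment...
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubeTex, 0);
    // ...then clear and render the scene into that face.
    glClear(GL_DEPTH_BUFFER_BIT);
    renderSceneDepthOnly(lightProjectionMatrix, faceViewMatrix[face]);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);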

I also seem to remember that you can instead use a geometry shader to render to all the faces at once (layered rendering), if you so desire.

OK, I removed all the glFramebufferTexture2D calls from the initialization code above and placed them in the render loop. Now the render loop looks like this:
-bind framebuffer
-set viewport
-cull face front
-6x: glFramebufferTexture2D with the appropriate cube map face, clear the depth buffer bit, render with the appropriate view matrix
-unbind fbo
-clear color buffer bit and depth buffer bit
-set viewport
-cull face back
-bind the depth texture (cube map)
-render scene using a camera view matrix and calculate shadows

Unfortunately I get a scene where everything is in shadow.

For a single depth texture (not the cube map) my program works; there I calculate shadows this way:


float shadow;
float depth = textureProj(ShadowMap, shadowCoord).x;  // ShadowMap is just a sampler2D; shadowCoord is the vec4: shadowBias * ProjectionMatrix * LightViewMatrix * ModelMatrix * vec4(position, 1.0)
if(depth <  (shadowCoord.z / shadowCoord.w))
{
	shadow = 0.0;
}
else
{
	shadow = 1.0;
}

Next I multiply the diffuse and specular colors by the shadow value. Alternatively, I can add these lines to the initialization code:


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS);  

and in the fragment shader use sampler2DShadow:

 
float shadow = textureProj(ShadowMap, shadowCoord);

For the cube map I try to do it this way (maybe something is wrong here):


float shadow;
vec3 lightToPixel = vertexWorldSpacePosition - pointLightWorldSpacePosition;
float depth = texture(ShadowMap, normalize(lightToPixel)).x; // where ShadowMap is a samplerCube
if(depth < length(lightToPixel))
{
    shadow = 0.0;
}
else
{
    shadow = 1.0;
}

What do you think about that?

I also seem to remember that you can instead use a geometry shader to render to all the faces at once (layered rendering), if you so desire.

Yes, you can bind the entire cubemap using glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, cubeTex, 0) while the FBO is bound, and use a geometry shader that emits 18 vertices for each triangle - 3 to each gl_Layer, transformed by the appropriate view matrix for that face. This works well if you’re only drawing triangles, but I’d recommend getting the simple version (6 passes) working first.
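
A minimal sketch of such a geometry shader (uniform and input names are placeholders, and world_position is assumed to come from a vertex shader that applies only the model matrix):


#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;

// One view-projection matrix per cube map face (placeholder uniform name).
uniform mat4 face_view_projection[6];

// World-space positions passed through from the vertex shader (placeholder name).
in vec4 world_position[];

void main()
{
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face;    // select the cube map face to render into
        for (int i = 0; i < 3; ++i)
        {
            gl_Position = face_view_projection[face] * world_position[i];
            EmitVertex();
        }
        EndPrimitive();
    }
}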

[QUOTE=Triangle;1243536]…
-cull face front
…(render shadow maps)…
[/QUOTE]

A warning about this. Some developers say this back-face casting works well for volumetric objects and avoids the need for any biasing tricks. And it seems to work… some of the time, so you can actually be tricked into relying on it.

But beware… it fails miserably (with often-blatant light leaks) when you apply it to objects that are not 1) closed (watertight), 2) convex, and 3) impenetrable (i.e. never rendered intersecting another object). When a light-space back face sits right next to a light-space front face that is supposed to be in shadow in the shadow map (even across different objects in your scene), you get a light leak.

On the edge of an object this is usually OK. But in the middle of what’s supposed to be an in-shadow region, this can be really objectionable. This artifact is more visible when you don’t employ shadow filtering that’s better than ordinary PCF. Having a large contrast between in-shadow and out-of-shadow colors/luminances and/or using a lower resolution shadow map helps highlight this artifact as well.

…thus the many acne-avoidance schemes (biasing, gradient, midpoint, second depth, normal offset, etc.).
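
As one example, the simplest of these (a constant plus slope-scaled depth bias applied with polygon offset during the shadow pass) looks roughly like this; the factor/units values are placeholders you’d tune per scene:


glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(2.0f, 4.0f);    // slope-scale factor, constant units (tune per scene)
// ... render the shadow map ...
glDisable(GL_POLYGON_OFFSET_FILL);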

For some discussion/pictures on this, see:

[QUOTE=Triangle;1243536]For the cube map I try to do it this way (maybe something is wrong here):


float shadow;
vec3 lightToPixel = vertexWorldSpacePosition - pointLightWorldSpacePosition;
float depth = texture(ShadowMap, normalize(lightToPixel)).x; // where ShadowMap is a samplerCube
if(depth < length(lightToPixel))
       shadow = 0.0;
else
       shadow = 1.0;

What do you think about that?[/QUOTE]
The depth values stored in the cube map are typically not going to be radial EYE-SPACE distances (or WORLD-SPACE distances – same thing here) from the light, as your code assumes, but WINDOW-SPACE depth values for the light (think EYE-SPACE Z values mapped through CLIP-SPACE and NDC to WINDOW-SPACE).

See the working point-light shadow casting code I posted a link to for details. In fact, I’d suggest you compile and run it. That gives you something to test against and binary-search your problems toward.

OK, I compiled and ran that example. First of all, I changed the fragment shader code a little bit, so now the example code is more similar to my code. This line:


vec4 position_ls = light_view_matrix * camera_view_matrix_inv * position_cs;

might be expressed as:


vec4 position_ws = camera_view_matrix_inv * position_cs;
vec4 position_ls = vec4(position_ws.xyz - vec3(camera_view_matrix_inv * vec4(light_position, 1.0)), 1.0);

and that is in fact vertexWorldSpacePosition - pointLightWorldSpacePosition.

Next, I guess that GLSL 1.20 doesn’t have overloaded texture functions that are selected based on the sampler type. There is nothing like a textureCubeShadow function, so the extension must be enabled when using a samplerCubeShadow sampler. Without using the extension I can write the code this way:


float result = textureCube(shadow, normalize(position_ls.xyz)).x;
float shadow;
if(result < depth)
{
	shadow = 0.0;
}
else
{
	shadow = 1.0;
}

The problem in my code was the “length(lightToPixel)” comparison. When I calculate the depth as in the example, everything works fine. I understand this line:


float depth = (clip.z / clip.w) * 0.5 + 0.5;

but I have a problem with understanding the preceding code:


vec4 abs_position = abs(position_ls);
float fs_z = -max(abs_position.x, max(abs_position.y, abs_position.z));
vec4 clip = light_projection_matrix * vec4(0.0, 0.0, fs_z, 1.0);

Can you give me some explanation of how it works? Is it the only way to calculate vec4 clip?

[QUOTE=Triangle;1243831]This line:


vec4 position_ls = light_view_matrix * camera_view_matrix_inv * position_cs;

might be expressed as:


vec4 position_ws = camera_view_matrix_inv * position_cs;
vec4 position_ls = vec4(position_ws.xyz - vec3(camera_view_matrix_inv * vec4(light_position, 1.0)), 1.0);

[/QUOTE]

No. These are not equivalent.

The first computes the light’s EYE-SPACE position of the fragment.

The second computes the WORLD-SPACE vector from the light to the position of the fragment (assuming that light_position is the camera’s EYE-SPACE position of the light). And then just tacks on a .w=1 as if this was a position in some space (it’s not).

Draw a simple 2D diagram or plug in some real numbers to see why these are not equivalent.

The first takes into account the change in basis from WORLD-SPACE to light’s EYE-SPACE. The second does not.
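
For instance, here’s a quick numeric sketch with made-up numbers (my own sketch, assuming a light view that also rotates, e.g. looking down +X):


#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    glm::vec3 light_pos_ws(2.0f, 0.0f, 0.0f);             // light at (2,0,0)
    glm::vec4 frag_pos_ws(3.0f, 0.0f, 0.0f, 1.0f);        // fragment at (3,0,0)

    // A light view matrix that rotates as well as translates (looking down +X).
    glm::mat4 light_view = glm::lookAt(light_pos_ws,
                                       light_pos_ws + glm::vec3(1.0f, 0.0f, 0.0f),
                                       glm::vec3(0.0f, 1.0f, 0.0f));

    glm::vec4 a = light_view * frag_pos_ws;               // light's EYE-SPACE position
    glm::vec3 b = glm::vec3(frag_pos_ws) - light_pos_ws;  // WORLD-SPACE light-to-fragment vector

    std::printf("a = (%g, %g, %g)\n", a.x, a.y, a.z);     // prints (0, 0, -1)
    std::printf("b = (%g, %g, %g)\n", b.x, b.y, b.z);     // prints (1, 0, 0)
    return 0;
}

Same offset from the light in both cases, but expressed in two different bases.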

Yeah. In GLSL 1.20 and earlier, to do a shadow cubemap lookup with depth comparison, you’d use shadowCube() on a samplerCubeShadow sampler. In 1.30 and later, you simply use texture() on a samplerCubeShadow.
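
For example, in 1.30+ the lookup is just (a sketch with placeholder names):


uniform samplerCubeShadow ShadowCube;   // placeholder sampler name

// dir is the light-to-fragment direction; depth is the light's WINDOW-SPACE depth to compare against.
float shadow_lookup(vec3 dir, float depth)
{
    // texture() on a samplerCubeShadow takes the lookup direction in .xyz and
    // the comparison reference value in .w, and returns the comparison result.
    return texture(ShadowCube, vec4(dir, depth));
}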

[QUOTE=Triangle;1243831]I have a problem with understanding the preceding code:


vec4 abs_position = abs(position_ls);
float fs_z = -max(abs_position.x, max(abs_position.y, abs_position.z));
vec4 clip = light_projection_matrix * vec4(0.0, 0.0, fs_z, 1.0);

Can you give me some explanation of how it works?[/QUOTE]

Let’s get the whole snippet on the table:


    vec4 position_ls = light_view_matrix * camera_view_matrix_inv * position_cs;
    vec4 abs_position = abs(position_ls);
    float fs_z = -max(abs_position.x, max(abs_position.y, abs_position.z));
    vec4 clip = light_projection_matrix * vec4(0.0, 0.0, fs_z, 1.0);
    float depth = (clip.z / clip.w) * 0.5 + 0.5;
    vec4 result = shadowCube(shadow, vec4(position_ls.xyz, depth));

First line gives you the light’s EYE-SPACE position of the fragment (for the forward face of the cubemap).

The next two lines find the component of that position that is largest in absolute magnitude. The largest one determines which pair of cubemap faces you’re going to be looking up into (largest == Z implies the front or back cubemap face, largest == X implies the left or right cubemap face, etc.). On whichever face it is, you know the light’s EYE-SPACE position for that cubemap face specifically is going to have a Z value == -largest_value.

And so…

The fourth line transforms that light’s EYE-SPACE depth value on that face to light’s CLIP-SPACE on that face, and then…

The fifth line takes that on through light’s NDC (-1…1) to light’s WINDOW-SPACE (0…1).

Note that this operation just plugs in a light’s EYE-SPACE X and Y value of 0,0 for the fragment because X and Y make absolutely no difference in the computation of the light’s WINDOW-SPACE Z value for the fragment.

Is it the only way to calculate vec4 clip?

Not sure what you mean.

Implicit in this math is that this is a point light source casting shadows, which uses a perspective projection to map space to the cube map. You could just expand terms from the projection matrix (hint: you only need the last 2 columns, since X,Y=0,0), simplify, and come up with a simple expression to compute the light’s WINDOW-SPACE Z value from the light’s EYE-SPACE Z value. Same operation but avoids a full matrix-vector multiply.
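
For example, assuming a standard symmetric perspective projection for the light (light_near and light_far here are the light frustum’s clip planes; the names are placeholders), the expansion boils down to something like:


uniform float light_near;   // light projection near plane (placeholder)
uniform float light_far;    // light projection far plane (placeholder)

// fs_z is the light's EYE-SPACE Z on the chosen cube face (a negative value).
float window_space_depth(float fs_z)
{
    // Expanding clip = light_projection_matrix * vec4(0.0, 0.0, fs_z, 1.0):
    //   clip.z = -(light_far + light_near) / (light_far - light_near) * fs_z
    //            - 2.0 * light_far * light_near / (light_far - light_near)
    //   clip.w = -fs_z
    float ndc_z = (light_far + light_near) / (light_far - light_near)
                + (2.0 * light_far * light_near) / ((light_far - light_near) * fs_z);
    return ndc_z * 0.5 + 0.5;   // NDC (-1..1) to WINDOW-SPACE (0..1)
}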

Thanks for the explanations. I agree that in general those lines of code are not equivalent (for example, light_view_matrix may contain some rotation).
But in this particular case the light_view_matrix contains only a translation:


glLoadIdentity();
glTranslatef(-light_position_ws[0], -light_position_ws[1], -light_position_ws[2]);
glGetFloatv(GL_MODELVIEW_MATRIX, light_view_matrix);

The light_view_matrix looks like this:


1		0		0		0
0		1		0		0
0		0		1		0
-light_posX	-light_posY	-light_posZ	1

So those lines of code give me the same result when I run the program. If I’m still wrong, please give me an example.
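
Here’s a quick numeric check (just my own sketch, not code from the example program) showing that with a translation-only light_view_matrix the two expressions agree:


#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    glm::vec3 light_pos_ws(2.0f, 3.0f, -5.0f);
    glm::vec4 frag_pos_ws(1.0f, 7.0f, 4.0f, 1.0f);

    // Translation-only light view matrix, like the glTranslatef call above.
    glm::mat4 light_view = glm::translate(glm::mat4(1.0f), -light_pos_ws);

    glm::vec4 a = light_view * frag_pos_ws;               // "light eye-space" position
    glm::vec3 b = glm::vec3(frag_pos_ws) - light_pos_ws;  // world-space light-to-fragment vector

    std::printf("a = (%g, %g, %g)\n", a.x, a.y, a.z);     // prints (-1, 4, 9)
    std::printf("b = (%g, %g, %g)\n", b.x, b.y, b.z);     // prints (-1, 4, 9)
    return 0;
}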