CSM / PSSM - Depth comparison

I’m still a bit unsure about the implementation of cascaded shadow mapping.
This is what it looks like at the moment:

At the top left are the 4 cascades; all of them encompass the same area for testing purposes.
This is without GL_COMPARE_REF_TO_TEXTURE, so all I’m doing is a depth lookup into the shadow map and multiplying the result by the diffuse fragment color (hence the different shadow intensities).

If I activate the comparison by setting the compare mode to GL_COMPARE_REF_TO_TEXTURE, this is the result:

This works, however, as you can see in the screenshot, there are several shadows that obviously don’t belong there.

Here are my shaders:

Vertex Shader:


#version 330 core

layout(location = 0) in vec4 vertPos;
layout(location = 1) in vec2 vertexUV;

layout(std140) uniform ViewProjection
{
	mat4 M;
	mat4 V;
	mat4 P;
	mat4 MVP;
};

out vec4 Position_worldspace;
out vec4 Position_cameraspace;

out vec2 UV;

void main()
{
	gl_Position = MVP *vertPos;
	Position_worldspace = M *vertPos;
	Position_cameraspace = V *M *vertPos;
	
	UV = vertexUV;
}

Fragment Shader:


#version 330 core

layout(std140) uniform CSM
{
	vec4 csmFard;
	mat4 csmVP[4];
	int numCascades;
};

uniform sampler2D diffuseMap;
uniform sampler2DArrayShadow csmTextureArray;

in vec4 Position_worldspace;
in vec4 Position_cameraspace;

in vec2 UV;

out vec4 color;

float GetShadowTerm(sampler2DArrayShadow shadowMap)
{
	int index = numCascades -1;
	mat4 vp;
	for(int i=0;i<numCascades;i++)
	{
		if(gl_FragCoord.z < csmFard[i])
		{
			vp = csmVP[i];
			index = i;
			break;
		}
	}
	vec4 shadowCoord = vp *Position_worldspace;
	
	shadowCoord.w = shadowCoord.z;
	shadowCoord.z = float(index);
	shadowCoord.x = shadowCoord.x *0.5f +0.5f;
	shadowCoord.y = shadowCoord.y *0.5f +0.5f;
	return shadow2DArray(shadowMap,shadowCoord).x;
}

void main()
{
	color = texture2D(diffuseMap,UV).rgba;
	color.rgb *= GetShadowTerm(shadowMap);
}

This is essentially the same implementation as described in this document.
The only noteworthy difference, as far as I can tell, is in their shadow-map lookup function:

float shadowCoef()
{
	int index = 3;
	// find the appropriate depth map to look up in
	// based on the depth of this fragment
	if(gl_FragCoord.z < far_d.x)
		index = 0;
	else if(gl_FragCoord.z < far_d.y)
		index = 1;
	else if(gl_FragCoord.z < far_d.z)
		index = 2;

	// transform this fragment's position from view space to
	// scaled light clip space such that the xy coordinates
	// lie in [0;1]. Note that there is no need to divide by w
	// for orthogonal light sources
	vec4 shadow_coord = gl_TextureMatrix[index]*vPos;
	// set the current depth to compare with
	shadow_coord.w = shadow_coord.z;

	// tell glsl in which layer to do the look up
	shadow_coord.z = float(index);

	// let the hardware do the comparison for us
	return shadow2DArray(stex, shadow_coord).x;
}

More specifically, this line in particular:

vec4 shadow_coord = gl_TextureMatrix[index]*vPos;

They’re using the view-space position, whereas I’m using the world-space position. In my case it only looks “right” with the world-space position; I can only guess that means my shadow matrices are incorrect?
The projection matrix for all cascades is currently calculated like this:


glm::vec3 min(-1024.f,-512.f,-1024.f); // Area in which all shadow casters are located
glm::vec3 max(1024.f,512.f,1024.f);
glm::mat4 matProj = glm::ortho(min.z,max.z,min.x,max.x,-max.y,-min.y);

As for the view matrix:


glm::vec3 pos = glm::vec3(176.f,432.f,-390.f);
glm::vec3 dir = glm::vec3(0.f,-1.f,0.f);
glm::mat4 matView = glm::lookAt(
	pos,
	pos +dir,
	glm::vec3(1.f,0.f,0.f)
);

‘pos’ is the origin of the light (which shouldn’t matter(?), since we’re using an orthographic projection), and ‘dir’ is its direction (straight down).

What am I missing?

[QUOTE=Silverlan;1263689]If I activate the comparison by setting the compare mode to GL_COMPARE_REF_TO_TEXTURE, this is the result:

This works, however, as you can see in the screenshot, there are several shadows that obviously don’t belong there.[/QUOTE]

Why do you say that? Since your walls are very thin, if the light source is directly above, the result looks like it could be correct to me.

[QUOTE=Silverlan;1263689]They’re using the view-space position, whereas I’m using the world-space position. In my case it only looks “right” with the world-space position; I can only guess that means my shadow matrices are incorrect?[/QUOTE]

There’s no single golden way to build a shadow coordinate transform matrix. A matrix is just a transform from one coordinate space to another. You can build a transform (matrix) from world-space to the light’s clip-space, and then multiply world-space positions by it. Or you can build a transform from camera eye-space to the light’s clip-space, and then multiply camera eye-space positions by it. Just be consistent.

The reason that world-space isn’t commonly used as the source coordinate space for shadow coordinate transforms is that quite often the shader doesn’t do any operations in world space at all! Sometimes it can’t; for instance, if world-space is too big to represent with single-precision floats to the desired spatial accuracy. So often what you do will be to use your MODELVIEW transform to get the positions fed into the shader into eye-space, and then your shadow transform will take it from there into light’s clip-space.
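
For illustration, a minimal GLM sketch of how such a combined transform could be built (the function and parameter names are hypothetical, not from your code):


#include <glm/glm.hpp>

// Hypothetical sketch: build a matrix that takes camera eye-space positions
// straight into the light's clip space, so the fragment shader never needs a
// world-space position at all.
glm::mat4 BuildEyeToLightClip(const glm::mat4 &cameraView, // world -> camera eye-space
	const glm::mat4 &lightView,  // world -> light eye-space
	const glm::mat4 &lightProj)  // light eye-space -> light clip-space
{
	// Undo the camera view (eye-space -> world), then go world -> light clip-space.
	return lightProj * lightView * glm::inverse(cameraView);
}

The shader would then multiply Position_cameraspace by this matrix instead of multiplying Position_worldspace by the light’s view-projection; the resulting shadow coordinate is the same, it’s just reached from a different source space.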

The walls have the same width as the small block near the center of the image.
If I move the ‘origin’ of the light source upwards, the artifacts become even more noticeable:

I still get the same effect if I increase the size of one of the blocks:
http://puu.sh/es4z7/b23f755655.jpg

In the above case the bounds of the projection matrix didn’t quite encompass all objects, but even if I increase the bounds on the y-axis to encompass all of them, some of the artifacts stay:
http://puu.sh/es4V9/2776317fb8.jpg

What could be the cause of that?

[QUOTE=Dark Photon;1263692]There’s no single golden way to build a shadow coordinate transform matrix. A matrix is just a transform from one coordinate space to another. You can build a transform (matrix) from world-space to the light’s clip-space, and then multiply world-space positions by it. Or you can build a transform from camera eye-space to the light’s clip-space, and then multiply camera eye-space positions by it. Just be consistent.

The reason that world-space isn’t commonly used as the source coordinate space for shadow coordinate transforms is that quite often the shader doesn’t do any operations in world space at all! Sometimes it can’t; for instance, if world-space is too big to represent with single-precision floats to the desired spatial accuracy. So often what you do will be to use your MODELVIEW transform to get the positions fed into the shader into eye-space, and then your shadow transform will take it from there into light’s clip-space.[/QUOTE]
I’ve always wondered about that. Thanks, that makes a lot more sense now.

Oh, I think I understand what you’re talking about now. You didn’t say what you meant by “there are several shadows that obviously don’t belong there”. Do you mean the self-shadows on the lower-half of the objects? If you change the light source direction, does the artifact go away?

If the light source is truly overhead in this case, typically you wouldn’t see this shadowing artifact very prominently. The object is shadowing itself slightly due to the light source not being straight up, the box sides not being straight up, floating point imprecision, or something. The reason this wouldn’t be very prominent is diffuse has an N*L term, which in this case would be ~cos(90) == 0. So when you shadow the diffuse component, you’re attenuating a diffuse of 0, so it doesn’t really make any difference. Specular is generally clamped or attenuated in this case too.
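
As a minimal GLSL sketch of that point (names like ‘N’, ‘L’, ‘albedo’, ‘lightColor’ and ‘shadowTerm’ are placeholders, not from your shader):


// When N and L are roughly perpendicular, NdotL is ~0, so the diffuse
// contribution is already ~0 and attenuating it by the shadow term changes
// almost nothing visually.
float NdotL = max(dot(normalize(N), normalize(L)), 0.0);
vec3 diffuse = lightColor * albedo * NdotL;
diffuse *= shadowTerm;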

[QUOTE=Dark Photon;1263727]Oh, I think I understand what you’re talking about now. You didn’t say what you meant by “there are several shadows that obviously don’t belong there”. Do you mean the self-shadows on the lower-half of the objects? If you change the light source direction, does the artifact go away?

If the light source is truly overhead in this case, typically you wouldn’t see this shadowing artifact very prominently. The object is shadowing itself slightly due to the light source not being straight up, the box sides not being straight up, floating point imprecision, or something. The reason this wouldn’t be very prominent is diffuse has an N*L term, which in this case would be ~cos(90) == 0. So when you shadow the diffuse component, you’re attenuating a diffuse of 0, so it doesn’t really make any difference. Specular is generally clamped or attenuated in this case too.[/QUOTE]
Thank you, that makes sense, I’ll have to do some more testing on that.
However, that doesn’t explain this behavior:
[video=youtube_share;1sghFGdwZVQ]http://youtu.be/1sghFGdwZVQ[/video]

The bounds of the projection matrix extend beyond the floor, so the shadow shouldn’t disappear the way it does at about 0:05.
The shadow on the block only appears if I move a fair bit away from it.
Both of these still look like a depth comparison problem to me?

What I see in the video is that your shadow frusta are not encompassing all potential casters. Ensure that your shadow caster cull is going back far enough to catch them all. The shadow frusta are what determine your shadow projection transforms. Make sure that your shadow frustum projections encompass the casters. Later on you can experiment with things like capping them to the bounds of the view frustum.

Also, how your eyepoint model (a person?) is casting shadows on the ground but then not on the block is interesting. Possibly a shadow split difference. You need to think about how you’re going to handle this case (where objects cast shadows into multiple splits). Are you casting them into all splits now? It would suggest that perhaps that eyepoint model is being cast into some shadow splits but not others. For starters, I suggest you cast all objects that potentially cast shadows into a split, even if it means they’re cast into multiple splits.
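
As a rough sketch of the per-split caster test (the function name is hypothetical, and it assumes you track a world-space AABB per object and use an orthographic light projection, as with glm::ortho):


#include <glm/glm.hpp>

// Hypothetical helper: returns true if a world-space AABB touches the clip-space
// box of one split, i.e. the caster should be rendered into that split's shadow
// map. An object may pass this test for several splits; render it into each one.
bool CasterTouchesSplit(const glm::vec3 &aabbMin, const glm::vec3 &aabbMax,
	const glm::mat4 &splitLightVP)
{
	glm::vec3 mn(1e30f), mx(-1e30f);
	for(int i = 0; i < 8; ++i)
	{
		glm::vec3 corner(
			(i & 1) ? aabbMax.x : aabbMin.x,
			(i & 2) ? aabbMax.y : aabbMin.y,
			(i & 4) ? aabbMax.z : aabbMin.z);
		glm::vec4 p = splitLightVP * glm::vec4(corner, 1.f); // orthographic: w stays 1
		mn = glm::min(mn, glm::vec3(p));
		mx = glm::max(mx, glm::vec3(p));
	}
	// Overlap test against the split's clip box ([-1,1] on all axes for glm::ortho).
	return mx.x >= -1.f && mn.x <= 1.f &&
	       mx.y >= -1.f && mn.y <= 1.f &&
	       mx.z >= -1.f && mn.z <= 1.f;
}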

Sorry, the video didn’t show it very clearly. Here is the same scene again with some debugging information:
[video=youtube_share;jvXxLbG3mOs]http://youtu.be/jvXxLbG3mOs[/video]

At the top are the 4 cascades. All shadow frusta cover the same arbitrary area (again, just for testing purposes).
Additionally, each cascade has a different coloration to make the transitions visible.
As you can see, all shadow casters within the shadow frusta are rendered correctly into the shadow maps throughout the video, even when/after the shadow of the player character disappears. (Please ignore the medium-sized block popping in and out during the video; that’s the culling algorithm kicking in.)
The transition between the cascades around the small block happens before the shadow disappears, so I think we can rule that out.

Ok, that video makes it look like there’s a problem with your shadow map or your shadow map math, because sometimes your character is casting shadows on an object between it and the light, and then it “pops” and it’s not.

Started picking through your frag shader to try and help you out, but it’s apparent that that’s not the shader you’re using at all, as that one won’t compile (e.g. this line: "color.rgb *= GetShadowTerm(shadowMap);"). There’s no “shadowMap” in the global scope, but there is a “csmTextureArray”.

Anyway, doing only a quick scan:


	shadowCoord.w = shadowCoord.z;
	shadowCoord.z = float(index);
	shadowCoord.x = shadowCoord.x *0.5f +0.5f;
	shadowCoord.y = shadowCoord.y *0.5f +0.5f;


it appears that you forgot to shift-and-scale your Z coordinate from -1…1 to 0…1 for the depth comparison as well.

It’s possible that this “off by 2X” factor is what’s causing your self-shadow artifact halfway down the sides of your objects.
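
In other words, something along these lines (a sketch of your GetShadowTerm() lookup with the extra remap, assuming your light projection leaves clip-space Z in -1…1):


	vec4 shadowCoord = vp * Position_worldspace;
	shadowCoord.w = shadowCoord.z * 0.5 + 0.5; // reference depth, remapped from -1..1 to 0..1
	shadowCoord.z = float(index);              // layer of the array texture
	shadowCoord.xy = shadowCoord.xy * 0.5 + 0.5;
	return shadow2DArray(shadowMap, shadowCoord).x;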

[QUOTE=Dark Photon;1263768]
Started picking through your frag shader to try and help you out, but it’s apparent that that’s not the shader you’re using at all, as that one won’t compile (e.g. this line: "color.rgb *= GetShadowTerm(shadowMap);"). There’s no “shadowMap” in the global scope, but there is a “csmTextureArray”.[/QUOTE]

I trimmed the shader down to the essential parts for rendering the shadows; the actual shader has a lot more to it that doesn’t have anything to do with the shadows. I figured it would be easier to skim through it that way.

[QUOTE=Dark Photon;1263768]
Anyway, doing only a quick scan:


	shadowCoord.w = shadowCoord.z;
	shadowCoord.z = float(index);
	shadowCoord.x = shadowCoord.x *0.5f +0.5f;
	shadowCoord.y = shadowCoord.y *0.5f +0.5f;


it appears that you forgot to shift-and-scale your Z coordinate from -1…1 to 0…1 for the depth comparison as well.[/QUOTE]
Thank you! I didn’t know it was necessary to do that for the depth as well, that did the trick!

However, some new problems arose when I tried using the minimal enclosing sphere approach for the frustum projection matrices:

Everything up until 00:12 is using a static bias of 0.001. It looks alright, but can lead to unpredictable artifacts if I change the light direction.
At 00:12, I’ve switched to a slope bias, which I’m using for spot-light-sources as well:

float bias = 0.001 *tan(acos(cosTheta));

‘cosTheta’ being the dot product between the vertex normal and the light direction.

In the video the light direction is (0,-1,0) and the vertex normal is (0,1,0), so the bias is simply 0.
This leads to extreme flickering, but only as long as the light’s direction is straight down. I could easily fix this by not allowing the bias to go below a certain threshold, but is that the ‘proper’ way to do it?
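
Just to illustrate what I mean by a threshold (a sketch; the clamp bounds here are arbitrary tuning values):


// Slope-scaled bias with a floor (so it never reaches 0 when light and normal
// are aligned) and a ceiling (so it doesn't explode at grazing angles).
float bias = clamp(0.001 * tan(acos(cosTheta)), 0.0005, 0.01);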

Additionally, at the end of the video (starting at 00:30), you can see jumps in the quality of the shadow during the transition between cascades. I’m using PCF for soft shadows, what can I do to make these transitions less noticeable?

I just stumbled upon another odd problem when using shadow-maps for multiple different light sources.
Using this (fragment) shader:


uniform sampler2D diffuseMap;
in vec2 UV;
in vec4 Position_worldspace;
in vec4 Position_cameraspace;
in vec3 Normal_modelspace;

layout(std140) uniform ViewProjection
{
	mat4 M;
	mat4 V;
	mat4 P;
	mat4 MVP;
};

// Light Data
const int MAX_LIGHTS = 8;
uniform int numLights;
layout (std140) uniform LightSourceBlock
{
	mat4 depthMVP;
	int shadowMapID;
	int type;
	vec3 position;
	vec4 color;
	float dist;
	vec3 direction;
	
	// Spotlights
	float cutoffOuter;
	float cutoffInner;
	float attenuation;
} LightSources[MAX_LIGHTS];
//

// Shadow Data
// CSM
layout(std140) uniform CSM
{
	vec4 csmFard;
	mat4 csmVP[4];
	int numCascades;
};
uniform sampler2DArrayShadow csmTextureArray;
//

// Spot- and Point-lights
in vec4 shadowCoord[MAX_LIGHTS];
uniform sampler2DShadow shadowMaps[1]; // shadowMaps[MAX_LIGHTS]
//
//

out vec4 color;
void main()
{
	color = texture2D(diffuseMap,UV).rgba;
	
	vec3 N = -normalize(Normal_modelspace);	
	// Spotlight
	int lightIdx = 0;
	vec3 posFromWorldSpace = LightSources[lightIdx].position -Position_worldspace.xyz;
	vec3 dirToLight = normalize(posFromWorldSpace);
	float lambertTerm = max(dot(N,-dirToLight),0.0);
	if(lambertTerm > 0.0)
	{
		vec3 lightDir = normalize(LightSources[lightIdx].direction);
		float angle = dot(normalize(lightDir),-dirToLight);
		float acosAngle = acos(max(angle,0));
		if(acosAngle <= LightSources[lightIdx].cutoffOuter)
		{
			vec4 v = vec4(shadowCoord[lightIdx].xyz,shadowCoord[lightIdx].w +0.01);
			float s = shadow2DProj(shadowMaps[LightSources[lightIdx].shadowMapID],v).w;
			color.rgb *= s;
		}
	}
	//
	
	// Directional Light
	lightIdx = 1;
	vec3 l = normalize(LightSources[lightIdx].direction);
	float cosTheta = max(dot(N,l),0.0);
	float bias = max(0.001 *tan(acos(cosTheta)),0.00001);

	int index = numCascades -1;
	mat4 vp;
	for(int i=0;i<numCascades;i++)
	{
		if(gl_FragCoord.z < csmFard[i])
		{
			vp = csmVP[i];
			index = i;
			break;
		}
	}
	vec4 shadowCoord = vp *Position_worldspace;
	shadowCoord.w = shadowCoord.z *0.5f +0.5f -bias;
	shadowCoord.z = float(index);
	shadowCoord.x = shadowCoord.x *0.5f +0.5f;
	shadowCoord.y = shadowCoord.y *0.5f +0.5f;
	
	float s = shadow2DArray(csmTextureArray,shadowCoord).x;
	color.rgb *= s;
	//
}

I get the expected result:
http://puu.sh/eGBHp/2c9aa69285.jpg

The scene has one spot-light (Shadows on the wall, Index 0 in ‘LightSources’) and one directional light (Shadows on the ground, Index 1 in ‘LightSources’).
However, if I increase the ‘shadowMaps’ array by any amount (e.g. ‘uniform sampler2DShadow shadowMaps[2]’), this happens:
http://puu.sh/eGBJp/e219fd9fde.jpg

I haven’t changed anything but the size of the array. The spot-light shadows are still rendered properly, but the CSM-shadows suddenly seem to skip their depth comparison.
This happens even if I take the spot-light out of the equation (But still leaving the size of the texture-array at ‘2’):
http://puu.sh/eGBLb/4268cc9cec.jpg

The texture unit indices for ‘shadowMaps’ start at 20, the texture unit index for ‘csmTextureArray’ is 36, they definitely don’t overlap.
What could be causing this?

[QUOTE=Silverlan;1263815]Everything up until 00:12 is using a static bias of 0.001. It looks alright, but can lead to unpredictable artifacts if I change the light direction.
At 00:12, I’ve switched to a slope bias, which I’m using for spot-light-sources as well:

float bias = 0.001 *tan(acos(cosTheta));

‘cosTheta’ being the dot product between the vertex normal and the light direction.[/QUOTE]

So… Let’s suppose the light and normal are aligned. This will result in 0 bias, which may help explain why you seem to be getting really bad self-shadowing artifacts (I assume you are casting light-space front-faces into the shadow map).

You might look into some other techniques, such as Normal Offset Shadows (just websearch for it). In my experience, it seems to perform better.
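
The core idea, as a rough GLSL sketch (not the exact formulation from any particular paper; ‘normalOffsetScale’ is a tuning constant you’d expose, ‘N’ is the surface normal, ‘L’ the direction toward the light, ‘lightVP’ the light’s view-projection, and the slope factor is just a common approximation):


// Normal offset shadows (sketch): nudge the position used for the shadow
// lookup along the surface normal before projecting into light space, with
// a larger offset the more the surface faces away from the light.
float slopeScale = clamp(1.0 - dot(N, L), 0.0, 1.0);
vec3 offsetPos = Position_worldspace.xyz + N * normalOffsetScale * slopeScale;
vec4 shadowCoord = lightVP * vec4(offsetPos, 1.0);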

And before you trip over some “incomplete” info on the net that says casting back-faces is the solution, take a look at this.

[QUOTE=Dark Photon;1263841]So… Let’s suppose the light and normal are aligned. This will result in 0 bias, which may help explain why you seem to be getting really bad self-shadowing artifacts (I assume you are casting light-space front-faces into the shadow map).

You might look into some other techniques, such as Normal Offset Shadows (just websearch for it). In my experience, it seems to perform better.[/QUOTE]
Thanks, I will try that as soon as I’ve solved the problem I mentioned in my other post.
I narrowed the problem down to the spotlight shadow map. If I bind it to my shader in any way, my cascaded shadow maps aren’t rendered correctly anymore.
I’ve recorded a video of it with some more explanation (Annotations have to be turned on):

This doesn’t make any sense to me. The spotlight shadow map is a valid 2D depth texture, nothing special about it.
The cascaded shadow maps have no relation to the spotlight shadow map at all.
What’s going on here?

After some more testing, I found the issue. Here are the declarations for all of the shadow maps in my fragment shader:


uniform sampler2DArrayShadow csmTextureArray;
uniform sampler2DShadow shadowMaps[MAX_LIGHTS]; // MAX_LIGHTS is set to 8 in both the shader and the engine

I have one directional light and one spot-light.
The 3D depth texture of the directional light is bound to ‘csmTextureArray’.
The 2D depth texture of the spot-light is bound to ‘shadowMaps[0]’.

This causes the issue shown in my previous post.
However, if I bind the 2D depth texture of the spot-light to ALL samplers in ‘shadowMaps’ (from 0 to MAX_LIGHTS), the problem doesn’t occur anymore.
So, in essence, it seems that ALL of the samplers have to be bound to a valid texture, even if the sampler isn’t being used in the shader during the render pass.

This is still somewhat of a problem, however. If I don’t have any spot-lights (and therefore no 2D depth textures), I don’t have anything I can bind to the samplers in the array. This means I would have to create a dummy depth texture which exists at all times. I would also have to create a dummy 3D texture and a dummy cubemap texture (to account for the point-light sampler array later on).

I’d rather avoid that if possible, is there anything else I could try?

I think you just need to take care that the unused samplers aren’t bound to a texture unit used in the shader. To be more precise, it is an error to have samplers of different types pointing to the same texture unit; glValidateProgram will fail in such cases.
Take a look at the wiki.
I didn’t find another solution besides binding unused samplers to unused texture units myself, although I never ran into problems from simply not assigning them, aside from the ValidateProgram failure.

Edit: There doesn’t need to be any texture bound to the units for ValidateProgram to succeed.
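
For illustration, a minimal sketch of that workaround (the function and parameter names are hypothetical; the only real requirement is that no two samplers of different types end up pointing at the same texture unit):


#include <cstdio>
// GL headers / loader assumed to be included elsewhere.

// Point every shadowMaps[i] that won't be used this pass at its own spare
// texture unit. Nothing has to be bound to those units for glValidateProgram
// to succeed; it just keeps differently-typed samplers off the same unit.
void AssignSpareUnits(GLuint program, int firstUnusedIndex, int maxLights, GLint firstSpareUnit)
{
	glUseProgram(program);
	for(int i = firstUnusedIndex; i < maxLights; ++i)
	{
		char name[64];
		std::snprintf(name, sizeof(name), "shadowMaps[%d]", i);
		GLint loc = glGetUniformLocation(program, name);
		if(loc != -1)
			glUniform1i(loc, firstSpareUnit + i);
	}
	glValidateProgram(program); // optional sanity check
	GLint ok = GL_FALSE;
	glGetProgramiv(program, GL_VALIDATE_STATUS, &ok);
}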

[QUOTE=hlewin;1264184]I think you just need to take care that the unused samplers aren’t bound to a texture unit used in the shader. To be more precise, it is an error to have samplers of different types pointing to the same texture unit; glValidateProgram will fail in such cases.
Take a look at the wiki.
I didn’t find another solution besides binding unused samplers to unused texture units myself, although I never ran into problems from simply not assigning them, aside from the ValidateProgram failure.

Edit: There doesn’t need to be any texture bound to the units for ValidateProgram to succeed.[/QUOTE]
Thanks, but they’re all bound to different units already.
I didn’t know about ‘glValidateProgram’, but it returns “Program validation succeeded!”.