GLSL ShadowMap

Hey guys.
I’m trying to implement shadow mapping and I got stuck at a point.
I use deferred lighting, so this might be a little tricky.
I have my shadow map rendered as it should be, and I have it available in my shader. I also have the projection matrix (lightmatrix * viewmatrix).
Here is the fragment code of my spot light (only one shadow map):

uniform sampler2D depthTex;
uniform sampler2D normalTex;
uniform sampler2D colorTex;
uniform sampler2DShadow ShadowMap;

uniform float near;
uniform float far;
uniform vec2 bufferSize;
uniform vec3 lightPos;
uniform vec3 lightColor;
uniform float lightRange;
uniform float SpotCutoff;
uniform vec3 SpotDir;

uniform mat4 ProjectionMatrix;
uniform mat4 LightMatrix;

float DepthToZPosition(in float depth) 
{
	return near / (far - depth * (far - near)) * far;
}

vec3 lighting(in vec3 SColor,in vec3 SPos,in float SRadius,in vec3 p,in vec3 n,in vec3 MDiff,in vec3 MSpec,in float MShi)
{
	
	vec3 l = SPos-p;
	
	vec3 sd = normalize(SpotDir);
	float spotEffect = dot(normalize(l),-sd);
	if(spotEffect > cos(radians(45.0)))	// cos() expects radians, so convert the 45-degree cutoff
	{
		vec3 v = normalize(p);
		vec3 h = normalize(v+l);
		float att = max(0.0,1.0-length(l)/SRadius);
		l = normalize(l);
		vec3 Idiff = max(0.0,dot(n,l))*MDiff*SColor;

		vec3 Ispec =pow(max(0.0,dot(n,h)),MShi)*MSpec*SColor;
		return att*(Idiff+Ispec);
	}
	else
	{
		return vec3(0,0,0);
	}	
}

void main()
{
	vec2 texCoord = gl_FragCoord.xy/bufferSize;
	float depth = texture2D(depthTex, texCoord).x;		
	
	vec3 screencoord; 
	vec4 normal = texture2D(normalTex, texCoord);
	vec3 n = normal.xyz*2.0 -1.0;
	
	screencoord = vec3(((gl_FragCoord.x/bufferSize.x)-0.5) * 2.0,((-gl_FragCoord.y/bufferSize.y)+0.5) * 2.0 / (bufferSize.x/bufferSize.y),-DepthToZPosition( depth ));
	screencoord.x *= -screencoord.z; 
	screencoord.y *= screencoord.z;
	
	gl_FragColor = vec4(lighting(lightColor,lightPos,lightRange,screencoord,n,vec3(1,1,1),vec3(1,1,1),normal.w),1);
	gl_FragColor *= texture2D(colorTex, texCoord);
}

The problem is that I can’t figure out how to do the next step: the depth comparison.
Could anyone be so kind as to help me?

What are you stumped on? Do you mean you don’t know how to use the GPU’s built-in shadow map compare hardware to do the depth comparison? If so:

1. create a depth texture (DEPTH_COMPONENT or DEPTH_STENCIL),
2. enable depth comparisons (TEXTURE_COMPARE_MODE == COMPARE_R_TO_TEXTURE), and
3. bind this texture to a sampler*Shadow uniform in the shader.

Then texture accesses to this sampler in the shader will perform a depth comparison using special hardware in the GPU, and return the result of the comparison (instead of the depth value).

On NVidia hardware, setting the MIN/MAG filters to GL_NEAREST will result in one shadow map sample per pixel, whereas GL_LINEAR will result in multiple shadow map samples per pixel, with the results averaged.
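For reference, that setup boils down to something like this (just a sketch; shadow_tex and size are placeholders):

// Sketch: depth texture set up for hardware depth compares
glBindTexture  ( GL_TEXTURE_2D, shadow_tex );
glTexImage2D   ( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, size, size, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );   // GL_NEAREST = 1 sample, GL_LINEAR = averaged samples
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );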

I also have the projection matrix(lightmatrix*viewmatrix)

This looks fishy. You want to go from eye-space to light-space clip. To do that you need:
1. inverse eye viewing transform
2. light-space viewing transform
3. light-space projection transform

So does this mean I have to multiply the light projection matrix with the light modelview matrix and then the inverse of the camera modelview matrix?

What are you stumped on? Do you mean you don’t know how to use the GPU’s built-in shadow map compare hardware to do the depth comparison? If so:
1. create a depth texture (DEPTH_COMPONENT or DEPTH_STENCIL),
2. enable depth comparisons (TEXTURE_COMPARE_MODE == COMPARE_R_TO_TEXTURE), and
3. bind this texture to a sampler*Shadow uniform in the shader.

Well, I already created a depth texture and rendered the depth using the light projection and the light modelview matrix.
Still, I see no reason to “enable depth comparisons (TEXTURE_COMPARE_MODE == COMPARE_R_TO_TEXTURE)”.
I mean, how does this affect my shader?
I just want to know how to compare the two depths in the shader.
In the fragment shader of the spot light I have SPos, which is the position of the fragment in camera space. I imagine its Z component should be the depth. How do I calculate the depth that I have to compare it to?
I mean, I know I have to use shadow2D or something similar, but I just can’t get it. And there was a tutorial at ziggyware, but the website is now dead and I can’t even find its archive on www.archive.org

Yep, but not the light modelview, just the light viewing transform (we want world-to-light-eye, not object-to-light-eye).

And also, just to be perfectly clear, the order you listed the matrices is the reverse of the application order relative to the vector. You want matrix (operator) application order to be the following:

1. camera inverse viewing transform (M1) – takes you from camera eye to world
2. light-space viewing transform (M2) – takes you from world to light eye
3. light-space projection transform (M3) – takes you from light eye to light clip

Since OpenGL follows the column-major operator-on-the-left convention, that means in OpenGL notation you want: (M3 * M2 * M1) * v1 = v2, where v1 is a camera eye-space vector and v2 is a light clip-space vector. So the matrix you want to pass into your shader is M = M3 * M2 * M1.

But I think you were compensating for this apparent order reversal due to operator order convention in your reply, which is why I said you had it right, except for the modelview thing.
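For instance, if you happen to use a column-major math library such as GLM, building and uploading M might look roughly like this (a sketch; camView, lightView, lightProj and uniformLoc are placeholders for the transforms and uniform location described above):

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Sketch: build M = M3*M2*M1 (column-major, operator-on-the-left, like OpenGL)
glm::mat4 M1 = glm::inverse( camView );   // camera eye -> world
glm::mat4 M2 = lightView;                 // world      -> light eye
glm::mat4 M3 = lightProj;                 // light eye  -> light clip
glm::mat4 M  = M3 * M2 * M1;              // camera eye -> light clip
glUniformMatrix4fv( uniformLoc, 1, GL_FALSE, glm::value_ptr( M ) );   // no transpose needed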

Still, I see no reason to “enable depth comparisons (TEXTURE_COMPARE_MODE == COMPARE_R_TO_TEXTURE)”.
I mean, how does this affect my shader?

It determines whether you use built-in hardware on the GPU to do the depth comparison “outside” your shader (and optionally multiple lookups with filtering “outside” your shader – aka PCF), OR you have to fetch raw depth values in your shader and do the comparisons/filtering yourself.

The former is faster and sufficient if all you need is a single binary depth compare or basic PCF shadow map lookups. The reason is that (on NVidia hardware at least) there is dedicated logic on the GPU to do these depth comparisons (and filtering) if you want it.

The way it affects your shader is that if you enable depth comparisons, the result you get back from your texture lookup is the result of the depth comparison, NOT the raw depth value from the depth texture.

If you “do” want hardware depth comparisons, use a Shadow sampler (e.g. sampler2DShadow), enable depth comparisons on the texture, and use a shadow texture sampling function in your shader (if using GLSL 1.2 or earlier, else just use texture*).

If you “do not” want hardware depth comparisons, use a non-Shadow sampler (e.g. sampler2D), disable hardware depth comparisons, and use a non-shadow texture sampling function (again if GLSL 1.2 or earlier; otherwise just use texture*).

Note that you can use a depth texture in either case. And for the latter case, you can use pretty much any other texture format you want as well.
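In GLSL 1.2 terms the two paths differ only in the sampler type and the lookup used (a sketch; lightClipPos is a placeholder for your light clip-space coordinate):

// Depth compare mode ENABLED on the texture: Shadow sampler; the lookup returns
// the comparison result (0 = shadowed, 1 = lit, possibly filtered), not a depth.
uniform sampler2DShadow ShadowMap;
float lit = shadow2DProj( ShadowMap, lightClipPos ).r;

// Depth compare mode DISABLED: plain sampler; the lookup returns the raw stored
// depth, and you do the comparison yourself.
uniform sampler2D DepthMap;
float storedDepth = texture2DProj( DepthMap, lightClipPos ).r;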

I just want to know how to compare the two depths in the shader.
In the fragment shader of the spot light I have SPos, which is the position of the fragment in camera space. I imagine its Z component should be the depth. How do I calculate the depth that I have to compare it to?

Ok, so you want to do your own depth comparisons. So use a sampler2D, not a sampler2DShadow. Also, don’t enable depth comparisons on that texture. Then, when you do a texture lookup, you’ll get the light clip-space depth value associated with that position.

Now you need to get your fragment position in light clip-space in the fragment shader so you can do that texture lookup and depth comparison. There are lots of ways to do that. One is to pass in a varying from the vertex shader which is your light clip-space vertex position interpolated across the polygon. To get it, in the vertex shader, you first compute the vertex position in camera eye-space (gl_ModelViewMatrix * gl_Vertex). Then you multiply it by the “M” we discussed above which you passed in (i.e. M = M3 * M2 * M1) to transform that camera eye-space position to a light clip-space position. Then you let the GPU interpolate that position across the polygon.
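As a minimal vertex-shader sketch of that option (uniform/varying names are just placeholders; the uniform holds the M = M3 * M2 * M1 discussed above):

// Vertex shader sketch: compute the light clip-space position and let the GPU interpolate it
uniform mat4 CameraEyeToLightClip;   // M = M3*M2*M1, uploaded by the application
varying vec4 ShadowCoord;

void main()
{
	vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;   // camera eye-space position
	ShadowCoord = CameraEyeToLightClip * eyePos;    // light clip-space position
	gl_Position = gl_ProjectionMatrix * eyePos;
}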

I mean, I know I have to use shadow2D or something similar, but I just can’t get it.

No, if you truly want to manually do your own depth comparison in your fragment shader (e.g. a mydepth < shadowmapdepth test, or something else more slick like VSMs), then you don’t use shadow2D. You’d use texture2D (or more likely texture2DProj, since you’re doing a shadow map from a positional light source, which uses a perspective projection, and thus requires a perspective divide).
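For example, continuing the vertex-shader sketch above, a manual compare could look like this (ShadowCoord is the varying from that sketch; the perspective divide is done by hand here because we also need the divided z for the compare, and the small depth bias is an assumption):

// Fragment shader sketch: manual depth compare against a plain sampler2D (compare mode disabled)
uniform sampler2D ShadowMap;
varying vec4 ShadowCoord;

float shadowFactor()
{
	vec3 sc = ShadowCoord.xyz / ShadowCoord.w;            // perspective divide -> NDC [-1,1]
	sc = sc * 0.5 + 0.5;                                  // remap to the [0,1] texture/depth range
	float storedDepth = texture2D( ShadowMap, sc.xy ).r;  // raw depth from the shadow map
	return ( sc.z - 0.0005 <= storedDepth ) ? 1.0 : 0.0;  // 1 = lit, 0 = shadowed (small bias)
}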

Note that only prior to GLSL 1.3 were there separate functions for depth-compare vs. non-depth-compare shadow lookups (e.g. shadow2D vs. texture2D). In GLSL 1.3, they realized that this and other things were causing a needless explosion in the number of texture sampling function names, so they removed all the typing stuff out of the names, and both of the above mapped to a simply-named “texture” function.

So for instance shadow2DProj and texture2DProj both mapped to textureProj in GLSL 1.3.
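Side by side (same lookups, just renamed; ShadowMap/DepthMap/coord are placeholders):

// GLSL 1.2 and earlier                        // GLSL 1.3 and later
shadow2DProj ( ShadowMap, coord );             // textureProj( ShadowMap, coord );   (sampler2DShadow)
texture2DProj( DepthMap,  coord );             // textureProj( DepthMap,  coord );   (sampler2D)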

And there was a tutorial at ziggyware, but the website is now dead and I can’t even find its archive on www.archive.org

Try these. They aren’t perfect, but they’re a good place to start, and they do use GLSL:

The latest Orange Book also has some good stuff too.

Hey, thanks a lot for your answer.
I think I’m starting to get the hang of it, but there is still something weird.
First of all I tried what you said with the three-matrix version (M = M3 * M2 * M1) and here is what I got:
http://img42.imageshack.us/img42/8880/3matrix.jpg

I used this code to calculate the matrix:


TMatrix projm=light->GetProjMatrix();
projm->Multiply(light->GetMatrix());
projm->Multiply(CurrentCamera->GetMatrix()->Inverse());

Here is my frag shader:


float far = gl_LightSource[0].linearAttenuation;
float near = 0.1;
float a = far / ( far - near );
float b = far * near / ( near - far );
		
float shadowMapResolution = gl_LightSource[0].quadraticAttenuation;
float psx = 1.0 / (shadowMapResolution * 4.0);
float psy = 1.0 / (shadowMapResolution * 2.0);
		
smcoord=(vec4(p-SPos,1)*ProjectionMatrix).xyz;
smcoord.x /= -smcoord.z/0.5;
smcoord.y /= -smcoord.z/0.5;
smcoord.x += 0.5;
smcoord.y += 0.5;
smcoord.z = a + b / smcoord.z;
		
float shadowcolor = shadow2D(ShadowMap,smcoord).x;

I took this from Leadwerks.

The second try is this:
http://img689.imageshack.us/img689/7653/2matrix.jpg
Here is the code I used to calculate the proj matrix


TMatrix projm =CurrentCamera->GetMatrix()->Inverse();
projm->Multiply(light->GetMatrix());

The frag shader remained the same.
As you can see it’s almost right, BUT it has some tiling of the shadow somehow. I guess that’s because I didn’t use the light projection matrix, though when I try to use it (no matter how I multiply) it ends up looking like the first screenshot.

I know you said that the light clip-space vector should be like this “(M3 * M2 * M1) * v1 = v2” but this is the only way I got some results.
You seem like a nice guy for trying to help me here. If you want I can send you the full source code for this and even pay you for your work.
I’ll also send you a PM :slight_smile:

Strange. There definitely appears to be some coordinate transform error going on here. Further, it appears your shadow map is repeating when you’re rendering with it. On the latter, assuming that the lower-right minified view is a rendering of the contents of your shadow map, my assumption is that your shadow map is probably correct, that you’ve left the wrap modes on the shadow map texture at GL_REPEAT, and that your eye-space-to-light-clip-space coordinate transform is wrong. To make the wrap modes more physically correct, you want to change them to something like GL_CLAMP_TO_EDGE. In other words:

glTexParameteri( gl_target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( gl_target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );

That’ll get rid of the repeating copies when your coordinates are outside the shadow map region. In the real world, shadow maps don’t repeat, so GL_REPEAT for shadow maps is nonsensical. GL_CLAMP_TO_EDGE is closer to reality.

(Note: this paragraph is a nuance to leave until later once you fix your coordinate transforms:) However, note that if you have occluders that are straddling the edge of your shadow map and your lighting is such that you can see those edge shadows cast by the edge texels of the shadow map (e.g. low SPOT_EXPONENT, so you can see spotlight diffuse/specular at the edge of your light cone/shadow map, and you tight-fit your shadow map to the light cone), then CLAMP_TO_EDGE may not be what you need because the edge pixels of the shadow map (which we’ll assume could contain casters) will effectively be stretched outside the bounds of the shadow map in shadow map space X & Y. So in practice you need to “clip-out” out-of-frustum coordinates (outside [0,1]) in your shader and treat them as unshadowed, OR bloat your shadow map frustum so that it’s a little bigger than the light cone, OR (simpler) just use CLAMP_TO_BORDER and set the border color to 1,1,1,1 (only the first component is relevant for depth textures). This should ensure nothing outside the shadow map in light-space XY is shadowed (1.0 being the light-space far-clip value for normalized textures like depth textures).
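If you go the CLAMP_TO_BORDER route, that’s just (same gl_target placeholder as above):

glTexParameteri ( gl_target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER );
glTexParameteri ( gl_target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER );
GLfloat border[4] = { 1.0f, 1.0f, 1.0f, 1.0f };   // only the first component matters for depth textures
glTexParameterfv( gl_target, GL_TEXTURE_BORDER_COLOR, border );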

I used this code to calculate the matrix:


TMatrix projm=light->GetProjMatrix();
projm->Multiply(light->GetMatrix());
projm->Multiply(CurrentCamera->GetMatrix()->Inverse());

Ok, you need to find out whether this TMatrix uses row-major or column-major storage order. If row-major (probably, since this is C/C++), then you’ll want to flop this multiply order. Reason is that row-major operator-on-the-right is equivalent to OpenGL’s column-major operator-on-the-left (what OpenGL uses). More here

The second try is this:
http://img689.imageshack.us/img689/7653/2matrix.jpg
Here is the code I used to calculate the proj matrix


TMatrix projm =CurrentCamera->GetMatrix()->Inverse();
projm->Multiply(light->GetMatrix());

This is partially trying the order flip I was alluding to. But where did your light projection matrix go? Need to tack that on the end as well. That is, light->GetProjMatrix().

I know you said that the light clip-space vector should be like this “(M3 * M2 * M1) * v1 = v2” but this is the only way I got some results.

This strongly suggests your TMatrix class is using row-major storage order, which is what you’d typically use in C/C++ anyway. This means you need to use the row-major operator-on-the-right convention (v1 * M1 * M2 * M3 = v2) when using it to build matrices that are compatible with OpenGL without requiring a transpose. This is what you were trying to do in that most recent code snippet. Just get that light projection matrix tacked onto the end.
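In other words, in your TMatrix terms, something like this (assuming, per the above, that Multiply appends the argument on the right in your storage convention):

TMatrix projm = CurrentCamera->GetMatrix()->Inverse();   // M1: camera eye -> world
projm->Multiply( light->GetMatrix() );                   // M2: world -> light eye
projm->Multiply( light->GetProjMatrix() );               // M3: light eye -> light clip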

Here is my frag shader:


...shadow2D(ShadowMap,smcoord).x;

I haven’t traced your math, but it surprises me 1) to see all this manual math in here – a single matrix multiply (M) applied to your camera eye-space position should be all that’s required – and 2) that you’re not using shadow2DProj (you “can” do your own perspective divide, but why?). As to 1), I’d get it working the simple way first. Then get super-slick with all this complication if you think there’s a benefit. The simple way doesn’t care about screen resolution, shadow map resolution, gl_FragCoord, or any of this obtuse math. It’s just: matrix transform, shadow map lookup, done. Really, it’s that simple! :cool:
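Concretely, the simple path is roughly this (a sketch; it assumes the usual 0.5 scale/bias matrix is folded into the matrix you upload, and that depth compares are enabled on the texture):

// "Simple way" sketch: one matrix multiply, one hardware-compared lookup
vec4  smcoord = ProjectionMatrix * vec4( screencoord, 1.0 );   // camera eye-space -> (biased) light clip-space
float lit     = shadow2DProj( ShadowMap, smcoord ).r;          // 0 = shadowed, 1 = lit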

If you want I can send you the full source code for this and even pay you for your work.
I’ll also send you a PM :slight_smile:

Well, thanks but I think you’ve just about got it. Would hate to deprive you of the “ah hah!” moment, which I think will happen very soon. :wink: If not, I can look at your source.

Hey :slight_smile:
I’m back. It still doesn’t work, but I tried to clean up my code a bit so you can better understand what I’m doing. :slight_smile:

First of all, here is some info about TMatrix.
The multiplication code looks like this:


void CMatrix::Multiply(TMatrix mat)
{
	// Computes this = this * mat (column-major storage, as used by glMultMatrixf).
	// Only the top three rows of the result are written; the bottom row is left
	// untouched, i.e. it is assumed to stay (0,0,0,1) as for affine matrices.
	float m00 = grid[0][0]*mat->grid[0][0] + grid[1][0]*mat->grid[0][1] + grid[2][0]*mat->grid[0][2] + grid[3][0]*mat->grid[0][3];
	float m01 = grid[0][1]*mat->grid[0][0] + grid[1][1]*mat->grid[0][1] + grid[2][1]*mat->grid[0][2] + grid[3][1]*mat->grid[0][3];
	float m02 = grid[0][2]*mat->grid[0][0] + grid[1][2]*mat->grid[0][1] + grid[2][2]*mat->grid[0][2] + grid[3][2]*mat->grid[0][3];
	float m10 = grid[0][0]*mat->grid[1][0] + grid[1][0]*mat->grid[1][1] + grid[2][0]*mat->grid[1][2] + grid[3][0]*mat->grid[1][3];
	float m11 = grid[0][1]*mat->grid[1][0] + grid[1][1]*mat->grid[1][1] + grid[2][1]*mat->grid[1][2] + grid[3][1]*mat->grid[1][3];
	float m12 = grid[0][2]*mat->grid[1][0] + grid[1][2]*mat->grid[1][1] + grid[2][2]*mat->grid[1][2] + grid[3][2]*mat->grid[1][3];
	float m20 = grid[0][0]*mat->grid[2][0] + grid[1][0]*mat->grid[2][1] + grid[2][0]*mat->grid[2][2] + grid[3][0]*mat->grid[2][3];
	float m21 = grid[0][1]*mat->grid[2][0] + grid[1][1]*mat->grid[2][1] + grid[2][1]*mat->grid[2][2] + grid[3][1]*mat->grid[2][3];
	float m22 = grid[0][2]*mat->grid[2][0] + grid[1][2]*mat->grid[2][1] + grid[2][2]*mat->grid[2][2] + grid[3][2]*mat->grid[2][3];
	float m30 = grid[0][0]*mat->grid[3][0] + grid[1][0]*mat->grid[3][1] + grid[2][0]*mat->grid[3][2] + grid[3][0]*mat->grid[3][3];
	float m31 = grid[0][1]*mat->grid[3][0] + grid[1][1]*mat->grid[3][1] + grid[2][1]*mat->grid[3][2] + grid[3][1]*mat->grid[3][3];
	float m32 = grid[0][2]*mat->grid[3][0] + grid[1][2]*mat->grid[3][1] + grid[2][2]*mat->grid[3][2] + grid[3][2]*mat->grid[3][3];

	grid[0][0]=m00;
	grid[0][1]=m01;
	grid[0][2]=m02;
	grid[1][0]=m10;
	grid[1][1]=m11;
	grid[1][2]=m12;
	grid[2][0]=m20;
	grid[2][1]=m21;
	grid[2][2]=m22;
	grid[3][0]=m30;
	grid[3][1]=m31;
	grid[3][2]=m32;
}

I’m sure it must be compatible with OpenGL because for example when I draw a mesh I use “glMultMatrixf(*GetMatrix()->grid);” to modify the modelview matrix and it works just fine.

I cleaned up the code a bit for the projection matrix calculation. Now I have these two ways that I can switch between to test:


/* 1
TMatrix projm =CurrentCamera->GetMatrix()->Inverse();
projm->Multiply(light->GetMatrix());
projm->Multiply(light->GetProjMatrix());
					*/
TMatrix projm=light->GetProjMatrix();
projm->Multiply(light->GetMatrix());
projm->Multiply(CurrentCamera->GetMatrix()->Inverse());

For the light projection matrix (GetProjMatrix()) I use this:


glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluPerspective(45,1,1,10000);
TMatrix proj = new CMatrix();
glGetFloatv(GL_PROJECTION_MATRIX, *proj);
glPopMatrix();
return proj;

This should be pretty straightforward, so no need for explanations.

Now what I really cleaned up is the shader code. It makes sense to me, but apparently it’s not good enough.
I’ll only post the way I get the depth from the shadow map now:


float DepthToZPosition(in float depth) 
{
	return near / (far - depth * (far - near)) * far;
}



vec2 texCoord = gl_FragCoord.xy/bufferSize;
float depth = texture2D(depthTex, texCoord).x;
vec3 screencoord; 
screencoord = vec3(((gl_FragCoord.x/bufferSize.x)-0.5) * 2.0,((-gl_FragCoord.y/bufferSize.y)+0.5) * 2.0 / (bufferSize.x/bufferSize.y),-DepthToZPosition( depth ));
screencoord.x *= -screencoord.z; 
screencoord.y *= screencoord.z;

float shadowdepth = shadow2DProj(ShadowMap,ProjectionMatrix*vec4(screencoord,1)).x;

if (shadowdepth< texture2D(depthTex, texCoord).x)
{
	shadowcolor=1.0;
}
else
{
	shadowcolor=0.0;
}

Shadowcolor is later multiplied with the output color.

Now when I run it… if I’m far away I can see the ball rendered well. If I get closer, it gets a “shadowed” circle inside that gets larger until it fills the whole sphere.
Also there seems to be a square-like border for the shadow (I added that thing with the border).
Here is a pic:
http://img695.imageshack.us/img695/2771/shadowmap.jpg

I know it’s not as close as it was before but at least this is code that I understand :).

I feel I’m very close to solving it, but I’m not sure about that whole screen coord thing. Other than that, I think my code is OK, right?

I managed to do it :slight_smile:
I never actually got to use the light projection and it still works just fine.
The last problem I have is with this:

(Note: this paragraph is a nuance to leave until later once you fix your coordinate transforms:) However, note that if you have occluders that are straddling the edge of your shadow map and your lighting is such that you can see those edge shadows cast by the edge texels of the shadow map (e.g. low SPOT_EXPONENT, so you can see spotlight diffuse/specular at the edge of your light cone/shadow map, and you tight-fit your shadow map to the light cone), then CLAMP_TO_EDGE may not be what you need because the edge pixels of the shadow map (which we’ll assume could contain casters) will effectively be stretched outside the bounds of the shadow map in shadow map space X & Y. So in practice you need to “clip-out” out-of-frustum coordinates (outside [0,1]) in your shader and treat them as unshadowed, OR bloat your shadow map frustum so that it’s a little bigger than the light cone, OR (simpler) just use CLAMP_TO_BORDER and set the border color to 1,1,1,1 (only the first component is relevant for depth textures). This should ensure nothing outside the shadow map in light-space XY is shadowed (1.0 being the light-space far-clip value for normalized textures like depth textures).

The solution of setting the border to (1,1,1,1) seems a bit hardcoded, so I’m going to try to find a nicer alternative.
Thanks a lot man. After I figure everything out I’ll post the final code so you can see how I did it.

Hey, that’s great. Glad you licked it.
