VSM issues

Hey.

I’m asking for help with my implementation of variance shadow mapping. I’m using Cg to compile down to ARB shaders, and almost everything seems to be working fine apart from some fine-tuning that’s still needed, but that’s not the main problem I’m having.

I seem to be getting some kind of depth precision / near-far clipping plane problem with the shadows: the farther I move a light source away from an object, the more its shadow fades out entirely, and when I get too close to a lit surface, the entire light begins to fade out. This is especially a problem with objects that are close to the near clipping plane. I’m storing the shadow map in an RGBA16_ARB texture, and I know that isn’t the problem, because the same thing happened back when I first used RGBA32F_ARB, which I later downgraded from to save memory. Anyway, I’ll just show this via pictures; they should explain the issue.

This is the shadow mapping, behaving up close:
http://filesmelt.com/dl/crossfire0006.jpg

And when I get farther away, you can see that the shadow isn’t visible:
http://filesmelt.com/dl/crossfire0007.jpg

This is the light source closer to a surface; you can see it’s already kinda faint:
http://filesmelt.com/dl/crossfire0008.jpg

And this is how it looks up close:
http://filesmelt.com/dl/crossfire0009.jpg

The artifact you see around the edges means I still need to tweak the max variance, but that isn’t what’s causing the problem. It behaves the same with the Gaussian blur disabled, too. Here’s my Cg source for the lighting shader:

float depthtest( float depthcoord, sampler2D shadowmap, float4 coord )
{
	// Unpack the two moments from the four 16-bit channels
	// (coarse part in xy, fractional remainder scaled by 32 in zw).
	float4 momments = tex2Dproj(shadowmap, coord);
	momments.xy = momments.xy+momments.zw/32;

	// Fully lit if the fragment is at or closer than the mean occluder depth.
	if(depthcoord <= momments.x)
		return 1.0;

	// Chebyshev upper bound: variance = E[x^2] - E[x]^2, clamped to avoid
	// numerical problems when the variance is near zero.
	float variance = momments.y-(momments.x*momments.x);
	variance = max(variance, 0.00005);

	float d = depthcoord-momments.x;
	float p_max = variance/(variance+d*d);
	return p_max;
}

void main(
float4 texcoord : TEXCOORD0,
float3 normal : TEXCOORD2,
float3 position : TEXCOORD3,
uniform sampler2D texture : TEXUNIT0,
uniform sampler2D shadowmap : TEXUNIT1,
uniform float4 origin,
uniform float4 colorrad,
out float4 oColor : COLOR)
{
	// Depth of the fragment as seen from the light, remapped to the 0-1 range.
	float depth = texcoord.z/texcoord.w;
	depth = depth * 0.5 + 0.5;

	float result = depthtest(depth, shadowmap, texcoord);

	// Attenuation: 1 - (squared distance / squared radius), colorrad.w is the radius.
	float rad = colorrad.w*colorrad.w;
	float3 vec = position-origin.xyz;
	float dist = dot(vec, vec);
	float attn = (dist/rad-1)*-1;

	// Lambert term.
	vec = normalize(vec);
	float dotproduct = -dot(vec, normal);
	attn = attn*dotproduct;

	float4 texcol = tex2Dproj(texture, texcoord);
	oColor = texcol*attn*result;
}

And this is how I set up projection:

	GLdouble flsize = 1 * tan((M_PI/360) * m_pCurrentDynLight->cone_size);
	glFrustum( -flsize, flsize, -flsize, flsize, 1, 8196 );

And this is the modelview matrix:

	int bReversed = IsPitchReversed(m_pCurrentDynLight->angles[PITCH]);
	vec3_t vTarget = m_pCurrentDynLight->origin + (m_vCurSpotForward * m_pCurrentDynLight->radius);

	glMatrixMode(GL_MODELVIEW);
	MyLookAt(m_pCurrentDynLight->origin[0], m_pCurrentDynLight->origin[1], m_pCurrentDynLight->origin[2], vTarget[0], vTarget[1], vTarget[2], 0, 0, bReversed ? -1 : 1);

I’ve tried messing around with everything, but I don’t know what to do. Any help is really appreciated.
Thanks.

MagnumOpus.

Hi,
One thing that caught my eye is this statement in the depthtest function:

momments.xy = momments.xy+momments.zw/32;

Could you tell me why you are adding the right-hand term to the moments? To give you an idea, I am pasting the relevant statements from my VSM implementation; note that this is in OpenGL 3.3 core:

vec4 moments = texture(shadowMap, uv.xy);
float E_x2 = moments.y;                  // E[x^2]
float Ex_2 = moments.x*moments.x;        // (E[x])^2
float var = E_x2-Ex_2;                   // variance of the occluder depth
var = max(var, 0.00002);
float mD = dist-moments.x;
float mD_2 = mD*mD;
float p_max = var/(var+mD_2);            // Chebyshev upper bound
diffuse *= max(p_max, (dist<=moments.x)?1.0:0.2);
//other stuff for specular lighting and stuff.
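For reference, both snippets compute the one-sided Chebyshev (Cantelli) bound from the VSM paper. With the two stored moments M1 = E[x] and M2 = E[x^2]:

mu = M1
sigma^2 = M2 - M1^2
P(x >= t) <= p_max(t) = sigma^2 / (sigma^2 + (t - mu)^2)

The bound only holds for t > mu, which is why both versions treat the fragment as fully lit when its depth is at or below the mean.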

Thanks for your response. I use RGBA16_ARB as the internal format, so when I store the moments I have to encode them across the four components, and decode them again when I use them. That can’t be the problem, because as I said it behaves just like it did without the encoding, back when I used a 32-bit floating-point texture. The encoding goes like this; I’m writing it from memory, as I only have it in ASM form at the moment and don’t have the Cg source:

float depth = position.z/position.w;
depth = depth*0.5+0.5;
// First and second moments (depth and depth squared).
float2 momments = float2(depth, depth*depth);
// Pack across the four 16-bit channels: xy holds each moment quantized to
// steps of 1/32, zw holds the remaining fraction scaled up by 32. The
// lighting shader recombines them as xy + zw/32.
float4 output;
output.zw = frac(momments*32);
output.xy = momments-(output.zw/32);
oColor = output;

That’s the rough code.

Hi MagnumOpus,
Oh OK, that makes sense. Other than that it seems fine to me. Maybe it is being caused by something else in your pipeline (for instance the blending state or another setting). Just to check, could you revert to a basic shadow mapping technique and see if you get the same result with that as well?
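To illustrate what I mean by "basic", something like the plain, unfiltered depth comparison below; this is just a Cg sketch with made-up names, not code from either of our projects. If the distance fade shows up with this as well, the problem is in the matrices or render state rather than in the VSM math.

float plainshadowtest( float depthcoord, sampler2D shadowmap, float4 coord )
{
	// Classic single-sample test: the shadow map is assumed to hold the
	// light-space depth in its red channel, and depthcoord/coord are the
	// same values that are fed to depthtest() above.
	float stored = tex2Dproj(shadowmap, coord).r;
	const float bias = 0.0005;   // small offset to avoid shadow acne
	return (depthcoord - bias <= stored) ? 1.0 : 0.0;
}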

I did that, actually quite a few times over the course of this. I’ve been banging my head against this wall for a while and went through numerous revisions back to the old PCF-filtered method, which worked just fine with the same matrix setup. This is the best result I’ve gotten so far, and apart from this depth precision issue it seemed to be working perfectly, so I really have no clue. Almost everything is the same; I just run different shaders when rendering into the shadow map, and another shader when I render my projective lights. Nothing else was changed at all.

Alright, I figured out my issues. I was transforming the depth coordinate to the 0-1 range in the fragment shader even though it had already been transformed, so it was being remapped twice. And my precision issue at distance was caused by using the post-projective depth value for the comparison, which wasn’t a good idea.
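For anyone who finds this thread later, roughly what the change looks like (this is a sketch rather than my exact code, and the lightorigin/lightradius uniform names are made up): store the linear light-to-fragment distance, normalized by the light radius, instead of the post-projective depth, and compare against the same quantity in the lighting pass.

// Shadow-map pass fragment shader (sketch only). Stores the normalized linear
// distance to the light instead of post-projective depth, so precision
// doesn't collapse far from the light.
void main(
	float3 position : TEXCOORD0,      // fragment position, same space as the light origin
	uniform float4 lightorigin,       // made-up name: light position
	uniform float lightradius,        // made-up name: light radius
	out float4 oColor : COLOR)
{
	float dist = length(position - lightorigin.xyz) / lightradius;   // 0..1 inside the radius
	float2 momments = float2(dist, dist*dist);

	// Same 16-bit packing as before.
	float4 enc;
	enc.zw = frac(momments*32);
	enc.xy = momments - (enc.zw/32);
	oColor = enc;
}

// Lighting pass: compute the same normalized distance and hand it to the
// unchanged depthtest() instead of texcoord.z/texcoord.w, e.g.
//     float dist = length(position - origin.xyz) / colorrad.w;
//     float result = depthtest(dist, shadowmap, texcoord);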

Thanks for all the help on this.

Great, I’m glad you eventually got it working.
