cascaded shadow map depth bias with deferred rendering



zippoL
06-15-2016, 09:32 PM
Hey!

I'm using deferred rendering and my G-buffer contains only the normal map normals, i.e. not the geometry normals.
According to this tutorial: www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/
I can calculate the depth bias with:

float bias = 0.005 * tan(acos(cosTheta)); // cosTheta = clamp(dot(n, l), 0.0, 1.0)
bias = clamp(bias, 0.0, 0.01);
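
Side note: since tan(acos(x)) equals sqrt(1.0 - x*x) / x for x in (0, 1], the same bias can be computed without the trig calls. A minimal sketch, with n and l being the unit normal and light direction from the snippet above:

float cosTheta = clamp(dot(n, l), 0.0, 1.0);
float sinTheta = sqrt(1.0 - cosTheta * cosTheta);    // sin(acos(x)) = sqrt(1 - x^2)
float bias = 0.005 * sinTheta / max(cosTheta, 1e-4); // tan = sin / cos, guarded at grazing angles
bias = clamp(bias, 0.0, 0.01);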

Which is a problem in my case, because I don't have the geometry normals stored in my G-buffer.
I found another thread with a similar issue here: www.gamedev.stackexchange.com/questions/66970/slope-scaled-depth-bias-with-normal-maps
If I understand the answer correctly, the solution would be (when rendering the shadow map):



float bias = 0.005 * tan(acos(clamp(dot(geometryNormal, -lightDir), 0.0, 1.0)));
bias = clamp(bias, 0.0, 0.01);
gl_FragDepth = gl_FragCoord.z - bias; // write the biased depth to the shadow map depth texture
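
Purely as a sketch of that idea (uLightDir and vGeometryNormal are made-up names here, and uLightDir is assumed to be a unit vector pointing from the light toward the scene), the whole shadow-pass fragment shader might look like:

#version 330 core

in vec3 vGeometryNormal; // hypothetical varying: interpolated geometry normal
uniform vec3 uLightDir;  // assumed: unit vector from the light toward the scene

void main()
{
    float cosTheta = clamp(dot(normalize(vGeometryNormal), -uLightDir), 0.0, 1.0);
    float bias = clamp(0.005 * tan(acos(cosTheta)), 0.0, 0.01);
    gl_FragDepth = gl_FragCoord.z - bias; // bake the bias into the stored depth
}

One thing to keep in mind: writing gl_FragDepth disables early depth testing for the shadow pass, which is one reason a bias is often applied at lookup time or via glPolygonOffset instead.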

Is this correct? With this method, should I sample the shadow map depth texture like this (see below)?


shadow2DArray(shadowDepthTexture, vec4(projCoords.xyz, lightDepth)).r;
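
For reference, on GLSL 1.30+ the legacy shadow2DArray call would be the plain texture builtin. Assuming projCoords.xy are the shadow-map coordinates, projCoords.z is the cascade layer, and lightDepth is the reference depth being compared:

uniform sampler2DArrayShadow shadowDepthTexture;

float lit = texture(shadowDepthTexture, vec4(projCoords.xy, projCoords.z, lightDepth));
// with GL_COMPARE_REF_TO_TEXTURE and GL_LEQUAL, this returns 1.0 where
// lightDepth <= stored depth and 0.0 otherwise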

This is how I set up the depth-comparison texture parameters:



glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

In this case, do I need to change the current comparison function, i.e. GL_LEQUAL?


If I had the geometry normals available to calculate the depth bias, then the sampling would instead look like this:


shadow2DArray(shadowDepthTexture, vec4(projCoords.xyz, lightDepth - bias)).r;
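
Put together, that lookup-time variant might look like this sketch, assuming the G-buffer did carry a geometry normal gNormal and, again, that uLightDir points from the light toward the scene:

uniform sampler2DArrayShadow shadowDepthTexture;
uniform vec3 uLightDir; // assumed: unit vector from the light toward the scene

float shadowFactor(vec3 projCoords, float lightDepth, vec3 gNormal)
{
    // slope-scaled bias: grows as the surface turns away from the light
    float cosTheta = clamp(dot(normalize(gNormal), -uLightDir), 0.0, 1.0);
    float bias = clamp(0.005 * tan(acos(cosTheta)), 0.0, 0.01);

    // bias the reference depth instead of the stored depth; GL_LEQUAL stays as-is
    return texture(shadowDepthTexture, vec4(projCoords.xy, projCoords.z, lightDepth - bias));
}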