Cascaded shadow mapping and split values

Reposting, as the last topic is bugged and can’t be edited.

I’m having issues where shadows vanish at certain angles, and I want to make sure I’m using the correct values to determine the split index.


CameraFrustrum CalculateCameraFrustrum(const float minDist, const float maxDist, const Vec3& cameraPosition, const Vec3& cameraDirection, Vec4& splitDistance, const Mat4& camView)
    {
        // the eight corners of the NDC cube, transformed back by the inverse view-projection below
        CameraFrustrum ret = { Vec4(1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, 1.0f, -1.0f, 1.0f),
                               Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), };

        // note: recent GLM versions expect the field of view in radians, not degrees
        const Mat4 perspectiveMatrix = glm::perspective(70.0f, 1920.0f / 1080.0f, minDist, maxDist);
        const Mat4 invMVP = glm::inverse(perspectiveMatrix * camView);

        for (Vec4& v : ret)
        {
            v = invMVP * v;
            v /= v.w;
        }

        splitDistance  = ret[4];
        splitDistance += ret[5];
        splitDistance += ret[6];
        splitDistance += ret[7];
        splitDistance /= 4;

        return ret;
    }

The splitDistance.z is what I use as each split’s max distance. It is multiplied by the main camera’s view matrix before being sent to the shader (since the index determination is done in view space). Is this correct?

I don’t understand what you’re doing here and why. It doesn’t mesh with what you say you’re trying to do.

Why aren’t you just computing your split distances in eye-space, for instance:


      double d_uniform = mix( near, far, percent );  // Linear
      double d_log     = near * pow( ( far / near ), percent );  // Log
      d                = mix( d_uniform, d_log, blend_f ); // Practical split scheme
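The three lines above can be expanded into a small self-contained sketch of the practical split scheme (the function name `ComputeSplitDistances` and the parameter `blendF` are mine, not from any particular sample):

```cpp
#include <cmath>
#include <vector>

// Practical split scheme: blend a uniform and a logarithmic distribution
// of split far distances. blendF = 0 gives pure uniform splits,
// blendF = 1 gives pure logarithmic splits.
std::vector<double> ComputeSplitDistances(double zNear, double zFar,
                                          int numSplits, double blendF)
{
    std::vector<double> splits;
    for (int i = 1; i <= numSplits; ++i)
    {
        const double percent  = static_cast<double>(i) / numSplits;
        const double dUniform = zNear + (zFar - zNear) * percent;         // linear
        const double dLog     = zNear * std::pow(zFar / zNear, percent);  // log
        splits.push_back(dUniform * (1.0 - blendF) + dLog * blendF);      // blend
    }
    return splits;
}
```

Both distributions end exactly at the far plane, so the last split distance always equals `zFar` regardless of the blend factor.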

I was talking about the split distances used for the comparison in the shader; I already have the computation of split distances working.

For example, here’s the lighting fragment shader I use:

#version 420

const float DEPTH_BIAS = 0.00005;

layout(std140) uniform UnifDirLight
{
    mat4 mVPMatrix[4];
    mat4 mCamViewMatrix;
    vec4 mSplitDistance;
    vec4 mLightColor;
    vec4 mLightDir;
    vec4 mGamma;
    vec2 mScreenSize;
} UnifDirLightPass;

layout (binding = 2) uniform sampler2D unifPositionTexture;
layout (binding = 3) uniform sampler2D unifNormalTexture;
layout (binding = 4) uniform sampler2D unifDiffuseTexture;
layout (binding = 6) uniform sampler2DArrayShadow unifShadowTexture;

out vec4 fragColor;

void main()
{
    vec2 texcoord = gl_FragCoord.xy / UnifDirLightPass.mScreenSize;

    vec3 worldPos = texture(unifPositionTexture, texcoord).xyz;
    vec3 normal   = normalize(texture(unifNormalTexture, texcoord).xyz);
    vec3 diffuse  = texture(unifDiffuseTexture, texcoord).xyz;

    vec4 camPos = UnifDirLightPass.mCamViewMatrix * vec4(worldPos, 1.0);

    int index = 3;
    if (camPos.z > UnifDirLightPass.mSplitDistance.x)
        index = 0;
    else if (camPos.z > UnifDirLightPass.mSplitDistance.y)
        index = 1;
    else if (camPos.z > UnifDirLightPass.mSplitDistance.z)
        index = 2;

    vec4 projCoords = UnifDirLightPass.mVPMatrix[index] * vec4(worldPos, 1.0);
    projCoords.w    = projCoords.z - DEPTH_BIAS;
    projCoords.z    = float(index);
    float visibility = texture(unifShadowTexture, projCoords);

    float angleNormal = clamp(dot(normal, UnifDirLightPass.mLightDir.xyz), 0, 1);

    fragColor = vec4(diffuse, 1.0) * visibility * angleNormal * UnifDirLightPass.mLightColor;
}

So my idea is that splitDistance.z for each split’s frustum (from the first post) is what I use for mSplitDistance.x/y/z (depending on the split). Are these the correct values to compare against?

Ok, I’ll take your word for that. The compute logic bothers me.

I’m having issues where shadows vanish at certain angles, and I want to make sure I’m using the correct values to determine the split index.


int index = 3;
if (camPos.z > UnifDirLightPass.mSplitDistance.x)
    index = 0;
else if (camPos.z > UnifDirLightPass.mSplitDistance.y)
    index = 1;
else if (camPos.z > UnifDirLightPass.mSplitDistance.z)
    index = 2;

Assuming the distances are correct, this looks reasonable. Have you printed the values to ensure that they are correct?

I know it’s confusing; I’m using the term “split distances” in two places. The first one (which I assumed you were referring to in your first post and code) is this:

void CalculateShadowmapCascades(std::array<float, gNumShadowmapCascades>& nearDistArr, std::array<float, gNumShadowmapCascades>& farDistArr, const float nearDist, const float farDist)
    {
        const float splitWeight = 0.75f;
        const float ratio = nearDist / farDist;

        nearDistArr[0] = nearDist;
        for (uint8_t index = 1; index < gNumShadowmapCascades; index++)
        {
            const float si = index / (float)gNumShadowmapCascades;

            nearDistArr[index] = splitWeight * (nearDist * powf(ratio, si)) + (1 - splitWeight) * (nearDist + (farDist - nearDist) * si);
            farDistArr[index - 1] = nearDistArr[index] * 1.005f;    // slight overlap between cascades
        }
        farDistArr[gNumShadowmapCascades - 1] = farDist;
    }

which I borrowed from the NVIDIA cascaded shadow map sample, and it works fine; for example, for near_z = 0.1f and far_z = 100.0f I get the splits {6, 12, 18, 100}, which looks reasonable enough to me.

The second one is the uniform I send to the shader I posted, since I can’t just use the unmodified values from CalculateShadowmapCascades() directly. I reason that for each split, while constructing that split’s frustum, I sample the average Z of the far corners of the frustum like this:

splitDistance  = ret[4];
splitDistance += ret[5];
splitDistance += ret[6];
splitDistance += ret[7];
splitDistance /= 4;
splitDistance = lighting.mCameraViewMatrix * splitDistance;
splitDistances[cascadeIndex] = splitDistance.z;       // splitDistances is sent to the shader

And that is what I’m not sure I’m doing correctly. The NVIDIA sample uses a different comparison for the splits, like this:

far_bound[i] = 0.5f*(-f[i].fard*cam_proj[10]+cam_proj[14])/f[i].fard + 0.5f;

But since I’m doing deferred shading, I figured it would be much easier and less code to do the comparison in view space?

Why not? I do, for forward and deferred shading.

I reason that for each split, while constructing that split’s frustum, I sample the average Z of the far corners of the frustum like this:

splitDistance  = ret[4];
splitDistance += ret[5];
splitDistance += ret[6];
splitDistance += ret[7];
splitDistance /= 4;
splitDistance = lighting.mCameraViewMatrix * splitDistance;
splitDistances[cascadeIndex] = splitDistance.z;       // splitDistances is sent to the shader

Yeah, this is what’s confusing to me.

First, averaging a bunch of 4D (3D, actually) points doesn’t make sense to me. Second, the points you’re averaging are just the corners of the far clip in NDC. Once you average that, you get a point, not split distances.

Third, you want the eye-space Z split distances (which you already computed in CalculateShadowmapCascades, but whatever…), so you really just want to pass through an inverse PROJECTION transform to get that. If you include the VIEWING transform, as you are here, you’re going to end up in world space, and that’s generally not what you want to work with in your shader. Eye space is preferred for a number of reasons. (You call it an MVP here too, which it’s not.)

Aren’t the resulting corners in world space, after multiplying by the inverse viewProj matrix and dividing by W?

OK, since eye space originates at the origin and looks down the -Z axis, would {-6, -12, -18, -100} work to use directly in the shader?

I store my positions in world space (for now; I will probably change it later, but at the moment I’m using world space). Doesn’t multiplying the world-space position by the camera’s view matrix put the position in eye space, as below:

vec4 camPos = UnifDirLightPass.mCamViewMatrix * vec4(worldPos, 1.0);

If so, why would I need the inverse projection matrix (of the main camera)?

Should be. That pertains to CameraFrustrum ret. What I’m talking about is the splitDistance calc below that.

OK, since eye space originates at the origin and looks down the -Z axis, would {-6, -12, -18, -100} work to use directly in the shader?

Sure thing. Just make sure you compare an eye-space frag position.
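To make that concrete, the shader’s cascade selection against negative eye-space distances can be sketched CPU-side like this (`SelectSplitIndex` is a hypothetical helper, not from either poster’s code):

```cpp
// Pick the cascade index for a fragment's eye-space Z (negative, since the
// camera looks down -Z) against the eye-space split far distances, which are
// also negative, e.g. {-6, -12, -18, -100} from the discussion above.
int SelectSplitIndex(float eyeZ, const float splitFar[4])
{
    int index = 3;                          // fall through to the last cascade
    if (eyeZ > splitFar[0])      index = 0; // nearest cascade
    else if (eyeZ > splitFar[1]) index = 1;
    else if (eyeZ > splitFar[2]) index = 2;
    return index;
}
```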

Also (related issue you didn’t ask about), be sure that all fragments in the quad you’re rasterizing chose the same split. If you don’t, you’ll get huge texcoord derivatives, which’ll mess up aniso filtering if you have that enabled (or just disable it).

I store my positions in world space (for now; I will probably change it later, but at the moment I’m using world space). Doesn’t multiplying the world-space position by the camera’s view matrix put the position in eye space, as below:

vec4 camPos = UnifDirLightPass.mCamViewMatrix * vec4(worldPos, 1.0);

Sure.

If so, why would I need the inverse projection (of the main camera?) matrix?

You typically wouldn’t. I was surprised by seeing this in your C++ code to rederive the splits. I’d just keep them in eye-space and pass them in. You’re doing deferred so you’re probably going to be recomputing the fragment eye-space position in the shader for lighting anyway, if you’re not storing it in the G-buffer that is.

You are right; it works just fine using the values from CalculateShadowmapCascades() without any other modification.

So I guess the original problem I had didn’t have to do with that.

I’ve got cascaded shadow mapping working at most angles/distances… for example, see below.

Here are the corresponding split debug colors (r->g->b->white):

But, just looking a little further down, the shadows get clipped:

and the debug view:

It’s really strange, and at larger distances it’s no problem at all.

http://s22.postimg.org/um3tmjwn3/image.png

Any ideas what could be wrong and causing the shadow clipping at certain distances and angles?

Here’s how I go about creating the light’s crop-view-projection matrix, by the way:

    Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir)
    {
        Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, 1.0f, 0.0f));

        Vec4 transf = lightViewMatrix * cameraFrustrum[0];
        float maxZ = transf.z, minZ = transf.z;
        float maxX = transf.x, minX = transf.x;
        float maxY = transf.y, minY = transf.y;
        for (uint32_t i = 1; i < 8; i++)
        {
            transf = lightViewMatrix * cameraFrustrum[i];

            if (transf.z > maxZ) maxZ = transf.z;
            if (transf.z < minZ) minZ = transf.z;
            if (transf.x > maxX) maxX = transf.x;
            if (transf.x < minX) minX = transf.x;
            if (transf.y > maxY) maxY = transf.y;
            if (transf.y < minY) minY = transf.y;
        }

        Mat4 viewMatrix(lightViewMatrix);
        viewMatrix[3][0] = -(minX + maxX) * 0.5f;
        viewMatrix[3][1] = -(minY + maxY) * 0.5f;
        viewMatrix[3][2] = -(minZ + maxZ) * 0.5f;
        viewMatrix[0][3] = 0.0f;
        viewMatrix[1][3] = 0.0f;
        viewMatrix[2][3] = 0.0f;
        viewMatrix[3][3] = 1.0f;

        Vec3 halfExtents((maxX - minX) * 0.5f, (maxY - minY) * 0.5f, (maxZ - minZ) * 0.5f);

        return glm::ortho(-halfExtents.x, halfExtents.x, -halfExtents.y, halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix;
    }

Ok, looks like the artifact is probably on a split boundary.

First, are you doing any culling to decide what objects to cast into what split shadow maps? If so, turn that off for now and render all scene objects into all splits. Does the problem go away?

In general, think about tests you can run to whack off big parts of the problem space and help you narrow down where the bug must be.

I’m not doing any object culling, but I’m not adjusting the frustum to fit objects within it either. I tried simply multiplying the frustum size by two and it seemed to help a bit, but the resolution gets screwed up.

Is it necessary to enlarge the frustum to fit the objects within it, and is there a good example/algorithm for doing so?

[QUOTE=TheKaiser;1259040]I’m not doing any object culling, but I’m not adjusting the frustum to fit objects within it either. I tried simply multiplying the frustum size by two and it seemed to help a bit, but the resolution gets screwed up.

Is it necessary to enlarge the frustum to fit the objects within it, and is there a good example/algorithm for doing so?[/QUOTE]

Just to be clear, we’re talking about the light-space frusta used to render your shadow maps here. Let’s take the different sides of those light-space frusta in turn.

First, the “sides” of each light space split frustum (left/right/bottom/top) generally should encompass the view-space split’s bounds (in light space). Same thing with the “far clip plane” of each light space split frustum.

The “near clip plane” of the light-space split frustum is a bit different, though. You can start with it encompassing the view-space split’s bounds (in light space) like the other clip planes, but in general this near clip plane needs to be pushed back toward the light far enough that it includes all the potential casters between the light source and the view frustum split.

The reason for these light-space frustum bounds is so that the resulting light-space frustum encompasses all of the objects that could potentially cast a shadow onto a portion of an object within that view frustum split.

These bounds are robust, but they’re just a starting point. You may decide to adjust them to satisfy some other goals in parallel (e.g. tighten them up based on your knowledge of where the casters and receivers are to maximize shadow map resolution/precision, and/or quantize them in some way for instance to eliminate edge flickering for some objects). But the main thing is that your light-space frusta need to encompass all casters that could potentially cast a shadow into a portion of a receiver object in that view frustum split.
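As a sketch of the bounds described above (hypothetical names; `casterMargin` would come from knowledge of the scene, e.g. the scene bounds projected onto the light direction):

```cpp
#include <algorithm>
#include <array>

struct LightBounds { float minX, maxX, minY, maxY, minZ, maxZ; };

// Fit an axis-aligned box around the split's eight corners in light space,
// then pull only the "near" side (maxZ, the side facing the light when the
// light looks down -Z) back by casterMargin so that off-screen casters
// between the light and the split still land in the shadow map.
LightBounds FitSplitBounds(const std::array<std::array<float, 3>, 8>& corners,
                           float casterMargin)
{
    LightBounds b{ corners[0][0], corners[0][0],
                   corners[0][1], corners[0][1],
                   corners[0][2], corners[0][2] };
    for (const auto& c : corners)
    {
        b.minX = std::min(b.minX, c[0]); b.maxX = std::max(b.maxX, c[0]);
        b.minY = std::min(b.minY, c[1]); b.maxY = std::max(b.maxY, c[1]);
        b.minZ = std::min(b.minZ, c[2]); b.maxZ = std::max(b.maxZ, c[2]);
    }
    b.maxZ += casterMargin; // only the near plane moves; the others stay tight
    return b;
}
```

The sides and far plane stay tight for resolution; only the near plane is extended, since that is the only direction from which unseen casters can throw shadows into the split.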