Yattabyte

10-07-2015, 01:17 PM

Hello there. I've been trying to switch over from standard shadow mapping to cascaded shadow mapping, and although I've made pretty good progress, I've reached a bit of a snag.

I've been scouring the internet for over a week now for resources on cascaded shadow maps, so my current implementation is somewhat jury-rigged, but it at least produces shadows, so I'm partway there.

My issue:

I think it's a math problem: I can't quite get the projection right for rendering into a given shadow split / cascade level.

Rotating my camera doesn't properly shift where the light's cascades are focused, and further, when the camera rotates, the entire shadow map tends to squish towards the center.

See i.imgur.com/JRajdIb.gif

What does happen:

I'm fairly certain that my calculation of the split distances is correct. I've debugged it several times and triple-checked the math by hand, getting the expected results each time.

Also, if I move an object between the split regions, the shadow does decay in quality, as expected. I can see this firsthand by colorizing the regions that lie within any given cascade of the shadow map.

My scene is simple. It contains only a few basic objects and a single directional light to cast shadows, so there is nothing else to really interfere with my results.

Lastly, I do have my shadow textures set up in a texture array, and sampling from any given one works correctly.

When I tell my directional light to create a shadow, I perform the following (worry not about inefficiencies at this point):

    float lambda = 0.5f;     // Lambda value for the split distance calc
    float n = 1.0f;          // Near plane
    float f = 100000.0f;     // Far plane
    float m = 6.0f;          // 6 split intervals
    float Ci[7];             // Split distances stored here
    Ci[0] = n;               // Base split = near plane

    // 6 levels of shadows
    for (int x = 0; x < 6; x++)
    {
        // Calculate the split distance
        float cuni = n + ((f - n) * ((x + 1) / m));
        float clog = n * powf(f / n, (x + 1) / m);
        float c = lambda * cuni + (1 - lambda) * clog;
        Ci[x + 1] = c;

        QMatrix4x4 cameraModelMatrix = camera->getModelMatrix();
        float frustumHeight = 2.0f * Ci[x + 1] * tanf((90.0f * 0.5f * M_PI) / 180.0f);
        float frustumWidth  = frustumHeight * (float(camerasize.width()) / float(camerasize.height()));

        // Corners of the frustum (near face at Ci[0], far face at Ci[x + 1])
        QVector3D corners[8];
        corners[0] = QVector3D(-frustumWidth / 2, -frustumHeight / 2, Ci[0]);
        corners[1] = QVector3D( frustumWidth / 2, -frustumHeight / 2, Ci[0]);
        corners[2] = QVector3D( frustumWidth / 2,  frustumHeight / 2, Ci[0]);
        corners[3] = QVector3D(-frustumWidth / 2,  frustumHeight / 2, Ci[0]);
        corners[4] = QVector3D(-frustumWidth / 2, -frustumHeight / 2, Ci[x + 1]);
        corners[5] = QVector3D( frustumWidth / 2, -frustumHeight / 2, Ci[x + 1]);
        corners[6] = QVector3D( frustumWidth / 2,  frustumHeight / 2, Ci[x + 1]);
        corners[7] = QVector3D(-frustumWidth / 2,  frustumHeight / 2, Ci[x + 1]);

        // Transform the corner vectors by the camera's model matrix
        for (int z = 0; z < 8; z++)
            corners[z] = cameraModelMatrix * corners[z];

        // Calculate the bounding box of the transformed corners
        QVector3D min( INFINITY,  INFINITY,  INFINITY);
        QVector3D max(-INFINITY, -INFINITY, -INFINITY);
        for (int z = 0; z < 8; z++)
        {
            if (min.x() > corners[z].x()) min.setX(corners[z].x());
            if (min.y() > corners[z].y()) min.setY(corners[z].y());
            if (min.z() > corners[z].z()) min.setZ(corners[z].z());
            if (max.x() < corners[z].x()) max.setX(corners[z].x());
            if (max.y() < corners[z].y()) max.setY(corners[z].y());
            if (max.z() < corners[z].z()) max.setZ(corners[z].z());
        }

        // Create the crop matrix (QMatrix4x4 takes values in row-major order,
        // so the offsets go in the fourth column)
        float scaleX  = 2.0f / (max.x() - min.x());
        float scaleY  = 2.0f / (max.y() - min.y());
        float scaleZ  = 1.0f / (max.z() - min.z());
        float offsetX = -0.5f * (max.x() + min.x()) * scaleX;
        float offsetY = -0.5f * (max.y() + min.y()) * scaleY;
        float offsetZ = -min.z() * scaleZ;

        QMatrix4x4 crop(scaleX, 0.0f,   0.0f,   offsetX,
                        0.0f,   scaleY, 0.0f,   offsetY,
                        0.0f,   0.0f,   scaleZ, offsetZ,
                        0.0f,   0.0f,   0.0f,   1.0f);

        QMatrix4x4 projection;
        projection.ortho(-1, 1, -1, 1, -1, 1);
        crop = projection * crop;

        /* ...SEND VALUES TO SHADER... */
        /* ...RENDER SCENE INTO THIS SHADOW TEXTURE... */
    }

In my fragment shader, to determine my light space position of a fragment, I do:

vec4 LightSpacePos = LightCrop[i] * LightViewMatrix * WorldPosition;

The rest really isn't needed, since the shadows themselves work, as does displaying the regions that fall under each cascade texture.
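In case it matters, picking the cascade index i for that lookup is just a linear scan over the split distances, roughly like this (simplified standalone sketch, names are my own):

```cpp
// Return the index of the first cascade whose far split distance
// contains the given view-space depth; clamp to the last cascade.
int pickCascade(float viewDepth, const float* Ci, int numCascades)
{
    for (int i = 0; i < numCascades; ++i)
        if (viewDepth <= Ci[i + 1])
            return i;
    return numCascades - 1;
}
```

This part behaves correctly: colorizing by the returned index shows the cascade boundaries exactly where the split distances say they should be.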

Although I understand the underlying basics of how this should work, the specific implementation is what has me stumped. I've seen the NVIDIA slides on it, the GPU Gems article, and over a dozen other websites.

Can anyone explain to me how I should be correctly determining the minimum and maximum values required to properly create the crop matrix?
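My reading of those sources is that the min/max should be taken after transforming the frustum corners into the light's space, something like the sketch below (plain C++ with my own placeholder names, no Qt, and the basis construction is my guess at what the articles intend):

```cpp
#include <math.h>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 normalize(Vec3 v)
{
    float len = sqrtf(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Express the 8 world-space frustum corners in an orthonormal basis
// built from the light's direction, then take the min/max there.
void lightSpaceBounds(const Vec3 corners[8], Vec3 lightDir,
                      Vec3* outMin, Vec3* outMax)
{
    Vec3 fwd   = normalize(lightDir);
    Vec3 right = normalize(cross({ 0.0f, 1.0f, 0.0f }, fwd));
    Vec3 up    = cross(fwd, right);

    *outMin = {  INFINITY,  INFINITY,  INFINITY };
    *outMax = { -INFINITY, -INFINITY, -INFINITY };
    for (int i = 0; i < 8; ++i)
    {
        // Coordinates of this corner in the light's basis
        Vec3 p = { dot(corners[i], right),
                   dot(corners[i], up),
                   dot(corners[i], fwd) };
        outMin->x = fminf(outMin->x, p.x); outMax->x = fmaxf(outMax->x, p.x);
        outMin->y = fminf(outMin->y, p.y); outMax->y = fmaxf(outMax->y, p.y);
        outMin->z = fminf(outMin->z, p.z); outMax->z = fmaxf(outMax->z, p.z);
    }
}
```

If that's the right idea, then I can't tell where my camera-space version above diverges from it.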

I will gladly provide any further information if anyone wants to help me.
