Shadow Volumes with Animated Meshes

I’ve coded and successfully rendered shadow volumes with static meshes. I recently implemented my own bone-based model format where each vertex is attached to at most one bone. Each frame I transform the vertex positions and normals in a vertex program using bone matrices that I pass in. My question is: how would I find which triangles face my light source for these animated meshes, given that a triangle can have its 3 verts transformed by 3 different bone matrices? I also have the original triangle normals at my disposal, in case that matters. Assume this needs to be done on the CPU.

The silhouette edge information is only calculated once; when animating you only modify the vertices, right? Since the indices stay the same, you shouldn’t really have to do anything… unless you use a vertex program or something to do the animation, of course.

I already have the edges precomputed. I mean that when I decide WHICH edges to extrude, I check to see which triangles face the light source (light dot triangle normal). My problem is that when I modify the vertex positions in the vertex program, the triangle normal changes.

Originally posted by AMahajan:
I already have the edges precomputed. I mean that when I decide WHICH edges to extrude, I check to see which triangles face the light source (light dot triangle normal). My problem is that when I modify the vertex positions in the vertex program, the triangle normal changes.

You need to rotate the normals using the bone matrix before doing your N.L test.

Originally posted by chrisATI:

You need to rotate the normals using the bone matrix before doing your N.L test.

I don’t think that is entirely correct. If you transform a vertex by a matrix, then you need to transform the normal by the inverse transpose of that matrix.

If you only have the 3 vertices of the triangle once the transform is complete, then you need to recompute the normal of the triangle.

This can be accomplished by doing a cross-product of two edges of the triangle. Example:

[i]
// For a triangle with vertices A, B, C:
Vec3 v = CrossProduct(B - A, C - A);

// Make sure to normalize it!
Vec3 normal = Normalize(v);
[/i]

If your triangle is wound clockwise as seen from the front, this should produce the desired normal. If your triangle is wound clockwise as seen from the back, then your normal will be reversed. You can simply negate the components of the new vector to flip it, or swap the input vectors to your cross product.

Also, note that to properly test whether the light is in front or behind, you need to make a plane equation from the triangle, which is a normal AND a distance. The distance can be found by taking the triangle’s new normal and doing a dot product with one of its vertices.
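For illustration, here is a minimal sketch of that plane construction and the front/back test, assuming hypothetical Vec3/Dot/Cross/Normalize helpers (none of these names come from this thread):

[i]
// A sketch, not the poster's exact code: Vec3, Dot, Cross and Normalize
// are assumed helpers. Plane convention used here: Dot(n, x) + d == 0.
struct Plane { Vec3 n; float d; };

Plane TrianglePlane(const Vec3& A, const Vec3& B, const Vec3& C)
{
    Plane p;
    p.n = Normalize(Cross(B - A, C - A));
    p.d = -Dot(p.n, A);  // the distance term, from any vertex of the triangle
    return p;
}

// The triangle faces a point light if the light is on the plane's front side.
bool FacesLight(const Plane& p, const Vec3& lightPos)
{
    return Dot(p.n, lightPos) + p.d > 0.0f;
}
[/i]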

Hope that helps.

An inverse transpose per mesh and 3 dot products per vertex should be cheaper than a cross-product and normalize for each vertex.
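In other words (a sketch with assumed Mat3/Vec3 types and helpers, all names hypothetical): precompute the inverse transpose once, then each normal costs only three dot products:

[i]
// Sketch: InverseTranspose, Dot and the Mat3/Vec3 types are assumed helpers.
Mat3 normalMatrix = InverseTranspose(boneRotationScale);  // once per mesh

// Per vertex: one dot product per matrix row.
Vec3 TransformNormal(const Mat3& m, const Vec3& n)
{
    return Vec3(Dot(m.row0, n), Dot(m.row1, n), Dot(m.row2, n));
}
[/i]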


Originally posted by PK:
An inverse transpose per mesh and 3 dot products per vertex should be cheaper than a cross-product and normalize for each vertex.

Yes, it would be cheaper if you need per-vertex information. However, if you re-read my post more carefully you will note that we are generating a per-triangle plane, so we can test front/back conditions against a light source.

Performing a per-vertex light test would be useless for shadow-mesh edge determination.

PK,

For transforming a normal by a general transform, you need the inverse transpose.

However, if you use no non-uniform scale, and don’t apply the translation part of the matrix to the normal, then you can run the normal through the regular matrix. That’s why the suggestion was to ROTATE the normal by the transform matrix for the bone.

That’s why you sometimes see vertices specified with a w of 1 (at that point meaning “full translation”) and normals with a w of 0 (meaning “no translation”) – that way, the math just magically comes out right.

Again, assuming no non-uniform scale.
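A minimal illustration of the w trick, assuming Mat4/Vec3/Vec4 helper types and a matrix * column-vector convention (again, no non-uniform scale):

[i]
// Sketch with assumed Mat4/Vec3/Vec4 types; matrix * column-vector order.
Vec4 TransformPoint(const Mat4& m, const Vec3& p)
{
    return m * Vec4(p.x, p.y, p.z, 1.0f);  // w = 1: full translation applies
}

Vec4 TransformDirection(const Mat4& m, const Vec3& n)
{
    return m * Vec4(n.x, n.y, n.z, 0.0f);  // w = 0: translation drops out
}
[/i]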

If you use non-uniform scale, you need one inverse transpose per bone matrix, and your cache will run a little warmer because you might have two matrices where previously you had one.

Last, I agree: if you want to extrude 100% correctly, you end up having to re-derive the face normal on the CPU.

Thanks for all of the suggestions, guys. I’m already transforming all of the vertex normals in the vertex program. Calculating the triangle/plane normal and distance is gonna be much harder than I thought. I guess I might as well transform the vertices/normals on the CPU and then generate the plane from there if I want accurate shadowing.
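That CPU path could look roughly like this (a sketch only; one bone per vertex as in the original post, with the TrianglePlane sketch from above and an assumed TransformPoint3 helper):

[i]
// Rough sketch of the CPU fallback: skin positions, then rebuild each plane.
// TransformPoint3 (4x4 matrix * point with w = 1, returning xyz) is assumed.
void SkinAndBuildPlanes(const Vec3* bindPos, const int* boneIndex,
                        const Mat4* boneMatrix, int vertCount,
                        const unsigned* indices, int triCount,
                        Vec3* skinnedPos, Plane* triPlane)
{
    for (int v = 0; v < vertCount; ++v)
        skinnedPos[v] = TransformPoint3(boneMatrix[boneIndex[v]], bindPos[v]);

    for (int t = 0; t < triCount; ++t)
        triPlane[t] = TrianglePlane(skinnedPos[indices[3*t + 0]],
                                    skinnedPos[indices[3*t + 1]],
                                    skinnedPos[indices[3*t + 2]]);
}
[/i]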

Originally posted by jwatte:

That’s why you sometimes see vertices specified with a w of 1 (at that point meaning “full translation”) and normals with a w of 0 (meaning “no translation”) – that way, the math just magically comes out right.

Woah! Just worked out the math. Math is cool!!!

So what this actually means is that vertex shaders are pretty useless for skinning with stencil shadows…

Since you have to calculate the normals to detect the edges that form the shadow volume, and you need the transformed vertex positions to construct the actual shadow volume…

or?

regards

/hObbE

You can extrude away-facing vertices, which will give you a little bit of volume shrink (depending on tessellation) but may look OK.

You can insert degenerate quads between each edge, set each triangle up with face normals, and then extrude in the shader. This may be the easiest way to get “exact” volumes, but it taxes the transform part of the card much harder, especially if you use lots of bones and high-poly characters.
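The per-vertex decision such a shader makes looks roughly like this (written as C++ for clarity; faceNormal is the face normal stored with the vertex, a point light is assumed, and all names are made up):

[i]
// C++ illustration of the per-vertex shader logic; all names are placeholders.
Vec3 ExtrudeIfAwayFacing(const Vec3& pos, const Vec3& faceNormal,
                         const Vec3& lightPos, float extrudeDist)
{
    Vec3 fromLight = Normalize(pos - lightPos);
    if (Dot(faceNormal, fromLight) > 0.0f)      // face points away from light
        return pos + fromLight * extrudeDist;   // push out along the light ray
    return pos;                                 // front-facing verts stay put
}
[/i]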

You can give up, and realize that there’s a reason that the CG motion picture people use shadow buffers rather than shadow volumes :)

Another suggestion: how about computing the normals for the faces, indexing them for each triangle, and then rotating them and testing against the light? I haven’t tested it, but if it works you get to avoid the cross product and normalization :\

Cheers!

The problem with this is probably that each triangle can be distorted beyond recognition, since different bones can affect its vertices… how would you rotate the normal? With what matrix?

This is the same problem as with tangent-space bump mapping on skinned meshes, I guess. The tangent space can be distorted since triangles can change shape totally…

or?

/hObbE

You could try transforming your light vector into object space, calculating your shadow volume, and then sending both the object and your shadow volume to the vertex/skinning program?

I can see problems in cases where the shadow volume might get deformed out of existence in parts of the volume, but I guess that would depend on the bone transformations…

It might be viable for some meshes and not for others.
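The first step of that idea would be something like this (a one-line sketch; Inverse and TransformPoint3 are assumed helpers, not from this thread):

[i]
// Sketch: bring the light into the mesh's (bind-pose) object space once,
// then build the shadow volume from the unskinned vertices as usual.
Vec3 lightObj = TransformPoint3(Inverse(modelToWorld), lightWorld);
[/i]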

tobiaso: I thought you’d say that, but how about precomputing a list of the triangles that are shared by multiple bones (or flagging them) and then blending between the results? That would be optimal for a small number of bones per vertex.

But one thing should not be forgotten: what happens when you’re skinning? You select the matrices that get multiplied and weighted with the vertices in the vertex shader, which means you don’t have any information on the exact positions of the vertices, and doing the face-normal calculation under these circumstances will result in an inaccurate normal. So the only way to do it exactly is to preprocess the vertices on the CPU, run the shadow creation over that data, and then send the vertices through the pipeline again. That way little loss in accuracy will occur… should occur.

The tangent vectors can be multiplied with the same matrices and blended, resulting in the same space as the vertices which own them. Some normal-texture warping might occur, but if you have a highly tessellated mesh with multiple bones per vertex, you should get a reasonably accurate approximation.

Cheers!


So what this actually means is that vertex shaders are pretty useless for skinning with stencil shadows…

Yet another suggestion: similar to the degenerate quads approach, but with even more brute force. Store all 3 vertex positions of the triangle in each vertex (the original vertex position + 2 clones of the adjacent vertex positions in the triangle; that’s OK because vertices must be unique per triangle anyway). Then apply skinning to all 3 positions independently, and simply recalculate the triangle normal.

The cost is about 40% larger vertex size (an extra 2 positions and weights), and each original vertex position gets skinned 18 times (x6 more vertices * 3 positions in each). But this applies only to the skinning of positions; the skinning of tangent space is not affected. So, compared to a non-skinned shadow volume, all the extra work in the VP is skinning 5 vectors instead of 3.
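A vertex layout for that approach might look like this (a sketch; all field names are made up):

[i]
// Sketch of the fattened shadow-volume vertex; field names are made up.
struct ShadowVolumeVertex
{
    Vec3  posA;                 // this vertex's own bind-pose position
    Vec3  posB, posC;           // clones of the other two triangle vertices
    float boneA, boneB, boneC;  // one bone index per cloned position
};
// In the vertex program: skin posA, posB, posC independently, then
// faceNormal = Normalize(Cross(posB' - posA', posC' - posA')).
[/i]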

You can give up, and realize that there’s a reason that the CG motion picture people use shadow buffers rather than shadow volumes :)

However, even with shadow maps the silhouette of the mesh can be useful for some “economical” soft-shadow tricks.

rgpc:
The problem of computing the edges still remains, though… since you do not have the normals of the triangles after the skinning process, you will have to transform the mesh to world space on the CPU side, calculate the triangle normals, and create the shadow mesh…

not very nice when you have vertex shaders…

PixelDuck and MZ
I’m not quite sure I get your suggestions… I’m not a guru on these things…

Also I like things simple :)

This is my main issue with vertex shaders… Making a skinned mesh with shadow volumes seems so basic nowadays, and yet it’s a major problem upsetting the whole architecture of renderers… Weren’t they supposed to make things easy…

like:
“Well, we use these nifty shaders for everything, but for the skinned meshes we do everything on the cpu side…”

or maybe it’s only me…

Whoops, I’m rambling…

regards!

/hObbE

tobiaso,

Two ways to get FACE normals in a shader:

  1. create a special version of your geometry for the shadows. This version has slivers of degenerate triangles inserted between each edge. Each non-degenerate triangle is drawn from three verts that all have the same (face) normal. This causes a heavier load on the GPU (see the 3dmark thread for all the details you could wish for :) ) — see the sketch after this list.

  2. send “the two other” vertices along as extra vertex attributes, and use a cross product. Because different uses of the same vertex would have to have different “other two” vertices, you actually need to split up your geometry to get less sharing, and in the end you end up with the same geometry overhead as in 1).
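Here’s a sketch of how the geometry for 1) might be prepared offline (hypothetical Vec3/Cross/Normalize helpers again; the degenerate edge quads are omitted for brevity):

[i]
#include <vector>

// Sketch: build unshared triangles whose three verts all carry the face
// normal; the degenerate edge quads would be appended afterwards.
struct SVVert { Vec3 pos; Vec3 faceNormal; };

std::vector<SVVert> BuildFaceNormalMesh(const std::vector<Vec3>& pos,
                                        const std::vector<unsigned>& idx)
{
    std::vector<SVVert> out;
    out.reserve(idx.size());
    for (size_t i = 0; i + 2 < idx.size(); i += 3)
    {
        const Vec3 &A = pos[idx[i]], &B = pos[idx[i + 1]], &C = pos[idx[i + 2]];
        Vec3 n = Normalize(Cross(B - A, C - A));
        out.push_back({A, n});
        out.push_back({B, n});
        out.push_back({C, n});
    }
    return out;
}
[/i]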