
Shadow Volumes with Animated Meshes



AMahajan
02-20-2003, 11:25 AM
I've coded and successfully rendered shadow volumes with static meshes. I just recently implemented my own bone-based model format where each vertex is attached to at most one bone. Each frame I transform the vertex positions and normals in a vertex program, using bone matrices that I pass in. My question is: how would I find which triangles face my light source for these animated meshes, given that a triangle can have its 3 verts transformed by 3 different bone matrices? I also have the original triangle normals at my disposal, in case that matters. I'm assuming this needs to be done on the CPU.

NitroGL
02-20-2003, 11:46 AM
The silhouette edge information is only calculated once; when animating you only modify the vertices, right? So the indices stay the same, and you shouldn't really have to do anything... unless you use a vertex program or something to do the animation, of course. :)

AMahajan
02-20-2003, 12:06 PM
I already have the edges precomputed. What I mean is, when I decide WHICH edges to extrude, I check which triangles face the light source (light dot triangle normal). My problem is that when I modify the vertex positions in the vertex program, the triangle normal changes.

chrisATI
02-20-2003, 12:42 PM
Originally posted by AMahajan:
I already have the edges precomputed. What I mean is, when I decide WHICH edges to extrude, I check which triangles face the light source (light dot triangle normal). My problem is that when I modify the vertex positions in the vertex program, the triangle normal changes.


You need to rotate the normals using the bone matrix before doing your N.L test.
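
A minimal sketch of that test, assuming a rotation-only bone matrix and simple helper types (all names here are illustrative, not from the thread):

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // rotation part of the bone matrix, row-major

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 rotate(const Mat3& r, Vec3 n)
{
    return { r.m[0][0]*n.x + r.m[0][1]*n.y + r.m[0][2]*n.z,
             r.m[1][0]*n.x + r.m[1][1]*n.y + r.m[1][2]*n.z,
             r.m[2][0]*n.x + r.m[2][1]*n.y + r.m[2][2]*n.z };
}

// Rotate the rest-pose face normal by its bone, then compare against the
// direction from the face toward the light.
bool facesLight(const Mat3& bone, Vec3 restNormal, Vec3 toLight)
{
    return dot(rotate(bone, restNormal), toLight) > 0.0f;
}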

pkaler
02-20-2003, 01:59 PM
Originally posted by chrisATI:

You need to rotate the normals using the bone matrix before doing your N.L test.


I don't think that is entirely correct. If you transform a vertex by a matrix then you need to transform the normal by the inverse transpose of the matrix.
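
For reference, the inverse transpose of a 3x3 matrix is its cofactor matrix divided by the determinant, and since only the normal's direction matters (you renormalize afterwards), the cofactor matrix alone will do. A sketch, with row-major storage as my assumption:

struct Mat3 { float m[3][3]; };  // upper 3x3 of the bone matrix, row-major

// Transforming a normal by this matrix gives a vector parallel to the
// inverse-transpose result (scaled by det; a reflection, det < 0, would
// flip the direction, but bone matrices normally contain none).
Mat3 cofactor(const Mat3& a)
{
    Mat3 c;
    c.m[0][0] = a.m[1][1]*a.m[2][2] - a.m[1][2]*a.m[2][1];
    c.m[0][1] = a.m[1][2]*a.m[2][0] - a.m[1][0]*a.m[2][2];
    c.m[0][2] = a.m[1][0]*a.m[2][1] - a.m[1][1]*a.m[2][0];
    c.m[1][0] = a.m[0][2]*a.m[2][1] - a.m[0][1]*a.m[2][2];
    c.m[1][1] = a.m[0][0]*a.m[2][2] - a.m[0][2]*a.m[2][0];
    c.m[1][2] = a.m[0][1]*a.m[2][0] - a.m[0][0]*a.m[2][1];
    c.m[2][0] = a.m[0][1]*a.m[1][2] - a.m[0][2]*a.m[1][1];
    c.m[2][1] = a.m[0][2]*a.m[1][0] - a.m[0][0]*a.m[1][2];
    c.m[2][2] = a.m[0][0]*a.m[1][1] - a.m[0][1]*a.m[1][0];
    return c;
}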

JoeR
02-20-2003, 03:19 PM
If you only have the 3 vertices of the triangle once the transform is complete, then you need to recompute the normal of the triangle.

This can be accomplished by doing a cross-product of two edges of the triangle. Example:


A triangle with vertices A, B, C:

vector = CrossProduct(B - A, C - A);

//
// Make sure to normalize it!
//
normal = Normalize(vector);


If your triangle is wound clockwise as seen from the front, this should produce the desired normal. If it is wound clockwise as seen from the back, the normal will be reversed. You can simply negate the components of the new vector to flip it, or swap the input vectors to your cross-product.

Also, note that to properly test whether the light is in front or behind, you need to build a plane equation from the triangle, which is a normal AND a distance. The distance can be found by taking the triangle's new normal and doing a dot product with one of its vertices.
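
For instance, a sketch of that plane test in the same pseudocode as above (names beyond CrossProduct/Normalize are assumptions):

// Plane through the triangle: DotProduct(normal, x) == d for points x on it.
normal = Normalize(CrossProduct(B - A, C - A));
d      = DotProduct(normal, A);

// A point light at lightPos is then on the front side when:
frontFacing = DotProduct(normal, lightPos) > d;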

Hope that helps.

pkaler
02-20-2003, 04:28 PM
An inverse transpose per mesh and 3 dot products per vertex should be cheaper than a cross-product and normalize for each vertex.

[This message has been edited by PK (edited 02-20-2003).]

JoeR
02-20-2003, 06:09 PM
Originally posted by PK:
An inverse transpose per mesh and 3 dot products per vertex should be cheaper than a cross-product and normalize for each vertex.

[This message has been edited by PK (edited 02-20-2003).]

Yes, it would be cheaper if you needed per-vertex information. However, if you re-read my post more carefully, you will note that we are generating a per-triangle plane so that we can test front/back conditions against a light source.

Performing a per-vertex light test would be useless for shadow-mesh edge determination.

jwatte
02-20-2003, 08:03 PM
PK,

For a general transform-to-normal transform, you need the inverse transpose.

However, if you use no non-uniform scale, and don't apply the translation part of the matrix to the normal, then you can run the normal through the regular matrix. That's why the suggestion was to ROTATE the normal by the transform matrix for the bone.

That's why you sometimes see vertices specified with a w of 1 (at that point meaning "full translation") and normals with a w of 0 (meaning "no translation") -- that way, the math just magically comes out right.

Again, assuming no non-uniform scale.
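
A tiny self-contained illustration of that w trick (the types and the column-major layout are my assumptions, not anything from the thread):

struct Vec4 { float x, y, z, w; };
struct Mat4 { float c[4][4]; };  // c[column][row], column-major

// m * v: when v.w == 0 the translation column c[3] contributes nothing,
// so the very same bone matrix that moves points merely rotates normals.
Vec4 mul(const Mat4& m, const Vec4& v)
{
    return { m.c[0][0]*v.x + m.c[1][0]*v.y + m.c[2][0]*v.z + m.c[3][0]*v.w,
             m.c[0][1]*v.x + m.c[1][1]*v.y + m.c[2][1]*v.z + m.c[3][1]*v.w,
             m.c[0][2]*v.x + m.c[1][2]*v.y + m.c[2][2]*v.z + m.c[3][2]*v.w,
             m.c[0][3]*v.x + m.c[1][3]*v.y + m.c[2][3]*v.z + m.c[3][3]*v.w };
}

// mul(bone, {p.x, p.y, p.z, 1.0f});  // vertex: rotation + translation
// mul(bone, {n.x, n.y, n.z, 0.0f});  // normal: rotation only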

If you use non-uniform scale, you need one inverse transpose per bone matrix, and your cache will run a little warmer because you might have two matrices where previously you had one.

Last, I agree: if you want the extrusion to be 100% correct, you end up having to re-derive the face normal on the CPU.

AMahajan
02-20-2003, 10:31 PM
Thanks for all of the suggestions guys, I'm already transforming all of the vertex normals in the vertex program. Calculating the triangle/plane normal and distance is going to be much harder than I thought. I guess I might as well transform the vertices/normals on the CPU and then generate the planes from there if I want accurate shadowing.

pkaler
02-20-2003, 11:49 PM
Originally posted by jwatte:

That's why you sometimes see vertices specified with a w of 1 (at that point meaning "full translation") and normals with a w of 0 (meaning "no translation") -- that way, the math just magically comes out right.


Woah! Just worked out the math. Math is cool!!!

tobiaso
02-24-2003, 07:19 AM
So what this actually means is that vertex shaders are pretty useless when skinning with stencil shadows...

Since you have to calculate the normals to detect the edges that form the shadow volume, and you need the transformed vertex positions to construct the actual shadow volume...

or?

regards

/hObbE

jwatte
02-24-2003, 08:47 PM
You can extrude away-facing vertices, which will give you a little bit of volume shrink (depending on tessellation) but may look OK.

You can insert degenerate quads between each edge, set each triangle up with face normals, and then extrude in the shader. This may be the easiest way to get "exact" volumes, but it taxes the transform part of the card much harder, especially if you use lots of bones and high-poly characters.

You can give up, and realize that there's a reason that the CG motion picture people use shadow buffers, rather than shadow volumes :-)

PixelDuck
02-24-2003, 09:03 PM
Another suggestion: how about computing the normals for the faces, indexing them for each triangle, and then rotating and testing against the light? I haven't tested it, but if it works you get to avoid the cross product and normalization :\

Cheers!

tobiaso
02-26-2003, 02:04 AM
The problem with this is probably that each triangle can be distorted beyond recognition, since different bones can affect its vertices... how would you rotate the normal? With what matrix?

This is the same problem as with tangent-space bump mapping on skinned meshes, I guess. The tangent space can be distorted since the triangles can change shape completely...

or?

/hObbE

rgpc
02-26-2003, 04:29 AM
You could try transforming your light vector into object space, calculating your shadow volume, and then sending both the object and your shadow volume through the vertex/skinning program?

I can see problems in cases where the shadow volume might get deformed out of existence in parts of the volume, but I guess that would depend on the bone transformations...

It might be viable for some meshes and not for others.
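
One way to read that suggestion, sketched for a rigid model matrix (the layout and names are my assumptions):

struct Vec3 { float x, y, z; };
struct Mat4 { float c[4][4]; };  // column-major, rotation + translation only

// Inverse of a rigid transform applied to a point: p' = R^T * (p - t).
// Build the volume around this light position in the rest pose, then let
// the skinning program deform mesh and volume alike.
Vec3 toObjectSpace(const Mat4& m, Vec3 p)
{
    float x = p.x - m.c[3][0], y = p.y - m.c[3][1], z = p.z - m.c[3][2];
    return { m.c[0][0]*x + m.c[0][1]*y + m.c[0][2]*z,
             m.c[1][0]*x + m.c[1][1]*y + m.c[1][2]*z,
             m.c[2][0]*x + m.c[2][1]*y + m.c[2][2]*z };
}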

PixelDuck
02-26-2003, 06:03 AM
tobiaso: I thought you'd say that, but how about precomputing a list of the triangles that are shared by multiple bones (or flagging them) and then blending between the results? That would be optimal for a small number of bones per vertex.

But one thing should not be forgotten: what happens when you're skinning? The matrices are multiplied and weighted with the vertices in the vertex shader, so you have no information on the exact positions of the vertices, and doing the face-normal calculation under those circumstances will give an inaccurate normal. So the only way to do it exactly is to preprocess the vertices, run the shadow creation over that data, and then send the vertices through the pipeline again. That way little loss in accuracy will occur... should occur.

The tangent vectors can be multiplied with the same matrices and blended, resulting in the same space as the vertices that own them. Normal-texture warping might still occur, but if you have a highly tessellated mesh with multiple bones per vertex, you should get a reasonably accurate approximation.

Cheers!

[This message has been edited by PixelDuck (edited 02-26-2003).]

MZ
02-26-2003, 07:25 AM
Originally posted by tobiaso:
So what this actually means is that vertex shaders are pretty useless when skinning with stencil shadows...
Yet another suggestion: similar to the degenerate-quads approach, but with even more brute force: store all 3 vertex positions of the triangle in each vertex (the original vertex position + 2 clones of the adjacent vertex positions in the triangle - this is OK because vertices must be unique per triangle anyway). Then apply skinning to all 3 positions independently, and simply recalculate the triangle normal.

The cost is about 40% larger vertex size (an extra 2 positions and weights), and each original vertex position gets skinned 18 times (6x more vertices * 3 positions in each). But this applies only to skinning of _position_; skinning of tangent space is not affected. So, compared to a non-skinned shadow volume, all the extra work in the VP is skinning 5 vectors instead of 3.
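
A possible vertex layout for this scheme, assuming the thread's one-bone-per-vertex format (field names are illustrative):

struct ShadowVolumeVertex
{
    float pos[3];               // this vertex's rest-pose position
    float posB[3], posC[3];     // rest-pose positions of the triangle's
                                //   other two vertices (the "clones")
    float bone, boneB, boneC;   // one bone index per position
};

// In the vertex program: skin pos, posB and posC independently, then
// faceNormal = cross(posB' - pos', posC' - pos') for the facing test.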


Originally posted by jwatte:
You can give up, and realize that there's a reason that the CG motion picture people use shadow buffers, rather than shadow volumes :-)
However, even with shadow maps, the silhouette of the mesh can be useful for some "economic" soft-shadow tricks.

tobiaso
02-26-2003, 07:46 AM
rgpc:
The problem of computing the edges still remains, though... since you do not have the normals of the triangles after the skinning process. This means that you will have to transform the mesh to world space on the CPU side, calculate the triangle normals, and create the shadow mesh...

not very nice when you have vertex shaders....

PixelDuck and MZ
I'm not quite sure I get your suggestions.... I'm not a guru on these things...

Also I like things simple :-)

This is my main issue with vertex shaders... A skinned mesh with shadow volumes seems so basic nowadays, and yet it's a major problem that upsets the whole architecture of a renderer... Weren't shaders supposed to make things easy?

like:
"Well, we use these nifty shaders for everything, but for the skinned meshes we do everything on the cpu side..."

or maybe it's only me...

Whoops, I'm rambling...

regards!

/hObbE

jwatte
02-26-2003, 09:57 AM
tobiaso,

Two ways to get FACE normals in a shader:

1) create a special version of your geometry for the shadows. This version has slivers of degenerate triangles inserted between each edge. Each non-degenerate triangle is drawn from three verts that all have the same (face) normal. This causes a heavier load on the GPU (see the 3dmark thread for all the details you could wish for :-)

2) Send "the two other" vertices along as extra vertex attributes, and use a cross product. Because different uses of the same vertex would have to carry different "other two" vertices, you actually need to split up your geometry to get less sharing, and in the end you get the same geometry overhead as in 1).

AMahajan
02-27-2003, 02:02 PM
After looking over the way the triangle normals need to be calculated for bone-animated meshes, I really do think it may be a bigger benefit to transform vertex positions on the CPU (while keeping vertex normal transforms in hardware), then use the transformed verts to calculate the new triangle normals. I already implemented this method and it isn't as slow as it may seem at first. It has the added performance bonus of not transforming verts multiple times for each lighting pass (though it still does that for normals; I may consider doing those on the CPU as well).
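
A rough sketch of that CPU path, assuming one bone per vertex and illustrative types (this is not AMahajan's actual code):

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float c[4][4]; };  // column-major

Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

Vec3 transformPoint(const Mat4& m, Vec3 p)  // w = 1
{
    return { m.c[0][0]*p.x + m.c[1][0]*p.y + m.c[2][0]*p.z + m.c[3][0],
             m.c[0][1]*p.x + m.c[1][1]*p.y + m.c[2][1]*p.z + m.c[3][1],
             m.c[0][2]*p.x + m.c[1][2]*p.y + m.c[2][2]*p.z + m.c[3][2] };
}

void skinAndBuildFaceNormals(const std::vector<Vec3>& restPos,
                             const std::vector<int>&  boneIndex,
                             const std::vector<Mat4>& bones,
                             const std::vector<int>&  tris,  // 3 indices/tri
                             std::vector<Vec3>& skinned,
                             std::vector<Vec3>& faceNormal)
{
    skinned.resize(restPos.size());
    for (std::size_t i = 0; i < restPos.size(); ++i)
        skinned[i] = transformPoint(bones[boneIndex[i]], restPos[i]);

    faceNormal.resize(tris.size() / 3);
    for (std::size_t t = 0; t < faceNormal.size(); ++t) {
        Vec3 A = skinned[tris[3*t + 0]];
        Vec3 B = skinned[tris[3*t + 1]];
        Vec3 C = skinned[tris[3*t + 2]];
        faceNormal[t] = cross(B - A, C - A);  // direction is all we need
    }
}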

[This message has been edited by AMahajan (edited 02-27-2003).]

JoeR
02-27-2003, 06:50 PM
Generating the triangle planes from the transformed triangle-mesh vertices should be straightforward, as I explained in my earlier post.

The issue of vertex programs is an interesting one, so I can offer some advice based on what I am currently doing with the Abducted engine.

It is possible to compute silhouette edges in a vertex shader, but this is tricky and has the potential for a large triangle overhead when drawing shadow volumes. In a typical scene, shadow volumes can account for 1/3 to 1/2 of all triangles.

My suggestion is to break it into two stages: CPU skeletal transformation and calculation of triangle planes, and from those, calculation of silhouette edges.

By using 4-float positions, you can have the CPU insert 0's or 1's (as the w coordinate) as inputs to the vertex program that will extrude your volumes: 1's for vertices that lie on a silhouette boundary, and 0's for all others.

At the vertex program level you can choose to extrude your mesh by moving each vertex along the vertex-to-light vector a given distance. You can also use the 0-or-1 w coordinate to produce "infinite" shadow volumes.
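
One possible reading of that extrusion step, written CPU-side for clarity (the flag semantics and names are my assumptions, not the Abducted code):

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// 'flag' is the 0 or 1 the CPU packed per vertex: flagged copies are
// pushed away from a point light, and w == 0 places them at infinity
// under the usual projective math.
Vec4 extrude(Vec3 p, Vec3 lightPos, float flag)
{
    if (flag == 0.0f)
        return { p.x, p.y, p.z, 1.0f };  // stays on the occluder
    return { p.x - lightPos.x, p.y - lightPos.y, p.z - lightPos.z, 0.0f };
}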

NVIDIA has excellent papers on infinite shadow volumes.
http://www.nvidia.com/developer

If the code for the skeletal transforms is reasonably well optimized, it won't be a bottleneck. Typically your scene is going to be heavily fillrate (and thus graphics card) bound.

[This message has been edited by JoeR (edited 02-27-2003).]

JustHanging
03-03-2003, 12:31 AM
Just a quick correction to a misconception earlier in this thread: if you only use the face normals for backface culling and silhouette detection, you don't need to normalize them. A mere cross product will do just fine.
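
For example, with assumed helpers (only the sign of the result matters, so there is no Normalize call anywhere):

struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// True when the triangle (A, B, C) faces a point light at lightPos.
bool facesLight(Vec3 A, Vec3 B, Vec3 C, Vec3 lightPos)
{
    return dot(cross(B - A, C - A), lightPos - A) > 0.0f;
}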

-Ilkka