Stencil shadow info

From what I gather from the NeHe tutorial, to draw a stencil shadow you basically project every edge of the mesh in one direction. If you had a triangle between A, B, and C, you would create a quad out of A and B plus two other vertices offset along the shadow's direction.
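A minimal sketch of that idea: take one edge of a triangle and extrude its two vertices away from a point light to form the quad. The helper names (`extrude`, `shadow_quad`) and the fixed extrusion distance are illustrative, not from any particular engine.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def extrude(v, light, dist=1000.0):
    """Push vertex v away from a point light by `dist` units."""
    d = sub(v, light)
    return (v[0] + d[0] * dist, v[1] + d[1] * dist, v[2] + d[2] * dist)

def shadow_quad(a, b, light):
    """Quad formed by edge (a, b) and its two extruded copies."""
    return [a, b, extrude(b, light), extrude(a, light)]

quad = shadow_quad((0, 1, 0), (1, 1, 0), light=(0, 5, 0))
```

Real implementations usually extrude to infinity with a projection-matrix trick rather than a fixed distance, but the per-edge quad construction is the same.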

I’m rather surprised that you actually have to adjust the vertex array in real time. Isn’t there a way to do it so that this is automated somehow? Is that really how it is done in Doom 3?

Doom does this in software, but it optimizes the resulting beam tree heavily. Also, if the light and geometry don't move, you don't have to recreate the silhouette.

It can be done automatically by creating degenerate edges and extruding two of the verts into a quad in a vertex program, based on dot products between the light vector and the normals of the contributing faces. But this imposes a huge geometric overhead: you draw two triangles for every edge in your model, and although most are rejected as degenerates, there is still significant bandwidth and transformation overhead.
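The CPU-side preprocessing for that trick might look like the sketch below: every interior edge becomes a zero-area quad whose two vertex pairs carry the normals of the two adjacent faces, so a vertex program (not shown) can later extrude one pair when its face points toward the light. All names here are illustrative assumptions, not an actual engine's API.

```python
def build_degenerate_quads(triangles):
    """triangles: list of (v0, v1, v2, face_normal) tuples, where the
    vertices are indices. Returns one degenerate quad per shared edge,
    each vertex tagged with the normal of its contributing face."""
    edge_faces = {}
    for v0, v1, v2, normal in triangles:
        for a, b in ((v0, v1), (v1, v2), (v2, v0)):
            key = (min(a, b), max(a, b))          # undirected edge key
            edge_faces.setdefault(key, []).append(normal)
    quads = []
    for (a, b), normals in edge_faces.items():
        if len(normals) == 2:                     # interior (shared) edge
            n0, n1 = normals
            # Zero-area quad: both vertex pairs lie on the same edge but
            # carry different normals, so extrusion can split them apart.
            quads.append([(a, n0), (b, n0), (b, n1), (a, n1)])
    return quads
```

This also makes the overhead complaint concrete: a closed mesh has roughly 1.5 edges per triangle, so the degenerate quads alone triple the triangle count even before extrusion.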

but this imposes a huge geometric overhead drawing two triangles for every edge in your model
Please explain how doing the same in software would not create the same overhead of drawing two triangles per edge.

In software, only the silhouette geometry needs to be sent, but in hardware every degenerate edge must go through the vertex program.

How do you tell which triangles are silhouette triangles? I suppose you could find the plane equation of each triangle, transform it to camera space, and look for edges that lie between a forward-facing triangle and a backward-facing one… but I don't really see that operation happening in real time.

You check the adjacent face normals against the light vector.

In the case of a vertex program, you extrude an edge's vertex pair (or not) based on that dot product. There are two vertex pairs per edge with different normals, i.e. two identical edges running in opposite directions, forming a degenerate quad whose vertices carry normals from the two different faces. In the case of software, you know the result for both faces up front, so you can reject or extrude an edge based on both results at once.
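The software path described above reduces to a simple test: an edge is on the silhouette when exactly one of its two adjacent faces points toward the light, and only those edges get extruded. A minimal sketch, with illustrative helper names:

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def silhouette_edges(edges, to_light):
    """edges: list of ((va, vb), normal_face0, normal_face1).
    to_light: direction from the surface toward the light.
    Returns edges whose adjacent faces disagree about facing the light."""
    result = []
    for (va, vb), n0, n1 in edges:
        facing0 = dot(n0, to_light) > 0.0
        facing1 = dot(n1, to_light) > 0.0
        if facing0 != facing1:    # one lit face, one unlit: silhouette
            result.append((va, vb))
    return result
```

Since both face results are known on the CPU, non-silhouette edges are dropped before anything is sent to the GPU, which is exactly the bandwidth saving over the all-degenerate-edges hardware approach.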

No wonder Doom 3 characters are so low-poly.

In HL2, did they use stencils or a projected texture?

The issue isn’t as straightforward as that. Doom 3’s lighting-pass accumulation and consistent shadow processing made stencils the right choice for that engine. Image-based approaches would have had resolution, filtering, and performance issues, although they’re now viable on newer hardware.

You know, I could use a super low-poly stencil mesh to cast shadows with. It would cut the processing time down a lot, and would look about the same when projected as a shadow.

Low-poly shadow meshes have their own problems (bad self-shadowing). Don’t try to figure everything out from scratch. Shadow volume techniques are well researched; just read the papers:
http://developer.nvidia.com/object/robust_shadow_volumes.html

I think halo meant he would use the low-poly mesh only for the shadowing, not for rendering the model.

I don’t really need the objects to self-shadow. I’m writing more of an HL2 engine than a Doom 3 one.