
View Full Version : Building an adjacency graph



NordFenris
01-04-2003, 04:25 PM
I'm trying to speed up our shadow volume code, and for that we need to find the mesh silhouette. We were thinking of taking the culled-neighbour approach (any edge having one culled and one unculled triangle is considered part of the silhouette).

So, we set out to build an adjacency graph, to see which polygons neighbour which. I tried the brute force approach: for each polygon, sift through the whole poly buffer and see which polygons share two vertices (i.e. an edge) with it. After that there'll be some sorting, but already here I've stumbled upon a problem. When building the graph for a 2000-poly model, I quickly end up with 36,000,000 vertex comparisons (n² · v², where n is the number of polygons and v is the number of vertices per polygon).
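For concreteness, here's roughly what that brute-force pass looks like — just a sketch with a hypothetical Tri struct, not our actual code:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical triangle: three indices into a shared vertex buffer.
struct Tri { int v[3]; };

// Brute-force adjacency: two triangles are neighbours if they share
// exactly two vertex indices (an edge). That's O(n^2) triangle pairs,
// with 9 index comparisons per pair -- the n^2 * v^2 cost above.
std::vector<std::vector<int>> buildAdjacencyBruteForce(const std::vector<Tri>& tris)
{
    std::vector<std::vector<int>> neighbours(tris.size());
    for (std::size_t i = 0; i < tris.size(); ++i) {
        for (std::size_t j = i + 1; j < tris.size(); ++j) {
            int shared = 0;
            for (int a = 0; a < 3; ++a)
                for (int b = 0; b < 3; ++b)
                    if (tris[i].v[a] == tris[j].v[b])
                        ++shared;
            if (shared == 2) {  // one shared edge
                neighbours[i].push_back(static_cast<int>(j));
                neighbours[j].push_back(static_cast<int>(i));
            }
        }
    }
    return neighbours;
}
```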

Is there a better way to build a graph like this, or is this bang-your-head-against-the-wall-until-one-of-you-gives-in method the only known way?

Of course, I need to ask - is this how the shadow silhouette is usually calculated? Or have I made a huge mistake?

Thanks for any input!

dorbie
01-04-2003, 06:55 PM
I think so, but it depends on the model to a degree (hints can be built in).

If you're familiar with the marching cubes algorithm: it marches along the isosurface of a volume by testing values against a threshold, avoiding the voxels out in the boondocks, but it relies on an initial hit and on simple topology.

I think you could take a similar approach with the silhouette of a model. Once you find an edge where the two adjacent facets have reversed-sign dot products with the light vector, you could march along connected facets until you complete the loop.
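The edge test itself would be something like this — a sketch with a hypothetical Vec3 type, the march itself omitted:

```cpp
// An edge lies on the silhouette when its two adjacent facets face
// opposite ways relative to the light.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// n0, n1: normals of the two facets sharing the edge;
// toLight: vector from the edge toward the light source.
bool isSilhouetteEdge(const Vec3& n0, const Vec3& n1, const Vec3& toLight)
{
    // Reversed-sign dot products: one facet lit, the other unlit.
    return (dot(n0, toLight) > 0.0f) != (dot(n1, toLight) > 0.0f);
}
```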

In addition, any holes would have to be marked as hot, and you'd search for additional silhouettes (this is what I meant by hints above).

I don't think you can make it cache coherent for all orientations (or even most), but at least you're not touching everything.

jwatte
01-04-2003, 07:06 PM
Assuming you have unified verts (no two verts with different indices have the same position), this is linear in time and requires O(V²) of virtual storage (of which only a linear amount will actually be touched, so it can be optimized using various sparse storage mechanisms).




This is assuming you have closed manifolds made out of single-sided triangles. Extend to deal with open edges as necessary (open edges == bad for stencil volumes).

Allocate and clear VertToPolyMap[V][V].

For ThisTriangle over all triangles:
    For each ordered edge A->B of ThisTriangle (three edges):
        assert( !VertToPolyMap[A][B] );
        VertToPolyMap[A][B] = ThisTriangle;
        if( VertToPolyMap[B][A] ) {
            Triangle[VertToPolyMap[B][A]]->addNeighbor( ThisTriangle );
            Triangle[ThisTriangle]->addNeighbor( VertToPolyMap[B][A] );
        }
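One concrete way to get the sparse storage is to key a hash map on the ordered edge instead of allocating the V×V array — a sketch, assuming vertex indices fit in 32 bits and the mesh is a closed manifold of consistently wound triangles:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical triangle: three indices into a shared vertex buffer.
struct Tri { int v[3]; };

// Sparse version of VertToPolyMap: each ordered edge A->B maps to the
// triangle that owns it. Linear time, linear memory actually touched.
std::vector<std::vector<int>> buildAdjacency(const std::vector<Tri>& tris)
{
    std::unordered_map<std::uint64_t, int> edgeToTri; // ordered edge -> triangle
    std::vector<std::vector<int>> neighbours(tris.size());

    // Pack the ordered edge (a, b) into a single 64-bit key.
    auto key = [](int a, int b) {
        return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(a)) << 32)
             | static_cast<std::uint32_t>(b);
    };

    for (int t = 0; t < static_cast<int>(tris.size()); ++t) {
        for (int e = 0; e < 3; ++e) {
            int a = tris[t].v[e];
            int b = tris[t].v[(e + 1) % 3];
            edgeToTri[key(a, b)] = t;            // record our ordered edge A->B
            auto it = edgeToTri.find(key(b, a)); // neighbour stored it as B->A
            if (it != edgeToTri.end()) {
                neighbours[it->second].push_back(t);
                neighbours[t].push_back(it->second);
            }
        }
    }
    return neighbours;
}
```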


Actually creating unified vert positions from "split" verts (verts the way you'd pass them to GL) can be done in about O(V) as well, by hashing on the vert position value.
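A sketch of that welding step, assuming duplicate positions are bit-identical (which GL-style split verts usually are) — hypothetical Vec3 type:

```cpp
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

struct Vec3 {
    float x, y, z;
    bool operator==(const Vec3& o) const { return x == o.x && y == o.y && z == o.z; }
};

// Simple combining hash over the three components.
struct Vec3Hash {
    std::size_t operator()(const Vec3& v) const {
        std::hash<float> h;
        return h(v.x) ^ (h(v.y) * 31) ^ (h(v.z) * 131);
    }
};

// Returns, for each original vert, the index of the first vert with the
// same position -- use this remap when building the adjacency graph.
std::vector<int> unifyVerts(const std::vector<Vec3>& positions)
{
    std::unordered_map<Vec3, int, Vec3Hash> seen;
    std::vector<int> remap(positions.size());
    for (std::size_t i = 0; i < positions.size(); ++i) {
        auto it = seen.find(positions[i]);
        if (it == seen.end()) {
            seen[positions[i]] = static_cast<int>(i);
            remap[i] = static_cast<int>(i);
        } else {
            remap[i] = it->second;
        }
    }
    return remap;
}
```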

jwatte
01-04-2003, 07:09 PM
Clarification: By "this is linear in time," I mean the operation the original requester asked about: building the initial adjacency graph (which can be done once, as an offline thing).

tarantula
01-04-2003, 11:40 PM
Is there some way to speed up the calculation for large models, based on the fact that the light's position doesn't change abruptly? Lots of silhouette edges will be the same between two consecutive frames, and the rest will probably move by about a triangle. But if the model is concave, lots of new silhouette edges might get added in a single frame; maybe that's why no one bothers with this.
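A sketch of what I mean — cache per-face lit flags and per-edge status from the previous frame, and only retest edges whose adjacent faces flipped (hypothetical structures; the real win would be skipping heavier per-edge work than this trivial comparison):

```cpp
#include <cstddef>
#include <vector>

// Edge identified by the indices of its two adjacent faces.
struct FaceEdge { int f0, f1; };

struct SilhouetteCache {
    std::vector<char> lit;          // previous frame's per-face lit flag
    std::vector<char> onSilhouette; // previous frame's per-edge status
};

// Recompute silhouette status, retesting only edges where an adjacent
// face changed between lit and unlit since the previous frame; unchanged
// edges keep their cached status.
std::vector<char> updateSilhouette(const std::vector<FaceEdge>& edges,
                                   const std::vector<char>& curLit,
                                   SilhouetteCache& cache)
{
    const bool first = cache.lit.empty(); // no cache yet: test everything
    std::vector<char> status(edges.size());
    for (std::size_t e = 0; e < edges.size(); ++e) {
        const FaceEdge& ed = edges[e];
        bool changed = first ||
            cache.lit[ed.f0] != curLit[ed.f0] ||
            cache.lit[ed.f1] != curLit[ed.f1];
        status[e] = changed ? static_cast<char>(curLit[ed.f0] != curLit[ed.f1])
                            : cache.onSilhouette[e];
    }
    cache.lit = curLit;
    cache.onSilhouette = status;
    return status;
}
```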