After reading Nvidia's presentations from the Game Developer Conference, I have picked up conflicting information. Richard Huddy's presentation recommends using degenerate tri-strips (via indexing), while the OpenGL optimization notes say never to use degenerate tri-strips. Perhaps that advice applies only to non-indexed strips?
It seems that if your mesh data is already arranged as a batch of tri-strips, then connecting those batches with degenerate triangles reduces index bandwidth and gives the driver less to fetch per triangle into the vertex cache.
Any clarifications or recommendations would be helpful. ATI? Matrox?