
Thread: Degenerate Tri-Strips vs Tri-Lists Again

  1. #1
    Intern Newbie
    Join Date: Jan 2001
    Location: Nowhereland
    Posts: 38

    Degenerate Tri-Strips vs Tri-Lists Again

    After reading NVIDIA's presentations from the Game Developers Conference, I have picked up conflicting information. Richard Huddy's presentation recommends using degenerate tri-strips (via indexing), while the OpenGL optimization presentation says to *never* use degenerate tri-strips. Perhaps that advice applies only to non-indexed strips?

    It seems that if your mesh data is already arranged as batches of tri-strips, then connecting those batches with degenerate triangles reduces index bandwidth and gives the driver less to fetch per triangle into the cache.
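
    For illustration, here is a minimal sketch of that stitching idea in C, assuming 16-bit indices; stitch_strips() is a hypothetical helper, not something from the presentations mentioned above. The repeated indices produce zero-area (degenerate) triangles that the hardware rejects, so both strips can be submitted with a single glDrawElements() call.

    #include <GL/gl.h>
    #include <string.h>

    /* Append strip B to strip A, joining them with repeated ("degenerate")
     * indices.  The caller provides 'out' with room for countA + countB + 3
     * indices; the return value is the count to pass to glDrawElements(). */
    static GLsizei stitch_strips(const GLushort *a, GLsizei countA,
                                 const GLushort *b, GLsizei countB,
                                 GLushort *out)
    {
        GLsizei n;

        memcpy(out, a, countA * sizeof(GLushort));
        n = countA;

        out[n++] = a[countA - 1];       /* repeat last index of strip A   */
        if (countA & 1)
            out[n++] = a[countA - 1];   /* extra repeat keeps B's winding */
        out[n++] = b[0];                /* repeat first index of strip B  */

        memcpy(out + n, b, countB * sizeof(GLushort));
        n += countB;

        return n;
    }

    /* Usage, once the vertex arrays are set up:
     *     total = stitch_strips(stripA, 10, stripB, 12, combined);
     *     glDrawElements(GL_TRIANGLE_STRIP, total, GL_UNSIGNED_SHORT, combined);
     */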

    Any clarifications or recommendations would be helpful. ATI? Matrox?

  2. #2
    Senior Member OpenGL Guru
    Join Date: Mar 2001
    Posts: 2,411

    Re: Degenerate Tri-Strips vs Tri-Lists Again

    How about writing a test case and timing it? I'd be very interested in seeing what results you'd come up with.

    I also believe the main penalty is the set-up overhead per call to DrawElements(), rather than the extra two vertices needed to start a new strip, at least with hardware transform and lighting.
    "If you can't afford to do something right,
    you'd better make sure you can afford to do it wrong!"
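
    Following up on the test case suggested above, here is a minimal sketch of such a timing test in C, assuming the vertex arrays are already set up and that the per-strip and stitched index sets (hypothetical names) describe the same geometry. It compares one glDrawElements() call per strip against a single call for the stitched strip; glFinish() is used so the clock measures completed GPU work rather than just command submission.

    #include <GL/gl.h>
    #include <stdio.h>
    #include <time.h>

    void time_draw_paths(int numStrips,
                         const GLushort *const *stripIndices,
                         const GLsizei *stripCounts,
                         const GLushort *stitchedIndices,
                         GLsizei stitchedCount,
                         int frames)
    {
        clock_t t0, t1;
        int f, s;

        /* Path 1: one draw call per strip. */
        glFinish();
        t0 = clock();
        for (f = 0; f < frames; ++f)
            for (s = 0; s < numStrips; ++s)
                glDrawElements(GL_TRIANGLE_STRIP, stripCounts[s],
                               GL_UNSIGNED_SHORT, stripIndices[s]);
        glFinish();             /* wait for the GPU before stopping the clock */
        t1 = clock();
        printf("separate strips: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        /* Path 2: one call for the strips stitched with degenerate triangles. */
        glFinish();
        t0 = clock();
        for (f = 0; f < frames; ++f)
            glDrawElements(GL_TRIANGLE_STRIP, stitchedCount,
                           GL_UNSIGNED_SHORT, stitchedIndices);
        glFinish();
        t1 = clock();
        printf("stitched strip:  %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    }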
