Ext::VertArrays:: Multiple TriStrips instead of TriLists?

Can any hardware folks comment on possibly adding support for drawing a series of tri-strips from vertex arrays in the future?

The GL_IBM_multimode_draw_arrays extension is a good example of this.
However, since it's strips that are the ultimate in performance, all we'd need is the ability to supply index strips with a terminator between successive strips. Then we could send a ton of strips all at once.
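
For comparison, the IBM extension already lets you batch strips like the sketch below (the stripLenN / stripIndicesN names are made up for this example); what I'm asking for instead is one flat index array with a reserved restart value between strips:

```c
/* One call, many strips, each with its own index pointer and count. */
GLenum        modes[3]  = { GL_TRIANGLE_STRIP, GL_TRIANGLE_STRIP, GL_TRIANGLE_STRIP };
GLsizei       counts[3] = { stripLen0, stripLen1, stripLen2 };
const GLvoid *inds[3]   = { stripIndices0, stripIndices1, stripIndices2 };

glMultiModeDrawElementsIBM(modes, counts, GL_UNSIGNED_SHORT, inds, 3, sizeof(GLenum));
```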

Seems like it should be relatively easy on the driver side: just cycle the vertex cache, fetching one new vertex per triangle within a strip, and at each terminator (actually, at each new strip's beginning) fetch the next three vertices.
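
To sketch what I mean on the driver side (just a guess at the shape of the code; RESTART, begin_strip() and emit_strip_vertex() are placeholders):

```c
#define RESTART 0xFFFF  /* reserved terminator value in the index stream */

static void draw_restart_strips(const GLushort *idx, GLsizei count)
{
    GLsizei i = 0;
    while (i < count) {
        begin_strip();                        /* reset strip state; first 3 indices seed it */
        while (i < count && idx[i] != RESTART)
            emit_strip_vertex(idx[i++]);      /* ~1 new vertex per triangle after the first */
        if (i < count)
            ++i;                              /* skip the terminator itself */
    }
}
```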

It would cut index bandwidth to roughly a third of what indexed triangle lists need (about n+2 indices for a strip of n triangles versus 3n for a list), which maybe is nominal, and it would keep the vertex cache coherent.

Hmmm, I wonder if the memory from NV_vertex_array_range can be used for indices?

Do not use vertex_array_range memory for indices. I’ve profiled an app that spends most of its time in the nVidia driver, and it reads out the indexes you pass into it. Reading from uncacheable memory is Bad™.
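
In other words, keep the split like this (a minimal sketch; the buffer sizes and the AGP read/write/priority hints are placeholders):

```c
/* Vertex data in vertex_array_range (AGP/video) memory... */
GLfloat *verts = (GLfloat *)wglAllocateMemoryNV(vertBytes, 0.0f, 0.0f, 0.5f);
glVertexArrayRangeNV(vertBytes, verts);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

glVertexPointer(3, GL_FLOAT, 0, verts);
glEnableClientState(GL_VERTEX_ARRAY);

/* ...but the indexes in plain, cacheable system memory. */
GLushort *inds = (GLushort *)malloc(indexBytes);
/* fill verts and inds, then: */
glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_SHORT, inds);
```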

Specifically, when you pass GL_UNSIGNED_SHORT indexes in, there’s one loop that converts short to int which gets run and takes 8% of the app’s time, and another loop that converts int back to short which gets run and takes another 8% of the app’s time.

When passing GL_UNSIGNED_INT, the first loop goes away, but the second loop (actually, it’s a pair of loops which do the same thing) still takes up this time.
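
To illustrate, the hot loops look roughly like this (my reconstruction from the profile, not the actual driver source):

```c
/* Widening copy the driver appears to do for GL_UNSIGNED_SHORT input;
 * if 'in' points into vertex_array_range memory, every read is uncached. */
static void widen_indexes(const GLushort *in, GLuint *out, GLsizei count)
{
    GLsizei i;
    for (i = 0; i < count; ++i)
        out[i] = in[i];            /* short -> int, ~8% of the app's time */
}

/* The reverse, int -> short, runs later regardless of the input type. */
static void narrow_indexes(const GLuint *in, GLushort *out, GLsizei count)
{
    GLsizei i;
    for (i = 0; i < count; ++i)
        out[i] = (GLushort)in[i];  /* another ~8% of the app's time */
}
```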

Even when drawing display lists, that second loop converting int to short gets run. If only I could provide GL_UNSIGNED_SHORT and have none of these conversion loops run :(

(this is using 6.50 drivers on a GF2U on W2K by the way)

[This message has been edited by jwatte (edited 03-07-2001).]

Thanks for the info, jwatte!

Does this apply to compiled vertex arrays, or just to standard vertex arrays (which are known to pre-scan the verts referenced by the indices), or to Draw_Array_Range?

Hopefully Nvidia people have taken notice.
Since DX8 has index buffers, I’d expect a similar mechanism to be extended to GL.

It would be nice to be able to “lock” the index and vertex data, telling the card that I won’t change it until I “unlock” it. I couldn’t find an extension that does that for indices; they are passed as direct primitive arguments rather than pre-set with a glXxxxPointer() call.
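
For vertex data, the closest existing thing seems to be EXT_compiled_vertex_array, roughly as sketched below (array names are placeholders), but there is no equivalent lock for the index array, which still gets handed to glDrawElements on every call:

```c
glVertexPointer(3, GL_FLOAT, 0, verts);
glEnableClientState(GL_VERTEX_ARRAY);

glLockArraysEXT(0, vertexCount);    /* promise: the locked verts won't change */
glDrawElements(GL_TRIANGLE_STRIP, count0, GL_UNSIGNED_SHORT, stripIndices0);
glDrawElements(GL_TRIANGLE_STRIP, count1, GL_UNSIGNED_SHORT, stripIndices1);
glUnlockArraysEXT();
```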