Tri strips vs. indexed vertex arrays

I am currently designing a program that will use highly tessellated triangle meshes, and I am trying to make it run as fast as possible in real time. I have played a lot with indexed vertex arrays in the past, but I was wondering how much faster it would be to use tri strips instead. In my current situation each point is shared among 3 or 4 tris, so when I render with indexed arrays I am saving quite a bit of data from having to be sent and processed. BUT are those points simply processed anyway? That is, if a point I send in is referenced 12 times in the index, is it transformed and drawn 12 times or only once? And if only once, how is tri stripping any faster?

As far as I’m aware:-
Some GPUs have a cache of previously transformed vertices - maybe the last 6 vertices encountered while dereferencing the index array/traversing the vertex array. So I suppose if tri-strips are used, then by definition the cache is used most effectively (a good % reuse of previously transformed vertices - i.e. each triangle in a tristrip is built from the last 3 vertices seen, two of which are shared with the previous triangle).
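To put a number on that, here's a quick sketch of my own (not from any vendor doc) that counts how many transforms an index list would cost, assuming a simple FIFO post-transform cache - both the cache size and the FIFO policy are assumptions:

    #include <cstddef>
    #include <deque>
    #include <vector>

    // Counts how many vertices would actually be transformed for a
    // given index list, assuming a simple FIFO post-transform cache.
    // Real caches may use other replacement policies - illustrative only.
    std::size_t TransformCount(const std::vector<unsigned>& indices,
                               std::size_t cacheSize)
    {
        std::deque<unsigned> cache;
        std::size_t transforms = 0;
        for (std::size_t i = 0; i < indices.size(); ++i)
        {
            bool hit = false;
            for (std::size_t c = 0; c < cache.size(); ++c)
                if (cache[c] == indices[i]) { hit = true; break; }
            if (!hit)
            {
                ++transforms;               // cache miss: re-transform
                cache.push_back(indices[i]);
                if (cache.size() > cacheSize)
                    cache.pop_front();      // oldest vertex falls out
            }
        }
        return transforms;
    }

Running the same mesh through this with a triangle-list index order versus a strip-like order shows where the reuse comes from.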

BTW, it’s not a choice between indexed arrays and tristrips - you can have an indexed tristrip. You should almost always use index arrays, because you’re bound to have shared vertices - so not only are you (maybe) saving on transforming a vertex twice, you’re also saving bandwidth by transferring less data across the bus.
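For example, the two draw calls are almost identical - a minimal sketch, assuming the vertex and index arrays (vertices, triIndices, stripIndices - my names) are already filled:

    #include <GL/gl.h>

    // Same vertex data, two ways to index it; only the primitive
    // mode and index count change between the two draws.
    void DrawIndexed(const float* vertices,
                     const unsigned* triIndices,   int numTriIndices,
                     const unsigned* stripIndices, int numStripIndices)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertices);  // 3 floats per vertex

        // Indexed triangle list: 3 indices per triangle.
        glDrawElements(GL_TRIANGLES, numTriIndices,
                       GL_UNSIGNED_INT, triIndices);

        // Indexed tristrip: after the first triangle, each further
        // index forms a new triangle with the previous two.
        glDrawElements(GL_TRIANGLE_STRIP, numStripIndices,
                       GL_UNSIGNED_INT, stripIndices);
    }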

[This message has been edited by knackered (edited 04-24-2003).]

OK, then does anyone know a good library (preferably free) that will stripify a random input of (indexed) triangles quickly? Most of the time my meshes will be static, but there will be times when they change, so I would like to change them and restripify them as quickly as possible. Any good libraries to use?

I heard nVidia had a tool called nvTriStrip for this, but I never really got a grasp on it, and it’s been a while since I’ve heard anything about it…

I fear stripifying will be slow.

I recommend this one:- http://users.pandora.be/tfautre/softdev/

But you could also use the NVidia one:- http://developer.nvidia.com/view.asp?IO=nvtristrip_v1_1

I tried and liked this one:
http://www.plunk.org/~grantham/public/actc/
by Brad Grantham

it’s got an OpenGL-like interface (i.e. begin … end)

Very nice…
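If it helps, its usage goes roughly like this - but the function names, types and return codes below are from my recollection of its header, so check them against the actual tc.h before relying on any of it:

    #include "tc.h"   // ACTC's header - name/path from memory, may differ

    // Feed indexed triangles in, pull strips/fans out, begin/end style.
    void StripifyWithACTC(const unsigned* idx, int numTris)
    {
        ACTCData* tc = actcNew();

        actcBeginInput(tc);
        for (int t = 0; t < numTris; ++t)
            actcAddTriangle(tc, idx[3*t], idx[3*t+1], idx[3*t+2]);
        actcEndInput(tc);

        actcBeginOutput(tc);
        unsigned v1, v2, v;
        int prim;
        while ((prim = actcStartNextPrim(tc, &v1, &v2))
                != ACTC_DATABASE_EMPTY)
        {
            // prim should be ACTC_PRIM_STRIP or ACTC_PRIM_FAN;
            // v1 and v2 are the first two vertices of the primitive.
            while (actcGetNextVert(tc, &v) != ACTC_PRIM_COMPLETE)
            {
                // append v to the current strip/fan here
            }
        }
        actcEndOutput(tc);

        actcDelete(tc);
    }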

Originally posted by Obli:
I fear stripifying will be slow.

It is, indeed, very slow. I tried nvTriStrip and it took ages!

First off, thank you all for the help, much appreciated. But I would really like to thank knackered - that library looks like the best so far. (I did like the last one someone mentioned, but reading through its specs I saw it might not be compatible with some systems because of the timer function it uses, and I’m trying to be VERY open platform.)

Secondly, I have another quick question: is there an OpenGL call that returns the vertex cache size of the specific card the user is using? I would like to make my strips as cache friendly as possible at run time. Thanks again.

EDIT:
Also, not to sound stupid, but I have another question for you, knackered. I’m having a hard time figuring out how to run that particular stripper. Would you be able to show me how to set it up? I currently have a std::vector of floats (3 apiece) for my points, and an index array (ints) for my triangles built from those points. But I’m not sure how to pass them to the tri stripper, nor am I entirely sure how to retrieve the strips it returns. I know it seems stupid of me, but I’m really not that great with classes, and I’m having a hard time deciphering the functions throughout the classes he used. (This is why I only use structs lol.) Thanks.

[This message has been edited by dabeav (edited 04-25-2003).]
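Something like the following should drive that library - the class and function names here are from my reading of its headers and may differ between versions, so treat it as a sketch, not gospel. Your float vector stays untouched; only the triangle indices go through the stripper:

    #include <vector>
    #include <GL/gl.h>
    #include "tri_stripper.h"   // from the Tri Stripper package above

    using namespace triangle_stripper;

    // triIndices: your index vector, 3 entries per triangle. The
    // positions stay in your existing float vector and are only
    // referenced again at draw time.
    void StripAndDraw(const std::vector<unsigned>& triIndices)
    {
        indices Indices(triIndices.begin(), triIndices.end());

        tri_stripper Stripper(Indices);
        Stripper.SetCacheSize(16);    // post-T&L cache size to target
        Stripper.SetMinStripSize(2);  // shorter runs stay as triangles

        primitive_vector Prims;
        Stripper.Strip(&Prims);

        // Each output group is either one strip or leftover triangles.
        for (std::size_t i = 0; i < Prims.size(); ++i)
        {
            GLenum mode = (Prims[i].Type == TRIANGLE_STRIP)
                              ? GL_TRIANGLE_STRIP : GL_TRIANGLES;
            glDrawElements(mode, (GLsizei)Prims[i].Indices.size(),
                           GL_UNSIGNED_INT, &Prims[i].Indices[0]);
        }
    }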

>>>Secondly, I have another quick question: is there an OpenGL call that returns the vertex cache size of the specific card the user is using? I would like to make my strips as cache friendly as possible at run time. Thanks again.<<<

hehe! No, there isn’t, but I suggested this on the “suggestions for future GL” forum a while ago.

I think there was an NV document that said the cache can hold 16 vertices (or 10 vertices effectively).

Very useful - will check it out in more detail ASAP!

What kind of algorithms do those strippers employ? I’ve heard of an “SGI method” and another one, maybe it was called “ERIT’s method”.

Hi,

from what I remember, the GeForce 1/2 has a vertex cache size of 16 (10 effective), but the GeForce 3/4 has a size of 24.

I don’t have a clue for the GeForce FX or ATI boards - but this could be tested, or found in various tech docs.

Cheers,
Nicolas.
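Since there’s no GL query for it, about the best you can do at run time is guess from the renderer string. A rough sketch - the 16 and 24 figures are the ones quoted above; the string matching and the fallback of 10 are pure heuristic:

    #include <cstring>
    #include <GL/gl.h>

    // Guess the post-T&L vertex cache size from the renderer string.
    // Requires a current GL context. The 16/24 figures come from the
    // posts above; the string checks and the default of 10 (the
    // "effective" size) are guesses, not vendor data.
    unsigned GuessVertexCacheSize()
    {
        const char* renderer =
            reinterpret_cast<const char*>(glGetString(GL_RENDERER));
        if (renderer)
        {
            if (std::strstr(renderer, "GeForce3") ||
                std::strstr(renderer, "GeForce4"))
                return 24;   // GeForce 3/4 figure from Nicolas's post
            if (std::strstr(renderer, "GeForce"))
                return 16;   // GeForce 1/2 figure from the NV doc
        }
        return 10;           // conservative default for unknown cards
    }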