View Full Version : glElementArray performance issue

09-07-2005, 05:38 AM
Hi there,

As we can see all over the internet, glDrawElements seems to be better than glDrawArrays.

In my testbed, I issue 40000 calls to glDrawElements, and the vertex and element arrays are buffer objects created with the static draw flag for maximum performance.

If I draw with a 4-component element array, the framerate is OK. With a 200-component element array, the framerate drops. It seems that I am CPU bound. So glDrawElements takes time proportional to the number of element-array components, probably because the elements are submitted to the GPU by the CPU.

So I came to the idea that, if the geometry were a perfect strip, it would be faster to use glDrawArrays, which should run in constant time on the CPU side.
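To be concrete, here is a sketch of the two draw paths I am comparing (not real code from my testbed; it assumes a GL context and already-filled VBOs, and the variable names are made up):

```c
/* Indexed path: my hypothesis is that the driver has to walk 'indexCount'
 * indices on the CPU side, so the per-call cost grows with the index count. */
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indexVbo);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);

/* Strip path: a single range submission, so roughly constant CPU cost
 * per call regardless of how many vertices the strip contains. */
glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);
```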

Am I right ?
Thanx, pakou...

09-07-2005, 08:17 AM
what's the static draw flag? (sry for not being helpful)

09-07-2005, 09:21 AM
"Am I right ?"

The NVIDIA documents themselves point out, for example, that strips can see good cache usage in conjunction with glDrawArrays, so there's really no mystery here. Your own measurements should be enough to go on if there's some doubt. Re-read those documents if you missed that bit, and look at the suggestions concerning vertex size and index type, in addition to the many sections on batch heuristics.

09-07-2005, 11:49 PM
I think I was wrong when I said I was CPU limited, because if you compute the number of vertices sent in my testbed it is something like 40000*200, so 8,000,000 polys per frame. I think I am GPU bound (T&L).

Actually my post was to point out that if we want to render a lot of objects with a lot of VBO sharing, there are obviously a lot of glDrawElements calls, and usually the application becomes CPU bound. I was benchmarking glDrawElements versus glDrawArrays in that case.

I will look into the NVIDIA docs, but usually they say to use glDrawElements rather than glDrawArrays because of vertex cache issues. But it seems that games are always CPU bound.

For the "static draw flag", i.e. GL_STATIC_DRAW, I'm a little bit new to GL, but it seems that when you call glBufferDataARB(), the usage parameter is a hint to store the data in GPU memory rather than in CPU memory.
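For reference, here is a minimal sketch of what I mean, using the ARB_vertex_buffer_object entry points (the `vertices` array and its size are just placeholders):

```c
GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);

/* GL_STATIC_DRAW_ARB is only a usage hint: "written once, drawn many
 * times". It lets the driver place the data in video memory, but the
 * spec does not guarantee where the buffer actually ends up. */
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(vertices), vertices,
                GL_STATIC_DRAW_ARB);
```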