
VAR optimization or depth sorting?



Catman
05-04-2002, 10:20 AM
I've read in a topic that to get maximum VAR performance I have to render around 4096 (implementation dependent?) vertices with one glDrawArrays call.
If I've got a few objects with, let's say, 500 vertices each, I have to render several objects with one glDrawArrays call to get maximum performance. In that case I have to sort the objects by texture and then render several objects that share the same texture in a single call.
My problem is that I then can't depth sort them to avoid drawing pixels that end up hidden (overdraw).
So my question is: which method would be faster, depth sorting or VAR optimization, if I'm writing an isometric engine?
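
A minimal sketch of the batching idea described above, assuming plain OpenGL 1.1 vertex arrays; the Vertex and Object structs, the ByTexture comparator, and the per-frame packing into a scratch array are illustrative only (with real NV_vertex_array_range the vertex data would already live in AGP/video memory and would be packed at load time):

#include <GL/gl.h>
#include <vector>
#include <algorithm>

struct Vertex { GLfloat x, y, z; GLfloat u, v; };

struct Object {
    GLuint texture;                 // texture this object uses
    std::vector<Vertex> vertices;   // roughly 500 vertices per object
};

static bool ByTexture(const Object& a, const Object& b)
{
    return a.texture < b.texture;
}

// Draw objects grouped by texture; each group becomes one glDrawArrays call.
void DrawBatched(std::vector<Object>& objects, std::vector<Vertex>& batch)
{
    // Sort so objects sharing a texture are adjacent.
    std::sort(objects.begin(), objects.end(), ByTexture);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    size_t i = 0;
    while (i < objects.size()) {
        GLuint tex = objects[i].texture;
        batch.clear();
        // Append every object with this texture into one contiguous array,
        // so several ~500-vertex objects are submitted as one large batch.
        while (i < objects.size() && objects[i].texture == tex) {
            batch.insert(batch.end(),
                         objects[i].vertices.begin(), objects[i].vertices.end());
            ++i;
        }
        glBindTexture(GL_TEXTURE_2D, tex);
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &batch[0].x);
        glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &batch[0].u);
        glDrawArrays(GL_TRIANGLES, 0, (GLsizei)batch.size());
    }
}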

jwatte
05-04-2002, 04:34 PM
That depends on whether you are fill-rate bound or vertex transfer/transform bound.

The only way to know is to implement it both ways, and profile on actual target hardware and target data.
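
A tiny sketch of how that comparison might be set up, assuming both render paths already exist; RenderTextureBatched and RenderDepthSorted are hypothetical placeholders for your own texture-batched and front-to-back code paths:

#include <GL/gl.h>
#include <chrono>
#include <cstdio>

extern void RenderTextureBatched();   // one glDrawArrays per texture group
extern void RenderDepthSorted();      // front-to-back to cut overdraw

// Average seconds per frame over 'frames' frames; glFinish is used so the
// GPU's work is included in the measured wall-clock time.
static double TimeFrames(void (*render)(), int frames)
{
    glFinish();                        // start from an idle pipeline
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    for (int i = 0; i < frames; ++i) {
        render();
        glFinish();
    }
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    return elapsed.count() / frames;
}

void ProfileBothPaths()                // call once a GL context exists
{
    std::printf("texture-batched: %f s/frame\n", TimeFrames(RenderTextureBatched, 200));
    std::printf("depth-sorted:    %f s/frame\n", TimeFrames(RenderDepthSorted, 200));
}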

Catman
05-06-2002, 02:42 AM
I was afraid of that... ;) But thanks, I'll try both.