VAR optimization or depth sorting?

I’ve read in another topic that to get maximum VAR performance I should render around 4096 vertices (implementation dependent?) with one glDrawArrays call.
If my objects only have, let’s say, 500 vertices each, I have to render several objects with a single glDrawArrays call to reach that batch size. In that case I have to sort the objects by texture and then render all the objects sharing a texture together, as in the sketch below.
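A minimal sketch of that kind of texture-sorted batching, assuming the objects sharing a texture have already been packed contiguously into the vertex array memory (the Batch struct, its fields, and draw_batches are hypothetical names for illustration, not from any particular engine):

#include <GL/gl.h>

typedef struct {
    GLuint  texture;  /* texture shared by every object in this batch */
    GLint   first;    /* index of the batch's first vertex            */
    GLsizei count;    /* total vertices, e.g. 8 objects * 500 = 4000  */
} Batch;

void draw_batches(const Batch *batches, int num_batches,
                  const GLfloat *positions, const GLfloat *texcoords)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);
    glTexCoordPointer(2, GL_FLOAT, 0, texcoords);

    for (int i = 0; i < num_batches; ++i) {
        glBindTexture(GL_TEXTURE_2D, batches[i].texture);
        /* one call draws every object that uses this texture */
        glDrawArrays(GL_TRIANGLES, batches[i].first, batches[i].count);
    }
}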
My problem is that in this case I can’t also sort by depth, i.e. draw front to back so that the z-buffer rejects hidden pixels and eliminates overdraw (a sketch of that sort follows below).
So my question is: which method would be faster for an isometric engine, depth sorting or VAR batching?
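For reference, a minimal front-to-back sort sketch, assuming a hypothetical Object struct with a precomputed view-space depth (struct and function names are illustrative only):

#include <stdlib.h>

typedef struct {
    float depth;  /* distance from the camera along the view axis */
    /* ... vertex data, texture handle, etc. ... */
} Object;

/* ascending depth => nearest objects first */
static int by_depth(const void *a, const void *b)
{
    float da = ((const Object *)a)->depth;
    float db = ((const Object *)b)->depth;
    return (da > db) - (da < db);
}

void sort_front_to_back(Object *objects, size_t n)
{
    qsort(objects, n, sizeof(Object), by_depth);
}

Drawing in that order lets the z-test skip pixels a nearer object has already covered, which is exactly what the texture sort gives up.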

That depends on whether you are fill-rate bound or vertex transfer/transform bound.

The only way to know is to implement it both ways and profile on the actual target hardware with the actual target data; a crude timing harness is sketched below.
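A rough way to time the two paths, assuming hypothetical render functions for each strategy; this uses POSIX gettimeofday (on Windows, QueryPerformanceCounter would be the equivalent), and glFinish() forces the driver to drain the GPU so the measurement covers actual rendering rather than just command submission:

#include <GL/gl.h>
#include <sys/time.h>

static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

/* average seconds per frame for one rendering strategy */
double time_frames(void (*render_frame)(void), int frames)
{
    glFinish();                    /* drain any pending GPU work */
    double start = now_seconds();
    for (int i = 0; i < frames; ++i)
        render_frame();
    glFinish();                    /* wait until the GPU is done */
    return (now_seconds() - start) / frames;
}

Calling time_frames once per strategy on the same scene (e.g. a hypothetical draw_batched_frame vs. draw_depth_sorted_frame) gives a direct comparison on the real data.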

I was afraid of that… But thanks, I’ll try both.