Hi everyone

I'm trying to figure out some strange behaviour in my code. Two nearly identical models, differing only in whether their vertices are specified in a spatially sorted or unsorted order, show very different performance.

I have two models:
1) let's call it the 'original', made of about 640K vertices.
2) a LOD of the original, obtained by octree subdivision; it has almost 600K vertices.

Most importantly, I'm doing point-based rendering using point sprites (oriented discs taken from a texture atlas). Every vertex has a position, a normal, and a color, loaded into three different VBOs.

My code is quite simple:

enable depth test
enable alpha test (set the compare function)

//render loop
clear color and depth buffers
enable shaders
issue the point-sprite draw call (the rest happens in the vertex and fragment shaders)
disable shaders

Recording the time needed to execute each frame in a benchmark, I noticed that the original model performs better than its LOD... at least 2x better...
When loading the two models the procedure is exactly the same and nothing else changes...
Playing around with the code I noticed that my LOD has its vertices specified in a certain spatial order (as a result of the octree subdivision), while the original does not...
If I shuffle the vertex positions in my LOD, the two models perform comparably, as one would expect...
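The shuffle I apply looks roughly like this (a sketch under my setup's assumptions: three parallel attribute arrays matching the three VBOs, with hypothetical names). The key point is that the same random permutation is applied to positions, normals, and colors, so only the draw order changes, not the per-vertex data:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <numeric>
#include <random>
#include <vector>

// Shuffle the draw order while keeping the three attribute arrays in
// lockstep: build one random permutation of the indices and apply it
// to positions, normals, and colors alike.
void shuffleVertices(std::vector<std::array<float, 3>>& pos,
                     std::vector<std::array<float, 3>>& nrm,
                     std::vector<std::array<std::uint8_t, 4>>& col,
                     unsigned seed)
{
    std::vector<std::size_t> perm(pos.size());
    std::iota(perm.begin(), perm.end(), 0);   // 0, 1, 2, ...
    std::mt19937 rng(seed);
    std::shuffle(perm.begin(), perm.end(), rng);

    // Apply the same permutation to each attribute array.
    auto apply = [&perm](auto& v) {
        auto copy = v;
        for (std::size_t i = 0; i < perm.size(); ++i)
            v[i] = copy[perm[i]];
    };
    apply(pos);
    apply(nrm);
    apply(col);
}
```

After this the VBOs are re-uploaded as usual; since all three arrays were permuted identically, every point keeps its own normal and color.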

I made further tests and noticed that with the depth test disabled, the LOD model appears more 'consistent', like its front face is compact, while the original is far fuzzier...
Is it because the first (or last) vertices drawn are contiguous?
My rendering suffers from some inevitable artifacts, such as aliasing... I was wondering whether some of these issues might discard many fragments and lead to faster rendering, while the LOD, being sorted, is less affected by them and so slows down...

I would be very grateful for any help (it's an urgent matter)

Best regards