Strange performance with vertex arrays...

Hello!

I have 2 computers:

  1. PIII @ 600 MHz with a GeForce FX 5200 (128-bit, 128 MB),
    AGP 2X (Detonator 56.54)
  2. AMD Thunderbird @ 1400 MHz with a GeForce 256 DDR,
    AGP 4X (Detonator 56.56)

Here is how I set up the vertex arrays:

/* interleaved layout: 4 unsigned-byte colors + 3-float position, tightly packed */
glInterleavedArrays(GL_C4UB_V3F, 0, dataBuffers);
/* 6 indices (two triangles) per grid cell, 32-bit indices */
glDrawElements(GL_TRIANGLES, (heightTex.Width() - 1) * (heightTex.Height() - 1) * 6, GL_UNSIGNED_INT, indexBuffers);

I have 128×128 vertices and 127×127×2 triangles. I do not use texturing or lighting.
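The indices are generated roughly like this (a simplified sketch, not my exact code; buildGridIndices and the variable names are made up):

#include <stdlib.h>
#include <GL/gl.h>

/* Two triangles per grid cell; for W = H = 128 this gives
   127*127*2 = 32258 triangles, as stated above. */
GLuint *buildGridIndices(int W, int H)
{
    GLuint *idx = (GLuint *)malloc((size_t)(W - 1) * (H - 1) * 6 * sizeof(GLuint));
    GLuint *p = idx;
    for (int y = 0; y < H - 1; ++y) {
        for (int x = 0; x < W - 1; ++x) {
            GLuint i = (GLuint)(y * W + x);               /* upper-left corner of the cell */
            *p++ = i;     *p++ = i + W; *p++ = i + 1;     /* first triangle  */
            *p++ = i + 1; *p++ = i + W; *p++ = i + W + 1; /* second triangle */
        }
    }
    return idx;
}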

I get 72 fps on the first system and 95 fps on the second. Why?
The amount of data transferred is small, so neither AGP bandwidth nor fill rate should matter. The application is clearly transform limited: I get the same framerates at every framebuffer size.

Wasn’t the 5200 supposed to be faster than the GeForce 256, at least at T&L?

Am I doing something wrong, or is the 5200 really that slow? (I know it isn’t the fastest card out there, but I did expect better performance than a GeForce 256. I exchanged my old 64-bit 5200 for a new 128-bit one because the 64-bit version was seriously fill-rate limited.)

I hope I am not off topic…

A few possibilities:

  • You forgot to disable wait-for-vsync, and one monitor runs at 72 Hz while the other runs at 95 Hz? (See the sketch after this list for a quick way to rule this out.)
  • The Pentium runs at less than half the clock speed.
  • The Pentium has only AGP 2X.
  • The drivers are different versions.

If it isn’t the first one, you can still try swapping the boards or drivers between the machines.
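To rule out vsync quickly, force the swap interval to zero before rendering. A minimal Windows sketch, assuming the WGL_EXT_swap_control extension is exposed (NVIDIA drivers of this era generally export it):

#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void disableVsync(void)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0);  /* 0 = do not wait for vertical retrace */
}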

Looks like this draws a regular grid. It’s much more efficient to use GL_TRIANGLE_STRIP for this; see the sketch below.
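Something like this builds one long strip over a W x H vertex grid, stitching the rows together with degenerate triangles (just a sketch; buildGridStrip and the names are made up):

#include <stdlib.h>
#include <GL/gl.h>

/* 2*W indices per row strip, plus 2 degenerate indices between rows. */
GLuint *buildGridStrip(int W, int H, int *outCount)
{
    int count = 2 * W * (H - 1) + 2 * (H - 2);
    GLuint *idx = (GLuint *)malloc((size_t)count * sizeof(GLuint));
    GLuint *p = idx;
    for (int y = 0; y < H - 1; ++y) {
        if (y > 0)
            *p++ = (GLuint)(y * W);            /* repeat first index of this row strip */
        for (int x = 0; x < W; ++x) {
            *p++ = (GLuint)( y      * W + x);
            *p++ = (GLuint)((y + 1) * W + x);
        }
        if (y < H - 2)
            *p++ = (GLuint)((y + 2) * W - 1);  /* repeat last index of this row strip */
    }
    *outCount = count;
    return idx;
}

/* then: glDrawElements(GL_TRIANGLE_STRIP, count, GL_UNSIGNED_INT, idx); */

After the first two indices, every index in a strip emits a triangle, so this sends roughly a third of the indices of the GL_TRIANGLES version. And since 128×128 = 16384 vertices fit in 16 bits, GL_UNSIGNED_SHORT indices would cut the index data in half again; hardware of that era generally handles short indices better than ints.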