Slow polygons/sec performance on non-GeForce systems

I’ve been making a program using a dynamic object composed of multiple triangle strips.
I use it to generate a wave effect that’s quite CPU intensive.

However, on my P3-700 with a GeForce2 Ultra I get 33fps with everything displayed, and about 150fps when I only measure raw polygon throughput (about 2,000,000 polygons per second), which isn’t bad but isn’t exceptional for a GeForce either.

Now when I run this app on anything else, such as a Voodoo5 or a Radeon, it only gets about 10fps, even when I don’t compute the wave effect, which works out to about 200,000 polygons per second.

That’s pretty poor performance. I was wondering if there is a specific way I should be sending the data: I used pure immediate mode with glVertex* at first, and now I’m using vertex arrays (with normal and texture coordinate arrays too).

Here are two pics (the first in wireframe, the second with everything enabled) so you can get an idea of the thing: http://users.pandora.be/tfautre/Images/gl_wave.png

http://users.pandora.be/tfautre/Images/gl_wave006b.jpg


Cool pictures.
Why not put the vertex arrays into a display list?
I don’t know of any specific way to send the data faster in immediate mode. You can of course use vertex arrays in many ways.
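A minimal sketch of what I mean, assuming the geometry doesn’t change every frame (the list name and the recorded draw call are just placeholders):

GLuint wave_list = glGenLists(1);

/* the vertex/normal/texcoord pointers must already be set; glDrawArrays
   dereferences the array data at compile time, so it is copied into the list */
glNewList(wave_list, GL_COMPILE);
glDrawArrays(GL_TRIANGLE_STRIP, 0, row_size * 2);
glEndList();

/* later, each frame */
glCallList(wave_list);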

Actually, display lists aren’t great for dynamic data.

For GeForce cards, the vertex array range extension gives the best performance.
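A rough sketch of the NV_vertex_array_range setup (assuming the extension is present; buffer_size is a placeholder, and the entry points have to be fetched through wglGetProcAddress on Windows):

/* allocate fast AGP memory and tell the driver to pull vertices from it */
void *var_mem = wglAllocateMemoryNV(buffer_size, 0.0f, 0.0f, 0.5f);
glVertexArrayRangeNV(buffer_size, var_mem);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

/* each frame: write the new wave vertices into var_mem, then point the
   regular glVertexPointer/glNormalPointer calls into that memory and
   draw with glDrawArrays as usual */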

For other cards, vertex arrays should be the fastest for dynamic data. I can’t see why your performance is that bad.

j

Here is a bit of my code (the WinAPI part is based on NeHe’s tutorial 6).

Here is mainly the init function:

/* texturing, shading and clear state */
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH);
glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
glClearDepth(1.0f);

/* depth testing */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);

/* polygon modes and back-face culling */
glPolygonMode(GL_FRONT, GL_FILL);
glPolygonMode(GL_BACK, GL_LINE);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);

/* lighting */
glLightfv(GL_LIGHT0, GL_AMBIENT, LightAmbient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, LightDiffuse);
glLightfv(GL_LIGHT0, GL_POSITION, LightPosition);
glEnable(GL_LIGHT0);
glEnable(GL_NORMALIZE);
glEnable(GL_LIGHTING);

/* client-side vertex array state */
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

For rendering I do the following:

for (… /* while there is still data to compute */) {
	/* compute wave effect on the current triangle strip */

	/* send the data to render */
	glVertexPointer(3, GL_FLOAT, 0, &Vertexes_Array[k]);
	glNormalPointer(GL_FLOAT, 0, &Normals_Array[k]);
	glTexCoordPointer(2, GL_FLOAT, 0, &Textures_Coord_Array[k]);
	glDrawArrays(GL_TRIANGLE_STRIP, 0, row_size * 2);
}

I hope it’s understandable.
The performance hit doesn’t come from normalizing (I need it anyway).
More and more I believe the GeForce is the only card out there that handles immediate mode correctly.

  • Don’t clear with alpha = 0.5; some hardware might be faster if all clear values are zero.
  • If you have back-face culling enabled, don’t set glPolygonMode(GL_BACK, GL_LINE); you won’t see those polygons anyway.
  • Keep front and back polygon modes equal; that can be faster on some implementations.
  • Use glDepthFunc(GL_LESS) instead of GL_LEQUAL. If there is no special reason for the equal test you won’t notice a visual difference, but you may get a few more depth-test fails, which saves fillrate. (The adjusted init calls are sketched below.)
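Applied to the init code above, the changes would look roughly like this (only the touched calls shown):

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);		/* clear alpha to zero */
glDepthFunc(GL_LESS);				/* stricter depth test */
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);	/* keep front and back modes equal */
glCullFace(GL_BACK);				/* culling stays as before */
glEnable(GL_CULL_FACE);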

Thanks Relic for the tips.
For glPolygonMode(GL_BACK, GL_LINE), I was using that to check that I was creating the triangle strips correctly, respecting the counter-clockwise winding order. But you’re right, it’s no use now. :P

But the fps only increased by 0.5-1 fps on the GeForce, and the V5 now renders it at ~10fps instead of ~9fps.
I wish there were a 3dfx technician here.