View Full Version : 25 million triangles per second???

12-02-2001, 06:05 AM
Hi, I'm French :-)
Well, I often hear that video cards can draw 25 million triangles every second.
OK, but when I try it with my Voodoo4, I can hardly display more than 1 million triangles per second!!
What's the problem???

Here is my procedure.
PS: BUFFER[][] is an array where I store all the vertices...




for (int i = 0; i < 230000; i++)
{
    glBegin(GL_TRIANGLES);
    glVertex3fv(BUFFER[3*i + 0]);  /* one glVertex call per vertex: immediate mode */
    glVertex3fv(BUFFER[3*i + 1]);
    glVertex3fv(BUFFER[3*i + 2]);
    glEnd();
}

glutSwapBuffers();


Thank you for helping me.

12-02-2001, 08:41 AM
You cannot reach good performance with immediate mode. Besides, the Voodoo4 has no T&L processor, so that work is done on the host CPU; GeForce cards have such a processor, so they do those calculations on the chip, offloading the CPU.

You have to use vendor-specific extensions to get the maximum performance. With VAR I'm getting up to 15 million tris/sec on my GF2 MX; a GF3 can handle even more.

P.S. I'm not French, but I wonder if it makes any difference. I guess not :)

[This message has been edited by Lev (edited 12-02-2001).]

12-02-2001, 08:44 AM
Well, 25 million is a best-case figure (by the way, I don't think a Voodoo4 can push that many vertices every second).

The best case is using vertex arrays with tri-strips, which means you make only one call to glDraw* and the graphics card then pulls the geometry using DMA.

Moreover, with tri-strips you only specify about 1 vertex per triangle, whereas in immediate mode you call glVertex 3 times for every triangle you render. All this makes a big difference.

By the way, I am French too :)

12-02-2001, 11:34 AM
Thank you.
I didn't know that my sample was in immediate mode...
OK, now I understand why it is slower than Direct3D: with D3D it was in retained mode, or something like that.
I stored everything in a buffer and at the end I used DrawIndexedPrimitive, for instance.
Yeah, OK! But I haven't found a sample like that for OpenGL.

Do you know where I can find one?

LEV> P.S. I'm not French, but I wonder if it makes any difference, I guess not <
No it doesn't, LOL!

12-02-2001, 12:16 PM
What about the Red Book?
http://earth.uni-muenster.de/ebt-bin/nph...l;pt=69;lang=fr (http://earth.uni-muenster.de/ebt-bin/nph-dweb/dynaweb/SGI_Developer/OpenGL_PG/@Generic__BookTextView/73;cs=fullhtml;pt=69;lang=fr)

Chapter 2, vertex arrays.

12-02-2001, 12:28 PM
Since you speak French...
Thanks a lot for your example; I have a feeling it's going to greatly improve the rendering speed of my scene!

See you soon!

12-11-2001, 07:26 AM
I'm not French either, but I have noticed that graphics adapters rarely go as high as what is written on the box. My GeForce 256 does about 2,000,000 polygons per second at 640x480x32 resolution. And that is with an unoptimised GLUT application. I guess you can go higher if:

1. You divide your geometry up with BSP trees or octrees.
2. You write it in assembler.
3. You use triangle strips.
4. You don't use too many or too large textures.

12-11-2001, 07:47 AM
1. You divide your geometry up with BSP trees or octrees.
2. You write it in assembler.
3. You use triangle strips.
4. You don't use too many or too large textures.

To 1:
This will not help improve throughput.
To 2:
Why? This won't help either; the bottleneck is the OpenGL card, not the app, I assume. Forget assembler.
To 3:
True! This will speed things up!


12-13-2001, 06:34 AM
Lev, what do you mean, dividing up geometry will not accelerate rendering? I have divided a small terrain world into lots so as to render only those that are in the view frustum, and I get a terrific rate increase. That is what a BSP tree does.

As for assembler, I cannot write assembler code, but I have seen numerous articles on flipcode which seem to indicate that it is better for fast rendering. Take a look at some of the discussions:

12-13-2001, 07:41 AM
Assembler has nothing that could speed up calls to the OpenGL libs :-)
It would only be more accurate for testing whether or not you are using AGP correctly, etc., by checking control registers, but otherwise...

It's only worth the effort for inner-loop calculations and the like, which is beside the point if you are benchmarking the video card.