View Full Version : geforce performance

12-18-2003, 05:01 AM
Hi all.

I'm working with a GeForce Ti 4200 under Linux, with the latest Linux drivers, and I'm having a bit of trouble with optimization. None of these questions is specific to this driver version, because I've noticed the same behavior for a long time.

First: performance is about 10% lower under Linux than under the Windows version.

Second: when doing speed tests with several objects, I've noticed that speed does not scale linearly with the number of objects. Example: I have one object with 4500 polygons, and I'm using Vertex Array Range, with NvStripLib output in proper index order. The object has about 5 materials, none with bump mapping, only per-pixel specular, diffuse, and reflection, nothing more. I can render about 250,000 polygons using about 35% of my CPU time (a Celeron 1200 with a 100 MHz bus), BUT... when I add one more object, my CPU usage jumps to 98%!!!

My vertex array range allocates about 20 MB (this is fixed), and my indices are integers (I have to do it this way because I don't want to change the base address between objects... changing it loses performance).

Any ideas? Is my performance good? Why is this happening???

See you!

12-18-2003, 05:04 AM
Hi all... ;)

I found something... most of the triangles were being back-face culled, but when my objects moved (they were rotating), most of the triangles became VISIBLE.

I'm getting 220,000 polygons per frame, at 60 Hz, under Linux. Is that good performance?

See you!

12-18-2003, 05:06 AM
220,000 per frame, and 90% visible...

12-18-2003, 05:19 AM
Hi...it's me again...

I tried it without back-face culling AND I STILL HAVE THE SAME PROBLEM!

250,000 polygons, more or less, takes about 35% of the processing time (from the start of viewport setup to the end of rendering), and when I add another object... the same thing: it jumps to 98%.

I'm counting the render time too, because it blocks at the end on an occlusion query... so...

Any idea?

See you!

12-18-2003, 05:27 AM
Disable vsync. Either do so via GLX_SGI_swap_control (http://oss.sgi.com/projects/ogl-sample/registry/SGI/swap_control.txt) or through a global driver switch (if such a thing exists on your machine).

Until you do that, none of your benchmarks will be particularly useful. VSync spoils all your numbers. It may be that you only dropped from 65 fps to 58 fps, but vsync turned those into 60 and 30 fps respectively. So switch it off :)

On a side note, it's beyond me how anyone can stand a 60 Hz screen refresh... well, except on a TFT, of course.