View Full Version : Performance Question



Ap4thy
01-13-2006, 11:45 PM
Hi, I'm building a small 3D engine in Dev-C++ under WinXP with GLUT and OpenGL. I'm currently rendering about 22,070 triangles (including triangles that are culled, outside the view space, etc.) at about 23 FPS on my Athlon XP 3000+ with an NVIDIA GeForce 6600 with 1 GB of memory. Does anyone know if this is a good number? As far as ballpark figures go, I'm not sure whether I should be able to render about 100,000 triangles or whether 20,000 is decent. The scene consists of a small BSP map with about 20 cel-shaded models. I'd also like to know whether using Dev-C++ as my compiler, or using GLUT for a 3D engine, would hurt performance. I've noticed that any application I build with GLUT and Dev-C++ seems to have its framerate capped at 61 FPS. Any other performance tips for a 3D engine would be great too.

jide
01-14-2006, 01:36 AM
The compiler should not limit your rendering framerate. Dev-C++ is, as far as I know, a decent compiler.

GLUT is, in some ways, not the best solution. It's fine for getting started with GL, or for trying things out quickly and easily, but I don't think it's responsible for any slowdowns.

Your framerate depends on many things. Using a BSP tree only helps if the geometry is suited to it (typically indoor scenes). Also, current graphics cards are fill-rate limited: the more you draw, the lower your framerate will be. And everything you add on top of plain triangle drawing (lighting, shaders, ...) slows things down further.

hazelwood
01-14-2006, 05:42 AM
Your framerate cap is probably due to v-sync. V-sync is vertical synchronization; in other words, the driver limits your framerate to whatever your monitor's refresh rate is set to (60 Hz == 60 FPS). If v-sync is disabled, your program renders at full speed, so something like 1000 frames per second is quite possible.

That's a slightly simplified explanation, but it should get you going.

For Windows there's an extension to toggle v-sync (look up "wglSwapIntervalEXT" on Google). If you're using the GLEW library then it's a simple matter of:

if (wglSwapIntervalEXT) {
    wglSwapIntervalEXT(0); /* 0 disables v-sync, 1 re-enables it */
}
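If you're not using GLEW, a minimal sketch of loading the extension by hand might look like this (this assumes a Windows GL context is already current; the typedef follows the WGL_EXT_swap_control extension spec):

```c
#include <windows.h>
#include <GL/gl.h>

/* Function pointer type from the WGL_EXT_swap_control extension spec */
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void disable_vsync(void)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0); /* 0 = uncapped, 1 = sync to monitor refresh */
}
```

wglGetProcAddress returns NULL if the driver doesn't expose the extension, which is why the pointer is checked before calling it.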

memfr0b
01-15-2006, 04:00 PM
If you're actually transform/vertex-upload limited, 22k vertices at 23 FPS (~500k vertices/sec) on that machine is a pretty bad number. On such a system you should easily be able to push several million vertices per second to the GPU.

But since you're working with shaders, the number of vertices might not be your bottleneck at all. So the first thing to do is determine where your application's bottleneck is: CPU, upload bandwidth, transform, or fill rate.

If the number of vertices per second really is your problem, you should take a look at vertex arrays and vertex buffer objects.
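To give an idea of what "vertex arrays" means in practice, here is a minimal sketch of drawing an indexed mesh with glDrawElements instead of per-vertex immediate-mode calls. The function name and the assumption that `verts`/`indices` describe the mesh are mine, and a valid GL context is assumed:

```c
#include <GL/gl.h>

/* Draw index_count indices as triangles from a tightly packed
   array of 3-float positions, in one call instead of one
   glVertex3f per vertex. */
void draw_mesh(const float *verts, const unsigned int *indices,
               int index_count)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

Vertex buffer objects take the same idea one step further by keeping the arrays in GPU memory, so the data doesn't have to be re-uploaded every frame.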