Optimizing OpenGL & FPS



Magnus
08-15-2000, 02:18 PM
I have a simple question on how to increase the FPS in OpenGL.

I can write a basic Win32 application where all it does is SwapBuffers() in the WinMain loop. I have set up an accurate FPS counter, and all I can get is 58 (P3 500 w/GeForce DDR). How is it that games like Quake can get over 200?

I can comment out the SwapBuffers() line and get over 80,000 FPS.
So how do I accelerate the SwapBuffers() call? I'm guessing I have to use some hardware extension or SOMETHING? Please help. Where can I learn what I have to do? How about an example or something? Thank you very much.

- Keith

cire
08-15-2000, 02:55 PM
Hello Keith,

Would you mind posting a bit of your code so we can see what you're doing? My guess is that your FPS code is a bit messed up ...

btw, what's the refresh rate of your monitor?

Eric

Magnus
08-15-2000, 03:12 PM
time_start = timeGetTime();

while (!Sys.shutdown)
{
    // standard message queue
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        if (msg.message == WM_QUIT || Input.GetKey(VK_ESCAPE))
        {
            Sys.shutdown = true;
        }
        else
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
    else
    {
        Input.Update();
        Console.Update();

        if (Sys.active)
            DrawGLScene();
        frames++;
    }
}

time_finish = timeGetTime();
sprintf(fps, "FPS: %f", (float)frames / ((time_finish - time_start) / 1000.0f));
MessageBox(NULL, fps, szAppName, MB_OK);

void DrawGLScene (void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // Clear Screen And Depth Buffer
    glLoadIdentity();
    Console.Draw();
    SwapBuffers(window.hDC);
}

Input.Update and Console.Update are empty functions.

The refresh rate of the monitor is 75 Hz, I believe. But I tried it on my laptop and got 50 FPS. All I'm doing in my loop is calling SwapBuffers(); when I comment that line out, my FPS is in the tens of thousands.

- Keith

[This message has been edited by Magnus (edited 08-15-2000).]

AndersO
08-15-2000, 10:58 PM
Maybe you could try turning off vertical sync. Look in the display settings for your card.

skippyj777
08-16-2000, 02:09 PM
Here is how I calculate the FPS:

Have two integer variables, one called FrameCount and another called FrameCountStart.

When your app starts, reset FrameCount to zero and FrameCountStart to the current GetTickCount().

At the END of a frame, increment FrameCount. To calculate the average time (in milliseconds) it took to render a frame, use this formula: (GetTickCount() - FrameCountStart) / FrameCount.

If GetTickCount resets to zero, you will need to reset FrameCount and FrameCountStart just as you did when the app started. One other place I like to reset the two variables is when the render window is resized.

Note that this FPS method gives an average instead of an instantaneous value. GetTickCount() is not accurate over short periods of time, but an average is accurate over 0.5 sec or more. You may want to reset the FPS variables every two seconds or so to keep the value up to date.

Magnus
08-19-2000, 08:13 AM
Well, I figured out it's not my SwapBuffers() call that's causing the problem. If that's all I do in my loop I get approx. 250 FPS, but when I add glClear() and glLoadIdentity(), that's when my FPS goes LOW! :-(

Any ideas?

- Keith

El Jefe
08-19-2000, 06:36 PM
Well, for starters, Quake3 doesn't seem to clear the color buffer, which is why, if there are vis problems or you noclip out of the world, you end up with hall-of-mirrors effects. Skipping the clear could possibly give you a slight speed-up.

Additionally, you can organise your engine so that you minimize GL state changes. As a quick test, bind one texture at the start of your render loop and render all polys using that texture, then see if you don't get quite a frame-rate increase. If you do, one solution is to sort your surfaces by texture, then only bind a texture when the texture in the sorted list changes.

Gorg
08-19-2000, 07:47 PM
Clearing the color buffer is useless if you have a "background" like a sky box. Since you are sure the whole screen will get a base color every frame, clearing the color buffer is unnecessary, and skipping it saves you a lot of time.

If you do need to clear the color buffer, it helps to issue the clear a while before you actually start drawing. Clearing the color buffer takes a lot of time, so subsequent graphics calls would otherwise just stall behind it.

The best thing to do is:

clear the buffer
do the logic
draw stuff

Kilam Malik
08-20-2000, 11:08 PM
Just an idea... would it be faster if I drew a rectangle (with orthographic projection and 2D screen coordinates) that fills the whole window in my background color, instead of using glClear? I would have to turn off writes to the z-buffer first. But on accelerated machines I think it should be faster.

Another thing: I read about speeding up the z-buffer in software engines. They don't clear the whole buffer each frame, only parts of it. And they use 1/z instead of z values. I think they put higher values in the z-buffer each frame, so that they can't overlap with the previous frame's. Do the OpenGL drivers do that already?

Kilam.


Gorg
08-21-2000, 03:39 AM
No, it won't be faster. It will even be slower, because the OpenGL pipeline has to process every fragment the quad creates. When you call glClear, the driver can bypass all of that and just change the values in the buffer. The huge quad is still the way to go if it is some kind of useful background, because then you are saving the clear call anyway.