View Full Version : benching the GL

07-04-2010, 09:08 AM
When I want to measure fps I often do something like this in the DisplayFunc:

#ifndef NDEBUG
boost::timer timer;
#endif // NDEBUG

// ... render the scene ...

#ifndef NDEBUG
glutSetWindowTitle(cformat("%.2f", 1 / timer.elapsed()).c_str());
#endif // NDEBUG

boost::timer comes from the Boost library and is a wrapper around clock(); cformat() is a wrapper around vsnprintf().

The problem is that, on a fast graphics card, I often get infinite fps as a result. That is, it is as if no time has elapsed in the rendering loop. Is there a portable way to get accurate timing? Should I use the GL timer extension?
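[Editor's note: the GL timer extension the poster mentions is ARB_timer_query, core since OpenGL 3.3. A minimal sketch of the elapsed-time query follows; it assumes a current GL context and an extension loader providing the gl* entry points, so it is illustrative only, not runnable standalone.]

```cpp
// Sketch: GPU-side frame timing with ARB_timer_query.
GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
// ... issue the frame's draw calls here ...
glEndQuery(GL_TIME_ELAPSED);

// The result is in nanoseconds. Reading it right away stalls the CPU
// until the GPU has finished, so real code usually keeps two or more
// queries in flight and reads last frame's result instead.
GLuint64 gpuNanoseconds = 0;
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &gpuNanoseconds);
double gpuMilliseconds = gpuNanoseconds / 1.0e6;
```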

07-04-2010, 09:41 AM
From a cursory glance at the Boost docs, this timer.elapsed() returns time in INTEGER SECONDS! Expecting to do something useful with that on a single frame is beyond me...

You may want to average fps over one second, i.e. count the number of rendered frames and display that count each time timer.elapsed() changes (once per second).
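[Editor's note: the averaging idea above can be sketched as a small helper that accumulates per-frame times and reports once a full second has elapsed. FrameRateMeter is a hypothetical name, not from the thread.]

```cpp
// Accumulate frames and elapsed time; report an average fps only once
// a full second has passed, so individual zero-length frames are harmless.
struct FrameRateMeter {
    double accumulated = 0.0; // seconds since the last report
    int frames = 0;

    // Feed the time one frame took. Returns the average fps once per
    // second, or a negative value while still accumulating.
    double addFrame(double frameSeconds) {
        accumulated += frameSeconds;
        ++frames;
        if (accumulated < 1.0)
            return -1.0;
        double fps = frames / accumulated;
        accumulated = 0.0;
        frames = 0;
        return fps;
    }
};
```

Because the division happens over a whole second's worth of frames, a timer that reports 0 for a single fast frame no longer produces an infinite result.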

07-04-2010, 11:56 AM
You are wrong:

double elapsed() const; // return elapsed time in seconds

Nothing prevents this method from returning, say, .001 seconds or .000001 seconds as the elapsed time, and since it returns a double it allows high resolution. The fps-averaging idea is good, but how do I handle infinite fps?

Alfonse Reinheart
07-04-2010, 12:56 PM
The fps-averaging idea is good, but how do I handle infinite fps?

1: Stop using a bad timer. Boost.Timer is a bad (or at least unreliable) timer.

2: Stop measuring frames per second. If you're only going to look at one frame, the useful measurement is milliseconds, not FPS.
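[Editor's note: a portable way to get milliseconds per frame is std::chrono::steady_clock. It is C++11, so it postdates this 2010 thread, but it is the standard answer today: a monotonic clock with far finer granularity than clock(), which on many platforms ticks in 1-10 ms steps and reports 0 for a fast frame. The function and the simulated frame below are illustrative names, not from the thread.]

```cpp
#include <chrono>
#include <thread>

// Time one frame in milliseconds with a monotonic high-resolution clock.
double timeFrameMs(void (*renderFrame)()) {
    auto start = std::chrono::steady_clock::now();
    renderFrame();
    auto end = std::chrono::steady_clock::now();
    // duration<double, std::milli> converts the tick count to fractional ms.
    return std::chrono::duration<double, std::milli>(end - start).count();
}

// Simulated ~10 ms "frame", standing in for the real draw calls.
void fakeFrame() { std::this_thread::sleep_for(std::chrono::milliseconds(10)); }
```

A millisecond figure also composes linearly: a 1 ms pass plus a 2 ms pass is 3 ms, whereas "1000 fps plus 500 fps" invites exactly the divide-by-zero confusion seen earlier in the thread.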

07-04-2010, 03:03 PM
Can you suggest a portable timer I should use for more reliable time measurements?