OT: C++ performance timers.

I am looking for an open, cross-platform way to check the time at very small intervals. Basically, I'm looking to create an efficient FPS counter for my engine (among other things), and I'm looking for a timer that will run on ANY OS, so I don't have to mess with platform-specific code (the reason I use OpenGL). Thanks

There is no such thing as a platform-independent time function, except for time() (which is only accurate to the second).

ftime() is somewhat more accurate and a little bit portable. However, it suffers from the general TickCount() drift problem, caused by ISA interrupt controller legacy madness AFAICT.

The RDTSC instruction is portable to all user-mode x86 operating systems, assuming you can find a portable assembly syntax (I like NASM for that reason). Of course, then you’re faced with trying to figure out the CPU speed, which is, uh, “hard” on SpeedStep chips.

What I would do is to define an abstract interface that returns time in some useful unit (say, seconds as a double) and then use #ifdef for each platform.
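
A minimal sketch of that idea, assuming QueryPerformanceCounter() on Windows and gettimeofday() on POSIX systems as the per-platform back ends (the name GetTimeSeconds is just illustrative, not a real library call):

    // One portable-looking entry point; the platform-specific part is hidden
    // behind #ifdef. Returns seconds elapsed since the first call, as a double.
    #ifdef _WIN32
    #include <windows.h>

    double GetTimeSeconds()
    {
        static LARGE_INTEGER freq, start;
        if (freq.QuadPart == 0)
        {
            QueryPerformanceFrequency(&freq);   // counts per second
            QueryPerformanceCounter(&start);
        }
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        // Subtract the raw 64-bit counts first, then convert to double.
        return (double)(now.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    }

    #else   // assume a POSIX-ish system with gettimeofday()
    #include <sys/time.h>

    double GetTimeSeconds()
    {
        static bool first = true;
        static timeval start;
        if (first) { gettimeofday(&start, 0); first = false; }

        timeval now;
        gettimeofday(&now, 0);                  // microsecond resolution at best
        return (double)(now.tv_sec - start.tv_sec)
             + (double)(now.tv_usec - start.tv_usec) * 1e-6;
    }
    #endif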

Note that there isn’t really any such thing as portable code, only code that has been ported :slight_smile:

What about the clock() function?

eh?

So, on my system …

1% man clock

NAME
clock - analog clock in a window

SYNOPSIS
/usr/sbin/clock

or …

DESCRIPTION
CLOCK obtains the current time, in ASCII hh:mm:ss format, from the
real-time clock

or …

SYNOPSIS
#include <time.h>

 clock_t clock (void);

DESCRIPTION
clock returns the amount of CPU time used since the first call to clock.

and …BUGS
The implementation of clock conflicts with the definition of the routine
found in the ANSI C Standard. The discrepancy will be transparent,
however, so long as programs which adhere to that Standard use the
difference in two invocations of clock for timing information, as
recommended by the Standard.

clock() probably not very portable?

Perhaps???
Really, what you need is NOT a high-precision timer so you can work out the time per frame. Instead, simply use a timer that gives you "second accuracy" (and I mean accuracy within a couple of msecs, which is plenty unless you really are looking for frame rates in excess of 1000 fps!), and then count the number of frames displayed per second.

A simple implementation would be to have a counter in your display function, and fire off a timer every second which reads the count and resets it, and lo and behold, a reasonable estimate of FPS.
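
A minimal sketch of that counting scheme, using plain ANSI C time() since once-per-second accuracy is all it needs (CountFrame is just an illustrative name):

    #include <cstdio>
    #include <ctime>

    // Call once per frame. When the wall clock ticks over to the next second,
    // report how many frames were drawn during the one that just ended.
    void CountFrame()
    {
        static int    frames   = 0;
        static time_t lastTick = time(0);

        ++frames;

        time_t now = time(0);
        if (now != lastTick)
        {
            printf("FPS: %d\n", frames);   // rough estimate for the last second
            frames   = 0;
            lastTick = now;
        }
    }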

Mind you, I really don’t understand why people get so hung up on FPS. Surely, what you need is an FPS that gives you the smooth motion etc. that you require for your application.

If you don’t get this, then you need to be looking at your code to see what can be optimised to give better perceived visual performance.

An FPS reading is perhaps only a “debug” function, or a benchmarking thing.

Rob.

I was talking about the function clock() (mind the parentheses), not a program. If an OS has an ANSI C library it also has the clock() function because this function is defined in ANSI C. So don’t say it ain’t portable.

OK, so your idea is this: count the number of frames displayed each second. I could do this with the ANSI C time functions. Then, based on how many frames there were in that second, adjust what needs adjusting. Right? Sounds like a plan.

Lost,

Unfortunately, with only one second of granularity, the precision of your frame rate measurement goes DOWN as your frame rate becomes LOWER, because the clock may have ticked over to the next value at any time during the previous frame.

I.e., the maximum error of your sample period is one entire frame’s worth of time (as long as each frame takes less than one second).

Personally I use QueryPerformanceCounter() under Windows and gettimeofday() under Linux.

IMO they’re the most precise timers. But they have to be manipulated correctly, otherwise you might easily lose the extra precision you gained. (e.g. don’t convert the 64bits value of QueryPerformanceCounter directly to a double.)

Simply counting FPS is not enough. You need to measure the delta time each frame to keep in-game physics and animation updating at a consistent rate. Distances travelled should track the elapsed time as closely as possible; that’s pretty fundamental for any 3D game, and it needs to be reasonably responsive to short-term variations in frame rate due to instantaneous graphics load.
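
One way to put that in code (a sketch, reusing the hypothetical GetTimeSeconds() helper from earlier in the thread): express speeds in units per second and scale every per-frame update by the measured delta.

    // Frame-rate-independent movement: speed is in units per second, so the
    // distance covered each frame is speed * dt regardless of the frame rate.
    // GetTimeSeconds() is the illustrative portable helper sketched above.
    void UpdateAndRender()
    {
        static double lastTime  = GetTimeSeconds();
        static double playerPos = 0.0;

        double now = GetTimeSeconds();
        double dt  = now - lastTime;     // seconds since the previous frame
        lastTime   = now;

        const double speed = 60.0;       // e.g. 60 feet per second
        playerPos += speed * dt;         // same distance per second at any FPS

        // ... draw the frame using playerPos ...
    }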

GPSnoopy,

QueryPerformanceCounter() (and any high-resolution timer dependent on the same data) has stability problems where it will sometimes take large jumps forward (or back) in time.

I’ve noticed this bug on pretty much every chipset from the last few years; I think the list in this Microsoft Knowledge Base article is not exhaustive:
http://support.microsoft.com/default.aspx?scid=KB;EN-US;Q274323&

My chipset is in that list (i440 BX). But I’ve never seen this problem, although I heard a lot about it.

I found out most timer problems I had were related to precision mistakes, not hardware bugs.

We have a test set of around 50 machines, and we’d see it once a week or so.

thanks a lot for the queryperf… info, i wasn’t aware of that. i’m actually shocked that there’s a hw bug of this sort:

the “performance counter” is an age-old timer chip, available since earliest isa pc’s - if you’ve done hw timer interrupt programming under dos (pre-win9x days), it’s that same programmable timer at max 1.193MHz frequency (== 0x1234dd) on ports 0x40 - 0x43…

…which is also the reason why the c runtime clock() function has crap resolution (18.2 ticks per second): it relies on the bios which, on bootstrap, programs the timer to use the max clock divisor of 65536 (for minimum cpu load). 0x1234dd / 0x10000 = 18.2 clock ticks per second…

so there’s basically no good way to get timer services from the c runtime (at least none suitable for games or midi sequencers), a bit like printer or networking services. my advice is to write a timer class to handle the os specifics; then your code can use the class, and you only need to port the timer class for each target platform.

[edit] i noticed that queryperf… calls incur an unreasonable cpu load, presumably because you’re asking for info that ultimately has to be obtained from hardware ports, which are a virtualised ring 0 (kernel) resource. not that big a deal on its own, but coupled with the hw bug above it makes me think of the rdtsc instruction. my problem with that opcode is (a) how can it be reliable under a multithreading/multitasking os? programs that use this instruction regularly misreport my cpu speed. and (b) for reasons i can’t recall right now, it involves using timeGetTime() over a 500ms interval to get cpu clocks per second… how can you trust a timing mechanism based on measuring with a different mechanism? help

[This message has been edited by mattc (edited 10-28-2002).]

Well, RDTSC works well with QueryPerformanceCounter() to get the CPU frequency (in less than 100 ms).

But then it’s the same problem: you use another timer to initialize yours.
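
For reference, a rough sketch of that kind of calibration (Windows-only; MSVC’s __rdtsc intrinsic is assumed, and the 100 ms window is arbitrary):

    #include <windows.h>
    #include <intrin.h>   // __rdtsc on MSVC

    // Estimate CPU ticks per second by counting TSC ticks over an interval
    // measured with QueryPerformanceCounter. The result is only as good as
    // the reference timer and the length of the calibration window.
    double EstimateCpuHz()
    {
        LARGE_INTEGER freq, qpcBegin, qpcEnd;
        QueryPerformanceFrequency(&freq);

        QueryPerformanceCounter(&qpcBegin);
        unsigned __int64 tscBegin = __rdtsc();

        Sleep(100);                               // ~100 ms calibration window

        QueryPerformanceCounter(&qpcEnd);
        unsigned __int64 tscEnd = __rdtsc();

        double seconds = (double)(qpcEnd.QuadPart - qpcBegin.QuadPart)
                       / (double)freq.QuadPart;
        return (double)(tscEnd - tscBegin) / seconds;   // TSC ticks per second
    }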

Well, my current setup takes the physics into account also. This is what I am planning on doing: I am going to set a variable called TimeStepAverage, which is 60 (the target number of frames) divided by the number of frames fired that second.

Now my rendering code and my physics code go hand in hand, meaning that the physics code is fired each frame. (I know that’s expensive, but on today’s hardware I think it’s doable.) So what I do is this: I base my rendering and physics code on a set rate of 60 FPS.

So say the player move function is something like:

PlayerCtr = PlayerCtr + Move;

Move is based on the update firing 60 times per second, so if I wanted him to move 60 feet in one second, Move would equal 1 foot. Got it?

OK, from there I simply change it to this:

PlayerCtr = PlayerCtr + Move*TimeStepAverage;

So if the number of frames this second was 120, the move is scaled by 1/2, making the physics just as accurate as at 60 but letting the frame rate stay independent at the same time.

Also, if the frame rate dropped to 30, the move would be multiplied by 2, keeping it accurate and not frame-based. Does this sound feasible?

But this is the problem: you may get 61 FPS, or 1099.0738 FPS, or any other number of frames per second. You cannot possibly ensure that you will get whatever number of frames per second you so desire. That’s why knowing your exact FPS is important; it also just so happens to be a pretty good indicator of system performance.

you can also run a fixed time step loop (this is what i do, i.e. all that stuff happens at a fixed 30fps, none of this dt + that dt business).
there was a TOTD/COTD from about a year ago on flipcode about it.
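
A sketch of such a fixed-time-step loop (the accumulator pattern, with 30 updates per second assumed; GetTimeSeconds(), UpdatePhysics() and RenderFrame() are hypothetical names):

    // Physics always advances in fixed 1/30 s slices, no matter how fast or
    // slow rendering is; rendering just happens as often as it can.
    void RunGameLoop()
    {
        const double step        = 1.0 / 30.0;   // fixed physics step
        double       accumulator = 0.0;
        double       lastTime    = GetTimeSeconds();

        for (;;)
        {
            double now = GetTimeSeconds();
            accumulator += now - lastTime;
            lastTime = now;

            while (accumulator >= step)           // catch up in fixed slices
            {
                UpdatePhysics(step);              // hypothetical update call
                accumulator -= step;
            }

            RenderFrame();                        // hypothetical draw call
        }
    }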

Originally posted by jwatte:

QueryPerformanceCounter() (and any high-resolution timer dependent on the same data) has stability problems where it will sometimes take large jumps forward (or back) in time.

What’s the alternative for Windows machines? I was using this and was unaware of this problem.

If you want a portable, high-resolution timer, have a look at glfwGetTime in GLFW. It tries to use the best timer available, in this order (a short usage sketch follows the two platform lists below):

Windows:

  1. RDTSC
  2. QueryPerformanceCounter
  3. GetTickCount

Unix/Linux:

  1. RDTSC (x86) or CLOCK_SGI_CYCLE (SGI stations)
  2. gettimeofday
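
A minimal usage sketch (GLFW must be initialised first; glfwGetTime() returns seconds as a double, so the per-frame delta is computed exactly as with the other timers in this thread):

    #include <GL/glfw.h>   // the GLFW 2.x header name

    int main()
    {
        if (!glfwInit())
            return 1;

        double lastTime = glfwGetTime();   // seconds since glfwInit()

        for (int frame = 0; frame < 100; ++frame)
        {
            double now = glfwGetTime();
            double dt  = now - lastTime;   // time spent on the previous frame
            lastTime   = now;

            // ... update and render with dt ...
        }

        glfwTerminate();
        return 0;
    }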

There are still things to be resolved. For instance:

  • Disable RDTSC on “laptops” (i.e. CPUs with variable core frequencies)
  • Make CPU frequency determination more robust (for RDTSC, especially under Unix where we can’t SetPriorityClass( …, REALTIME_PRIORITY_CLASS ) )
  • Add support for Sun gethrtime (better than gettimeofday)
  • Handle 64-bit wrap arounds

[This message has been edited by marcus256 (edited 10-29-2002).]