Advice on implementing a timer

I have an object consisting of some gluCylinders, and both the number of cylinders and the cylinders themselves grow over time. This is all driven from glutIdleFunc: the idle callback calls a method that updates the object (lets it grow in size and complexity) and then calls a method that draws the object to the screen.

Now, as the object becomes more complex, it takes more time to update and draw. I have a double representing the age, and at the moment I just add 0.01 every time the object is processed. I want the increment to depend on the time passed: if, say, 1/100th of a second has passed, increment by 0.01, but if 7/100ths of a second have passed, the age should be incremented by 0.07.

I’ve been trying things with time_t, storing the current time, comparing against it the next time around, and taking the difference, but I’m not sure it really works. I was wondering how this is usually done, and whether people with more experience can give some hints?

Say for instance that your application runs at 100 frames per second (FPS). In this case each frame is separated by 0.01 seconds. So what you can do is figure out how many FPS your application is running at, and use a timestep of 1/FPS. Be warned, though, that this may lead to very large timesteps as your FPS drops when the object becomes more complex. You may need to set some boundaries to make sure the timestep stays within reasonable limits. What these boundaries should be is very application-dependent.
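A minimal sketch of that clamping, assuming the FPS has already been measured; the function name and the limits (0.001 s and 0.1 s) are just illustrative values, not recommendations:

double timestepFromFps(int fps)
{
    if (fps < 1) fps = 1;          // guard against division by zero
    double dt = 1.0 / fps;         // e.g. 100 FPS -> 0.01 s per frame
    if (dt > 0.1)   dt = 0.1;      // upper bound: avoid huge jumps when FPS drops
    if (dt < 0.001) dt = 0.001;    // lower bound: cap very fast frames
    return dt;
}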

So the way you find out how many FPS you are getting is (a code sketch follows these steps):

Every time you render, increment a counter that stores the number of frames that have been rendered.

Check if a second has passed since you started counting frames. If so, your FPS is the number of frames you counted; grab that value and reset the counter. If not, keep counting.
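Put together, that might look something like this in a GLUT display callback (a sketch; the variable names are just illustrative, and glutGet(GLUT_ELAPSED_TIME) is used as the millisecond clock):

#include <GL/glut.h>

static int frames = 0;        // frames rendered since the last FPS update
static int lastFpsTime = 0;   // ms timestamp of the last FPS update
static int fps = 60;          // last measured FPS (start with a guess)

void display(void)
{
    /* ... render the scene ... */
    glutSwapBuffers();

    ++frames;
    int now = glutGet(GLUT_ELAPSED_TIME);   // ms since glutInit()
    if (now - lastFpsTime >= 1000) {        // a full second has passed
        fps = frames;                       // frames counted in that second
        frames = 0;
        lastFpsTime = now;
    }
}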

SenorSnor,
Are you trying to use the time() function to measure the elapsed time? (I am assuming so because you mentioned the time_t type.)

I think that clock() provides much higher resolution than time() does. On my system, clock() gives 15 ms resolution (the smallest difference that clock() can detect).
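The mechanics would look something like this (a sketch only; note the caveat in a later reply that on many systems clock() measures CPU time, not wall-clock time):

#include <ctime>

static std::clock_t lastTicks = std::clock();

// Returns the seconds of processor time since the previous call.
double elapsedSeconds(void)
{
    std::clock_t now = std::clock();
    double dt = double(now - lastTicks) / CLOCKS_PER_SEC;
    lastTicks = now;
    return dt;
}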

As you mentioned, a common method is to use the elapsed time of each frame to update the current animation state.

The Boost libraries have a cross-platform time class, if you are worried about such issues. I think it guarantees microsecond precision on most known platforms.

www.boost.org
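A minimal sketch using Boost.Date_Time’s microsecond clock, assuming Boost is installed (the exact calls here are from memory, so check the docs):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;

    ptime start = microsec_clock::universal_time();
    /* ... a frame's worth of work ... */
    ptime end = microsec_clock::universal_time();

    time_duration elapsed = end - start;
    std::cout << elapsed.total_microseconds() / 1e6 << " s\n";
    return 0;
}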

clock() and time() are not the same thing - the first measures CPU time while the second measures real time. Real time is probably what you want if you want a constant simulation speed.

On UNIX, gettimeofday() will give you microsecond precision (but probably not accuracy). For portable code, there is a glutGet call that gives you millisecond precision, which is probably enough (can’t remember the name of the parameter to glutGet).
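On UNIX that could look like this (a sketch; lastTime needs one gettimeofday() call at startup to initialize it):

#include <sys/time.h>

static struct timeval lastTime;   // initialize once: gettimeofday(&lastTime, 0);

// Returns the wall-clock seconds since the previous call.
double deltaSeconds(void)
{
    struct timeval now;
    gettimeofday(&now, 0);
    double dt = (now.tv_sec  - lastTime.tv_sec)
              + (now.tv_usec - lastTime.tv_usec) / 1e6;
    lastTime = now;
    return dt;
}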

Thanks all for the help! Portable code would be nice, so I guess I’ll try glutGet. I found the param name btw: it’s GLUT_ELAPSED_TIME.
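A sketch of how that could look in the idle function from the original question (updateObject() is a hypothetical stand-in for the poster’s own update method):

#include <GL/glut.h>

void updateObject(double age);   // hypothetical: grows the cylinders

static int lastMs = 0;           // timestamp of the previous idle call, in ms
static double age = 0.0;

void idle(void)
{
    int now = glutGet(GLUT_ELAPSED_TIME);   // ms since glutInit()
    double dt = (now - lastMs) / 1000.0;    // seconds elapsed this frame
    lastMs = now;

    age += dt;              // +0.01 at 100 FPS, +0.07 if the frame took 70 ms
    updateObject(age);
    glutPostRedisplay();    // let the display callback redraw the object
}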