fixed framerate

Hi.

If I want my OpenGL graphics to run at 60 frames per second (nothing more, nothing less), how could I achieve this?

I’m only experimenting at the moment so no worries about the computer being too slow.

Any help/tip appreciated.

Make sure that one display-function call (i.e. the function in which you render the whole scene) takes 1/60 of a second. If it's faster, go into an idle function (you could also use a semi-infinite loop…) until the full 1/60 of a second has passed. You can also benchmark everything going on around your rendering and subtract it from 1/60 to be more precise. However, I doubt that WM_TIMER messages are the right tool for this.
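
For instance, a minimal sketch of that idle-loop idea (getTimeSeconds() is just a stand-in for whatever high-resolution clock you have available, not a real API):

    const double FRAME_TIME = 1.0 / 60.0;   // target duration of one frame

    while (running)
    {
        double start = getTimeSeconds();
        renderScene();                       // your display function
        SwapBuffers(hDC);

        // Spin in an idle loop until the full 1/60 s has elapsed.
        while (getTimeSeconds() - start < FRAME_TIME)
        {
            // busy-wait (or Sleep(0) to give other processes a chance)
        }
    }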

Buu-u-t what if it takes LONGER than 1/60th of a second? Then you're screwed <smiles sweetly>, because you can't add delay to get down to 60 fps if you can't even get above it in the first place.

So… what do you do? You could look at progressive refinement: break your scene into "parts", and before you continue with the next part, check whether you're in danger of exceeding your time slice. (Well, it's slightly more complicated than that, but you get the idea.)
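
In sketch form, that might look something like this (getTimeSeconds() and FRAME_TIME as in the sketch above; renderPart() and numParts are hypothetical names for however you subdivide your scene, and the 0.9 factor just leaves a safety margin):

    double start = getTimeSeconds();
    for (int i = 0; i < numParts; i++)
    {
        // Bail out of further refinement if the time slice is nearly used up.
        if (getTimeSeconds() - start > FRAME_TIME * 0.9)
            break;
        renderPart(i);   // draw the next chunk, coarsest detail first
    }
    SwapBuffers(hDC);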

cheers
John

Originally posted by john:
Buu-u-t what if it takes LONGER than 1/60th of a second? Then you're screwed <smiles sweetly>, because you can't add delay to get down to 60 fps if you can't even get above it in the first place.

The original poster did say:
“I’m only experimenting at the moment so no worries about the computer being too slow.”

To keep the pipeline from running ahead of you, you should also use glFinish. That way you know for certain there aren't another 20 buffered frames still waiting to be rendered on the hardware, I think.
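
Something along these lines at the end of every frame (SwapBuffers is the usual Win32 call; the point is that glFinish blocks until the hardware has actually executed everything you issued):

    renderScene();
    glFinish();         // returns only once all queued GL commands have completed
    SwapBuffers(hDC);   // so the next frame's timing starts from a known point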

Uhmmm, john, lordkronos, I don’t completely understand your postings. What do you mean?

Well, I think I got you now. I've found a solution that always renders at 60 frames per second, even if you can't go beyond it. It uses the onboard opcode preprocessor of the VIA chipset, forming a bursted 3D pipeline that interacts with my network card, which in turn renders the images. The graphics card is a bit too slow for networking, but I have good server- and client-side prediction with the god module just beneath my hard disk.
But the problem was that I only wanted exactly 40 frames per second, so I needed to use the normal way of rendering and reduce some geometry. Well, it works anyhow. If anyone needs some code…

Arrrrghhhhhh. AAAAAaaaaaaahhhhhhh.


The following is a copy of a post I made at http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/001445.html

I use a real-time clock enclosed in a thread of its own. The main engine doesn't do any updates unless it gets an update notification from the clock thread. The clock thread runs continually in a loop. In each iteration it computes the elapsed time since the last update notification. If the resulting time is greater than or equal to its preset interval (1/frequency), it issues an update message to the rendering engine.
Pseudocode …

// Forward declarations so the classes can refer to one another.
class MProcess;
class MSimulationClock;

class MThread
{
public:
    MThread(MProcess* proc);
    void start(); // Eventually calls m_Proc->run()
    // …
private:
    MProcess* m_Proc;
};

class MProcess
{
public:
    MProcess();
    virtual void run() = 0;
};

class MClockable
{
public:
    // Called by the clock thread each time an update interval elapses.
    virtual void respondTo(const MSimulationClock&) = 0;
};

class MSimulationClock : public MProcess
{
private:
    MClockable& m_Target;
    MTimer m_Timer;           // elapsed-time helper class, details omitted
    double m_UpdateInterval;  // seconds per update, i.e. 1/frequency
    bool terminate();         // becomes true once the thread is asked to stop
public:
    MSimulationClock(MClockable& target, double frequency)
        : m_Target(target), m_UpdateInterval(1.0 / frequency) {}

    void run()
    {
        while (!terminate())
        {
            double t = m_Timer.elapsedTime();
            if (t >= m_UpdateInterval)
            {
                // The interval has elapsed: reset the timer and
                // notify the rendering engine.
                m_Timer.reset();
                m_Target.respondTo(*this);
            }
            else
            {
                // Sleep away ~90% of the remaining time so the coarse
                // granularity of Win32 Sleep() doesn't overshoot.
                double delT = m_UpdateInterval - t;
                unsigned long msecs = (unsigned long)(900.0 * delT);
                Sleep(msecs);
            }
        }
    }
};

Good luck!
Paul Leopard

PS Remember to make all OpenGL calls from within the rendering thread (the one that sets up the rendering context).


>> If I want my OpenGL graphics to run at 60 frames per second (nothing more, nothing less), how could I achieve this?

Yes, I saw that "fast enough" thing, too, but "nothing more, NOTHING less" implies it goes both ways.

cheers,
John

Also, I find the functions

QueryPerformanceCounter()
QueryPerformanceFrequency()

really useful for timing things accurately, since they count the ticks of a high-resolution hardware counter.
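
For example (standard Win32 usage; error checking omitted, and renderScene() is just whatever you want to time):

    #include <windows.h>

    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // counter ticks per second

    QueryPerformanceCounter(&t0);
    renderScene();                      // the work being measured
    QueryPerformanceCounter(&t1);

    double seconds = (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;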