PDA

View Full Version : Maintaining FPS



zedus
11-30-2000, 09:53 AM
Well, my demo is finally done (if there is such a thing ;-) ) and I've managed to measure my FPS - but how do I maintain FPS (I mean, if it runs TOO fast sometimes)? I thought about writing some sort of delay procedure, but I'm not sure if that's the right way, or how it's done. Could somebody here tell me?
Also, what FPS would be ideal (considering I still have to draw lots of monsters, walls, particles and stuff...)? In other words - what's the minimum FPS I can maintain that would still look good?

kacke
11-30-2000, 10:20 AM
I'm using 35 but I also think a lower value will do it. 25-30 I guess

Succinct
11-30-2000, 10:59 AM
Yer average, hardcore, FPS (first person shooter, not frames per second) gamer will expect no less than 60 fps on a good card, or so I've been told on www.heat.net (http://www.heat.net) by numerous players.

now, i was in your boat around 3 weeks ago, trying to slow down my app to exactly 60 fps, but someone told me it was absurd to do so... ?

well, eventually i realized they're right.

what i suggest is to take advantage of your super frame rate.

make everything based on real time. I'm sure you've got some kind of counter going somewhere to keep track of your animation, am i right? instead of doing, say, Counter += 10, do CurrentTime = GetTime( ) - StartTime;

here's an example i use in my code:


unsigned StartTime = GetTime( ); // in ms
unsigned TimeLength = 1000; // in ms
unsigned cTime = 0; // current time in ms
while( cTime < TimeLength )
{
float t = float( cTime )/TimeLength; // fraction showing how "complete" the counter is

glPushMatrix( );
glRotatef( t*360,0,0,1 );

DrawObject( );
glPopMatrix( );

cTime = GetTime( ) - StartTime; // note the parentheses - GetTime alone doesn't call the function
}

when someone first explained this to me, i just thought it was kinda cool, the fact that the rotation will take exactly 1 second.

what it really breaks down to is the fact that no matter what your frame rate is, it will take exactly one second - even if it's running at 8 fps, or 800 fps.
the only difference will be the level of temporal continuity. 800 fps will look like water; 8 will look like rocks.

i dunno, i'm just a real-time freak, and i recommend using it, especially cuz it's easy!

hope i helped

-succinct

zed
11-30-2000, 11:26 AM
u might wanna take a look here at the swap interval extension (wglSwapIntervalEXT) - it shows the normal way of maintaining a steady animation: http://members.nbci.com/myBollux

11-30-2000, 11:33 AM
If you turn ON wait-for-vertical-blank on
swapping, then you will gain two things:

1) Your game will not run too fast (because
it will be limited to the refresh rate)
2) Your game will not tear.

Hardcore g4m3rZ who turn off wait-for-retrace
to see a bigger number on their FPS counter
get just that -- a bigger number. They do not
get any better gaming experience (unless
the game is slow enough that it can't keep
up with the refresh.)

El Jefe
11-30-2000, 03:25 PM
I agree with Succinct... I personally would not recommend designing a game such that it has to rely on a certain framerate. Nor would I really go to any lengths to try to force a certain framerate.

Most games - Quake3, UT - probably just let the game run as fast as it can (to a certain extent) and use some form of interpolation based on a game time counter.

Blaze
12-01-2000, 02:34 AM
You're completely wrong bgl!

You do indeed avoid tearing, but you sometimes also get your framerate cut in half. Imagine, for instance, a refresh rate of 60Hz and a framerate just below 60fps. You'd end up at 30fps, because half the frames get dropped while waiting for the refresh.

And there is a very big difference between 30fps and a close to 60fps with tearing.

And i've played a lot of quake2, so i know exactly how this feels in real life.

zedus
12-01-2000, 04:29 AM
First of all, 10x for your replies.
But still, none of you answered my question.
My problem is NOT that my engine always runs too fast. Imagine this situation:
this is a 3d shooter. You're walking along, and in front of you up jump 10 new creatures. So now there are 10 creatures + walls + particles to draw, and you get 30 FPS.
But since you're so afraid of those creatures you decide to turn around and run, so you turn around, and suddenly there are no creatures to draw and the FPS shoots up to 180.
How do I make my engine always run at 30 FPS?

Moz
12-01-2000, 05:10 AM
I did this in a demo program, but unfortunately, I don't have the code here.
Anyway, I'll try to explain what I did :

Before drawing each frame, I reset a time counter :

long start_time = GetTickCount();

Then, When I finish drawing the frame, I check the time it took to draw it :

long elapsed = GetTickCount() - start_time;

Then from the value of elapsed, I calculate the amplitudes of the transformations for the next frame, so that the movements of the objects appear consistent to the viewer whatever FPS you get.

Note that GetTickCount is a Windows function (it won't work on any other platform, but there may be equivalents). GetTickCount returns the number of milliseconds since you started your PC, as a 32-bit integer. The accuracy of GetTickCount may vary depending on your HW; e.g. on my portable PC (a 3-year-old P133 MMX) it is accurate to the millisecond, while on the computers at my university (PIII 500) it's only accurate to 10ms.

Hope this helps.

Moz

Edit: >How do I make my engine run always in 30 FPS
With the method I describe, the FPS rate CAN CHANGE (it does in actual games), but the timing of the animation IS consistent.

[This message has been edited by Moz (edited 12-01-2000).]

pleopard
12-01-2000, 06:11 AM
I use a real-time clock enclosed in a thread of its own. The main engine doesn't do any updates unless it gets an update notification from the clock thread. The clock thread continually runs in a loop. In each iteration, it computes the elapsed time since the last update notification. If the resulting time is greater than or equal to its preset interval (1/frequency), it issues an update message to the rendering engine.

Pseudocode ...

class MThread
{
public:
MThread(MProcess* proc);
void start(); // Eventually calls m_Proc->run()
// ...
private:
MProcess* m_Proc;
};

class MProcess
{
public:
MProcess();
virtual void run() = 0;
};

class MClockable
{
public:
virtual void respondTo(const MSimulationClock&) = 0;
};

class MSimulationClock : public MProcess
{
private:
MClockable& m_Target;
MTimer m_Timer; // elapsed-time source (pseudocode)
double m_UpdateInterval; // seconds between updates (1/frequency)
public:
MSimulationClock(MClockable& target) : m_Target(target) {}
void run()
{
while(!terminate())
{
double t = m_Timer.elapsedTime();
if (t >= m_UpdateInterval)
{
m_Timer.reset();
m_Target.respondTo(*this);
}
}
}
};

Good luck!
Paul Leopard

PS Remember to make all OpenGL calls from within the rendering thread (the one that sets up the rendering context).

pleopard
12-01-2000, 06:16 AM
Oops... it is important that the clock thread doesn't gobble up all the time slices, so insert a simple wait state. You can do this by estimating the amount of time that must pass before the next update, and sleeping for roughly 90% of that time....

Replace MSimulationClock::run() above with something like this...

void MSimulationClock::run()
{
while(!terminate())
{
double t = m_Timer.elapsedTime();
if (t >= m_UpdateInterval)
{
m_Timer.reset();
m_Target.respondTo(*this);
}
else
{
double delT = m_UpdateInterval - t; // seconds until the next update
unsigned long msecs = (unsigned long)(900.0*delT); // sleep ~90% of it
Sleep(msecs);
}
}
}

12-01-2000, 08:10 AM
zedus: turning on vsync _will_ throttle your
game to not run faster than the monitor.
Throttling your game back more than that is
harder, but could be done with the
appropriate delay loop or the swap control
extension.

Blaze: if you read my entire post, you would
see the words:


unless the game is slow enough that it
can't keep up with the refresh

As an aside, I sure hope that most people run
their monitors at higher than 60 Hz refresh
rate these days.

Michael Steinberg
12-01-2000, 08:32 AM
As long as you don't want to use network multiplay, where you can kill the server with too much data, I think it's unnecessary to limit the frame rate. Another case would be if you programmed a win32 application other than a realtime one; then you could use the window messages. Any other case should be fine if you make the translations not frame dependent but time dependent.

Moz
12-01-2000, 08:41 AM
Originally posted by Michael Steinberg:
As long as you don't want to use network multiplay, where you can kill the server with too much data, I think it's unnecessary to limit the frame rate. Another case would be if you programmed a win32 application other than a realtime one; then you could use the window messages. Any other case should be fine if you make the translations not frame dependent but time dependent.
Yes, as I said in my post, most games don't even try to control the framerate, but are time dependent.
Paul's solution is good for limiting the framerate, but in a game that's usually not what you want to do; you want the highest framerate possible all the time.
Refer to my previous post to see how I handle it.

Moz

El Jefe
12-01-2000, 09:27 AM
Zedus, I guess I don't understand the problem with getting 30 fps on one hand, then turning around and getting 90 fps (or whatever). I'm sure similar things happen in Quake3, UT, whatever... it seems reasonable: lots of processing == slow frame rate; fewer tris, less processing, whatever == faster framerate.

The only case of capping the frame rate in Quake3 (that I know of) is a cvar, COM_MAXFPS... and usually I just set this to 90 or so.

pleopard
12-01-2000, 11:45 AM
Just a clarification here ...

The solution I gave is useful in situations where you want a stable framerate (where the FPS varies very little). You can crank the desired framerate up as high as you like and it will attempt to stabilize at your chosen rate. If your box and code can handle it, even a desired rate of 600 FPS should still work.

If you don't care about a stable frame rate, or have no interest in limiting the frame rate, then the point is moot... don't use it.

However, if you really do want to limit the frame rate or stabilize it, this solution works well. I don't write games, I write real-time and near-real-time simulations. Having things happen when they are supposed to is of paramount importance here.

Moz
12-01-2000, 12:01 PM
What I mean is that I don't think limiting the framerate is a good way to get a constant time scale (for simulations or for games).
It works if your HW is fast enough, but if you limit the FPS to 30 and can only get 15, what's the point? And why limit it to 30 if in some scenes with few polys you could get 60?
If you make your application time dependent rather than frame dependent, things happen when they are supposed to (and the FPS can vary as much as it wants).
I don't say it's easy (you need to change the way you calculate the next frame, and it's often more complicated), but I think it's the way to go.
What Succinct posted is an example of how to make your application time-dependent.

Moz

zedus
12-02-2000, 07:46 AM
Yes, making my engine depend on time rather than on FPS would be a lot smarter. I hadn't thought of making my engine depend on time, because I assumed the eye sees no difference between 40 FPS and 100 FPS - but I checked, and we do.
10x to you all!

Michael Steinberg
12-02-2000, 12:34 PM
But I don't think we would see any difference between 60 frames and 100. The problem is not the animation; the problem is the latency from input to graphical output. Matt once stated that there can be up to 3 frames in the pipeline at times.
At 60 frames per second that is 1/20 of a second. That much time can pass before the user sees the world react.

Matt, does your driver set a limit on the number of frames in the pipeline? Maybe one shouldn't use the actual time when calculating the time-dependent stuff, but the time when the frame will actually be displayed, i.e.:

time += frames_in_pipeline * 1/fps

where time is measured in seconds.

Obviously it would only work when the frame rate stays constant over some time.

Succinct
12-04-2000, 12:58 AM
Originally posted by pleopard:
I don't write games, I write real-time and near-real-time simulations. Having things happen when they are supposed to is of paramount importance here.

using time-based animation, i think, makes things happen closer to the actual time than frame-based does... especially because time-based animation has a temporal resolution limited only by available hardware, whereas frame-based is accurate only to 1/fps.

GetTickCount is accurate to about 1ms, give or take 10 or 25, but QueryPerformanceCounter is accurate to 1/1193182nd of a second on the computer i'm on right now. Frame-based animation is accurate only to the next 1/60th of a second.

the cool thing about time-based is that all you need is that 0<t<1 factor, and it doesn't matter what the frame rate is - all of the animation and events will happen at exactly the right time, even if your frame rate drops to, like, 3fps.

you can look outside a door and see 5 gajillion polys (none culled) that all need to go through the pipeline. That big missile that's coming straight for you will not change speed - you'll just see it move less frequently, but cover more distance per move.

i'd love to post a picture (an event-over-time graph) but that'd be overkill ;)

all i'm saying is: as the fps gets higher, all that happens is the animation gets smoother, and everything happens even closer to the exact time it's supposed to.

when the fps drops, the only noticeable difference with a time-based engine is the choppiness between frames, not a complete slowdown of the system. i'm curious how you deal with pipeline overload using fixed frame rates in real-time. Makes me think of the old 8-bit nintendo when you had too many sprites running around at the same time.


i dunno, sorry to be annoying, but i'm not used to seeing words like "paramount" being thrown about ;)

cheers, everyone :D

-Succinct

zedus
12-04-2000, 07:17 AM
I understand how to make it time-based, but what do I do with the keyboard input?

Succinct
12-04-2000, 09:30 AM
Hunnh? /:|

Where'd that come from?

I dunno, what r u doing w/ it?

it shouldn't need any changing, i don't think...

Michael Steinberg
12-04-2000, 12:51 PM
Well, you'll have to track how long a certain key has been pressed during the last frame. It should be obvious that the world can only be transformed before a frame, not in between.
That took me quite a while to get my head around... Oh, I still use the WM_KEYUP/WM_KEYDOWN messages for that stuff. For my realtime engine I wonder if any WM_KEYDOWN messages arrive during a frame, since it won't do a DispatchMessage. I think that's the problem I currently have... I don't want to use the GetAsyncKeyState approach, since I'd have to iterate through all the keys every 1/1000th of a second to do it correctly.
Maybe it would be best to create a hook function for the application that doesn't post the WM_KEYDOWN message but sends it to the application. Or are they already sent?

Succinct
12-04-2000, 03:59 PM
Hey, mike, i agree. screw direct input! i'm going w/ WM_KEYDOWN! ;)

seriously, though, i don't work on my own computer, so i can't install dx.

i pretty much just put all of the WM_KEYx messages in a linked list along w/ the time, and go through it every frame.

wrote a whole wrapper for it.

nice to see someone else in my boat (for possibly different reasons ;) )

i've never missed a key

Michael Steinberg
12-05-2000, 07:59 AM
But what about that multiple-frames-in-pipeline problem?

pleopard
12-06-2000, 01:07 PM
Originally posted by Succinct:
using time-based animation, i think, makes things happen closer to the actual time than frame-based does... especially because time-based animation has a temporal resolution limited only by available hardware, whereas frame-based is accurate only to 1/fps.

GetTickCount is accurate to about 1ms, give or take 10 or 25, but QueryPerformanceCounter is accurate to 1/1193182nd of a second on the computer i'm on right now. Frame-based animation is accurate only to the next 1/60th of a second.


Actually, I use an event calendar for the kinematics simulation, which is time based. I use QueryPerformanceCounter to drive the timer so that I can make very small adjustments to my clock whenever needed. The rendering engine is frame based in that it simply takes a picture of the current kinematic state at a constant frame rate.

zedus
12-08-2000, 07:13 AM
Well, all the camera and player movement looks great. I checked it on a computer stronger than mine and it looks great (and also on slower computers - it doesn't look great there, but at least it's timed correctly).
The only problem left is the animation interpolation. Let's say I have to cycle between frames 1-20, and I want 1/30 of a second to pass between each frame - and of course there should be subframes created in order to make the animation smooth.
What techniques are usually used?

pleopard
12-08-2000, 08:37 AM
As I understand the question, I would just set the frame rate limiter to 30Hz and render each of frames 1-20. I believe my understanding of your question is flawed - could you clarify?

zedus
12-08-2000, 11:23 AM
Inspired by the answers of people here, I DIDN'T create an fps limiter but created a TIME-BASED engine. So sometimes it renders 180 FPS and sometimes 40 FPS. It's very easy to apply to movement and rotation, but my problem (which I've already kinda solved) was with animating the quake 2 models my engine loads - creating subframes between the model's keyframes to make the animation smooth and time based as well.

12-08-2000, 02:01 PM
The solution to animating MD2 models is to set a "cycle time" for each animation cycle, and then linearly interpolate between the two closest frames. I.e. suppose your cycle is N frames starting at F, and the duration of the entire cycle is D. Then your animation vertex is calculated like so for time T (0 <= T < D):

int ix = (int)floor((float)N*T/D);     // keyframe just before T
float terp = (float)N*T/D - (float)ix; // fraction of the way to the next one
float omterp = 1.0f - terp;
int ix2 = ix+1; if (ix2 == N) ix2 = 0; // next keyframe, wrapping the cycle

for (int j=0; j<sizeof(outMesh)/sizeof(outMesh[0]); j++)
{
outMesh[j].x = inMeshes[ix+F][j].x*omterp + inMeshes[ix2+F][j].x*terp;
outMesh[j].y = inMeshes[ix+F][j].y*omterp + inMeshes[ix2+F][j].y*terp;
outMesh[j].z = inMeshes[ix+F][j].z*omterp + inMeshes[ix2+F][j].z*terp;
}

You can also get fancier and use better-than-
linear interpolation, such as cubic hermite
or (for you signal processing buffs) some
form of sinc-based location resampler, but
the linear interpolation is typically good
enough; especially for a fast shooter.

zedus
12-09-2000, 09:13 AM
well, I didn't quite understand - can you explain that again? or maybe somebody else can?
but 10x anyway!

Succinct
12-12-2000, 06:06 AM
Originally posted by pleopard:
The rendering engine is frame based in that it simply takes a picture of the current kinematic state at a constant frame rate.

wow, what a nifty idea... so the actual physics is based on time, but the renderer just takes snapshots when the screen needs refreshing... kewl

very good idea

*applauds Pleopard*

i assume this only works when the rendering is done in a separate thread, so i'm putting my rendering into a separate thread just to implement this. i'm sick of using 100% of my cpu's time rendering 120 fps...

thx for the insight - succinct

[This message has been edited by Succinct (edited 12-12-2000).]

Michael Steinberg
12-12-2000, 06:57 AM
I don't think it needs multiple threads. The idea is to parameterize the movement of the world, so that given the time you know where the objects are. Then, before each frame, you get the time, move the objects according to it, and treat that as a snapshot. If you have a constant frame rate, you could even increment the time a bit, so that the rendering is up to date in the middle of the frame. Yeah, sounds a bit confusing.

Magnus W
12-12-2000, 09:00 PM
Michael S> I think that would cause a problem for physics/simulations, since it would force them to use a time step equal to that of the rendering functions. Since (all?) simulations solve differential equations, using a large timestep would cause instability/inaccuracy issues. Using a separate, internal sample rate for the real-time stuff should probably work, though...

Michael Steinberg
12-19-2000, 08:35 AM
Oh, well, I don't really get what you mean.
You mean simulations that can't be evaluated as a direct function of time because they're so complex that they need an iterative approach with fixed time steps?