Implementing a framerate limit

I am not sure which forum I should post this in.

What I am really trying to do is control the framerate / update rate of my OpenGL application (without forcing vsync) to 60 times per second.

(I am working on a 3D fighting game that requires high-precision collision detection.)

This is the code that demonstrates the problem (I use GLFW for the timer and initialization):

//-------declaration--------------
double savedTime = 0;
double updateTimer = 0;
const double updateInterval = 1.0 / 60.0;

//------in framerate control function----------
double currentTime = glfwGetTime();
double elapsedTime = currentTime - savedTime;
savedTime = currentTime;

updateTimer += elapsedTime;

if (updateTimer >= updateInterval) {
    update(updateInterval);  // update game logic / keyframe animation
    render();                // draw scene and swap buffers
    updateTimer = 0;
}

This is very smooth when the monitor refresh rate is set to 60 Hz. But when I try other refresh rates like 75, 85, or 100 Hz, the animation becomes very jerky (the frame rate is still 60).

I tried changing the control variables (position, speed, quaternion) of my application from float to double and using the “double” versions of the OpenGL calls (glTranslated, etc.), but the motion is still jerky.

Can someone suggest how to implement a frame limit mechanism that works well with all refresh rates?

Well, if your screen is being updated more often than your game, you will see jerkiness. When you change the screen refresh rate, you must make the corresponding change in your code.

I don’t know of any method that works with all refresh rates. However, if you make sure to update your game often (e.g. updateInterval = 1/100), you can be almost sure that you will be updating the game more often than the screen.

@thinks: well, that will not really solve the problem. If the display refresh rate is not an integer multiple of the engine rate, you will still have jerkiness.

There are two options: interpolation or extrapolation.
In both cases, you need to desynchronize the engine step (60 fps) and the drawing step (at the display rate, if your card can keep up). Use separate threads.
And when drawing, you have to know how much time has passed since the latest engine step.

Interpolation between the previous and current engine steps:
PRO: guaranteed to be smooth and never overshoot
CON: response time is increased a bit (you can be late by up to 1 engine step)

Extrapolation from the previous and current engine steps, adding real display time:
PRO: response time is “perfect”
CON: you may overshoot (a bit of interpenetration between objects; some jerkiness may appear on sudden changes of motion)

Of course, if you feel adventurous, you may try higher-order interpolation/extrapolation, involving more engine steps from the past.
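If it helps, here is a minimal single-threaded sketch of the interpolation variant; the accumulator below plays the role of the separate engine thread, and State, update() and renderInterpolated() are placeholder names, not anything from the code above:

// Single-threaded sketch of the interpolation variant.
#include <GLFW/glfw3.h>   // assuming GLFW 3; GLFW 2 uses <GL/glfw.h>

struct State { double x = 0.0; };            // stand-in for the full game state

const double dt = 1.0 / 60.0;                // fixed engine step
State previousState, currentState;
double accumulator = 0.0, savedTime = 0.0;

void update(State& s, double step)           // engine step, no OpenGL calls here
{
    s.x += 1.0 * step;                       // e.g. move at 1 unit per second
}

void renderInterpolated(const State& a, const State& b, double alpha)
{
    double x = a.x + (b.x - a.x) * alpha;    // blend the two latest engine states
    // ... glTranslated(x, 0.0, 0.0); draw the scene; swap buffers ...
}

void frame()                                 // call once per display refresh
{
    double currentTime = glfwGetTime();
    accumulator += currentTime - savedTime;
    savedTime = currentTime;

    while (accumulator >= dt)                // run engine steps as real time demands
    {
        previousState = currentState;        // keep the last step for interpolation
        update(currentState, dt);
        accumulator -= dt;                   // subtract, don't reset to zero
    }

    double alpha = accumulator / dt;         // fraction between the two steps
    renderInterpolated(previousState, currentState, alpha);
}

The key point is that rendering runs every display refresh, with alpha telling it where between the two latest engine states to draw, so the engine rate and the refresh rate no longer have to divide each other.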

Everyone, thanks for your answers.

Increasing the resolution of the update interval does not work when the refresh rate is lower than the desired frame limit. The game goes into slow motion (but very smooth :slight_smile: ).

I will try the interpolation/extrapolation method as suggested by ZbuffeR, but I have never tried multithreaded OpenGL before.

This is not multithreaded OpenGL: one thread issues the OpenGL commands, and another thread does the engine calculations (no OpenGL there).
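For example, a skeleton like this (std::thread, the mutex, and all the names here are my own illustration, not a complete implementation):

// Hypothetical two-thread skeleton: the engine thread never touches OpenGL.
#include <atomic>
#include <mutex>
#include <thread>
#include <GLFW/glfw3.h>

struct State { double x = 0.0; };

std::mutex stateMutex;
State sharedState;                        // written by the engine, read by the renderer
std::atomic<bool> running{true};

void engineThread()                       // fixed 60 Hz steps, no OpenGL here
{
    const double dt = 1.0 / 60.0;
    double next = glfwGetTime() + dt;
    while (running)
    {
        {
            std::lock_guard<std::mutex> lock(stateMutex);
            sharedState.x += 1.0 * dt;    // advance the game logic by one step
        }
        while (glfwGetTime() < next) {}   // crude wait until the next step is due
        next += dt;
    }
}

void renderLoop()                         // runs on the thread that owns the GL context
{
    while (running)
    {
        State copy;
        {
            std::lock_guard<std::mutex> lock(stateMutex);
            copy = sharedState;           // snapshot the engine state
        }
        // ... draw `copy` with OpenGL and swap buffers (vsync paces this loop) ...
        // ... set running = false when the window is closed ...
    }
}

int main()
{
    // ... glfwInit(), create the window/context on this thread ...
    std::thread engine(engineThread);     // engine runs beside the render loop
    renderLoop();                         // returns once running is false
    engine.join();
}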

Agreed with ZbuffeR.

Could you not just do the physics calculations with a certain time step, and then treat the rendering as a sampling of the state of the physics? Perhaps this is exactly what you meant, but that way there is neither interpolation nor extrapolation.

Say the physics has dt = 1/600. Then after ten steps of physics, a time of 1/60 has elapsed, which would trigger a render. This assumes, of course, that the rendering itself is cheap in comparison with the physics…
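As a sketch (stepPhysics() and drawScene() are placeholder names):

// Sketch of the sub-stepped idea: ten small physics steps per rendered frame.
const double dt = 1.0 / 600.0;           // physics time step

void stepPhysics(double step) { /* advance motion/collisions by `step` */ }
void drawScene()              { /* draw the current physics state, swap buffers */ }

void frame()                             // one rendered frame
{
    for (int i = 0; i < 10; ++i)         // 10 * (1/600) = 1/60 s of game time
        stepPhysics(dt);
    drawScene();                         // rendering samples the physics state
}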
