Yet another Framerate question

I'm working on a 2D game engine using SDL and OpenGL. For calculating the frame rate I use the following code.

startTime = SDL_GetTicks();
frames = 0;
while (!doneWithGame) {
    currState->update(apg);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    currState->render(apg);

    frames++;
    currentTime = SDL_GetTicks();
    if (currentTime - startTime > 1000) {   // one second has passed
        FPS = frames;                       // frames counted during that second
        frames = 0;
        startTime = currentTime;
    }

    SDL_GL_SwapBuffers();
    std::cout << "The FPS is : " << FPS << std::endl;
}

I hope this is correct in the first place. So here's my question:

I use an NVIDIA GeForce 9500 GT with a quad-core processor clocked at 2.66 GHz and 3 GB of RAM. I get an average of 60 FPS when rendering a blank screen. A couple of years ago I was using Slick2D, a game library built on LWJGL, for making small games, and with it I used to get 1500-2000 FPS on average for a blank screen (even my games used to render at around 200 frames on average :P). So my question is: why this difference?

Am I calculating FPS wrongly?
Or does Slick2D use a wrong FPS calculation?
Or is my coding just that poor?
Or whatever… just please explain this difference.

Thanks in advance!!! :)

And the same old answer to the question about the framerate limit: disable v-sync. You can do it either directly, using OpenGL calls like this, or through SDL via SDL_GL_SetAttribute( SDL_GL_SWAP_CONTROL, 0 ); or something like that.
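
For reference, a minimal sketch of the SDL route (the window size and overall program shape here are just placeholders, and I'm assuming SDL 1.2); the attribute has to be set before the GL context is created, i.e. before SDL_SetVideoMode():

#include <SDL/SDL.h>

int main(int argc, char* argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    // 0 = swap immediately (no v-sync), 1 = wait for the vertical retrace
    SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 0);

    // any size/bpp will do for the sketch
    if (SDL_SetVideoMode(640, 480, 0, SDL_OPENGL) == NULL)
        return 1;

    // ... main loop, rendering, SDL_GL_SwapBuffers(), etc. ...

    SDL_Quit();
    return 0;
}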

Thanks, bro!! I used SDL_GL_SetAttribute( SDL_GL_SWAP_CONTROL, 0 ); and now I'm getting 500 FPS. Any other tips on increasing the frame rate? Especially on slow systems?

P.S.: I hope your answer means that my FPS calculation is correct. :)

Also, I have one doubt. If I use the OpenGL method you mentioned, I lose portability, since the call varies between Windows and Linux. But if I use SDL_GL_SWAP_CONTROL, I guess it works on both Linux and Windows (haven't tried it yet, though). So I need to know whether both produce the same result. If yes, then I'm happy with the SDL method. If not, then please tell me what to do.

I do not use SDL, but I think it should work the same; test it yourself. Although if you are using SDL 1.3+, you should probably be able to just call SDL_CreateRenderer without the SDL_RENDERER_PRESENTVSYNC flag at application initialization, and it won't be enabled by default.
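
Something along these lines (a rough sketch of the newer SDL 1.3 / SDL2-style API; the function name, window title, and size are all arbitrary):

#include <SDL.h>

bool initWithoutVsync(SDL_Window** window, SDL_Renderer** renderer)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return false;

    *window = SDL_CreateWindow("game", SDL_WINDOWPOS_CENTERED,
                               SDL_WINDOWPOS_CENTERED, 800, 600, 0);

    // leave out SDL_RENDERER_PRESENTVSYNC and presents won't wait for the retrace
    *renderer = SDL_CreateRenderer(*window, -1, SDL_RENDERER_ACCELERATED);

    return *window != NULL && *renderer != NULL;
}

(If you keep a raw GL context rather than the renderer, I believe SDL_GL_SetSwapInterval(0) does the same job in the newer versions.)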

Your algorithm may work, but it's not very precise. You can use Fraps; it will show an FPS overlay over your application's window.
Also, calling std::cout or any ostream function in your rendering loop is not really a good idea, especially every frame.
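
If you want a steadier number than counting frames per wall-clock second, one common trick (just a sketch, reusing the variable name from your post; the seed value and smoothing factor are arbitrary) is to smooth the per-frame time and derive FPS from that:

Uint32 lastTicks  = SDL_GetTicks();
float  avgFrameMs = 16.0f;               // seed guess, roughly 60 FPS

while (!doneWithGame) {
    Uint32 now   = SDL_GetTicks();
    Uint32 delta = now - lastTicks;      // ms spent on the previous frame
    lastTicks    = now;

    // exponential moving average keeps the readout from jumping around
    avgFrameMs = 0.9f * avgFrameMs + 0.1f * (float)delta;
    float fps  = 1000.0f / avgFrameMs;

    // ... update, render, SDL_GL_SwapBuffers(), and print fps only
    //     every half second or so, not every frame ...
}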

And do you really need to update your logic more than 60 times per second? I'd move it to a separate thread, limited to 30-60 updates per second, but that might bring your application to a new level of complexity because of synchronization.
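
If threads feel like overkill, a fixed-timestep accumulator in the same loop caps the update rate without any synchronization. A sketch built around the names from the original post; the 33 ms step is just an assumption for roughly 30 updates per second:

const Uint32 STEP_MS     = 33;           // ~30 logic updates per second
Uint32       accumulator = 0;
Uint32       lastTicks   = SDL_GetTicks();

while (!doneWithGame) {
    Uint32 now   = SDL_GetTicks();
    accumulator += now - lastTicks;
    lastTicks    = now;

    // run the logic in fixed steps, render as often as the GPU allows
    while (accumulator >= STEP_MS) {
        currState->update(apg);
        accumulator -= STEP_MS;
    }

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    currState->render(apg);
    SDL_GL_SwapBuffers();
}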

Many thanks!! That helped.