Unstable rendering frame time.

Hi, All!

My app has unstable frame rendering times:


14.137934
14.166943
14.118464
14.153534
20.779812
7.926734
14.098163
14.832049
20.1941
8.625633
14.053908
14.261358
21.711407
6.821295
14.076838
14.757764
14.218662
14.161402
14.388181
14.362341
14.173047
22.216944
6.789581
14.162337
13.978483
14.253107
14.257589
22.707865
6.632969
14.008824
14.481477
13.961395
14.093753
14.179714
14.334802
14.653909
14.16855
14.147377
14.190448
21.407948
6.755494
14.105653
14.225424
14.679986
14.242189
20.508795
7.796731
14.11932
14.375007

Several frames in a row look normal, but from time to time I get a pair of frames where the first is expensive and the second is cheap.

Is this OK, or do I need to find the reason for it?

Thanks for the help.

[QUOTE=nimelord;1291477]
My app has unstable frame rendering times:

Several frames in a row look normal, but from time to time I get a pair of frames where the first is expensive and the second is cheap.

Is this OK, or do I need to find the reason for it?[/QUOTE]

I think you should definitely dig into this and understand what’s going on here.

If these are frame times in milliseconds (ms), it looks somewhat like you’re running with a 70Hz VSync, but every so often one of your frames blows past its 1/70 sec (14.286 ms) frame budget and eats into the next frame. Notice that each expensive/cheap pair sums to roughly two frame intervals (e.g. 20.78 + 7.93 ≈ 28.7 ms ≈ 2 × 14.3 ms), which is what you’d expect when one frame misses its swap deadline and the next one catches up. 70Hz is an odd VSync rate (60Hz is more common), but it could be an artifact of how you’re timing. The timing disparity could come from your app overrunning its frame budget with internal processing, from internal GL driver queuing, or from your workload not pipelining consistently.

First, how are you timing? You should measure the elapsed time between the same point in the frame for frames N and N+1 (e.g. right after SwapBuffers). For timing purposes only, I’d do this (on a desktop GPU): SwapBuffers(); glFinish(); then measure the elapsed time since this same point in the previous frame.

The glFinish() tends to help prevent internal GL driver queueing (read-ahead) so you can get consistent, meaningful timing statistics for your entire frame.

It can also be useful to disable VSync when doing this timing test, so you see how much wall-clock time it actually takes to render your frames, with no idle waiting time.
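I don’t know what language or binding you’re using, but here is a rough sketch of that measurement, written in Java with LWJGL 3 / GLFW purely as an assumption (the window setup and names are mine; adapt them to whatever you actually use):

```java
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;

import org.lwjgl.opengl.GL;

public class FrameTimer {
    public static void main(String[] args) {
        if (!glfwInit()) throw new IllegalStateException("GLFW init failed");
        long window = glfwCreateWindow(800, 600, "frame timing", 0, 0);
        glfwMakeContextCurrent(window);
        GL.createCapabilities();
        glfwSwapInterval(0);                  // disable VSync for the wall-clock test

        long lastMark = System.nanoTime();
        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT);     // ... your rendering goes here ...

            glfwSwapBuffers(window);
            glFinish();                       // drain the driver's queue so the
                                              // measurement covers the whole frame

            long now = System.nanoTime();     // elapsed time since the same point
            double ms = (now - lastMark) / 1.0e6;  // in the previous frame
            lastMark = now;
            System.out.printf("%.3f ms%n", ms);

            glfwPollEvents();
        }
        glfwTerminate();
    }
}
```

With VSync off you see your real render cost; switch it back on (swap interval 1) and you should see quantized swap-to-swap times like the list you posted.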

If you are using a programming language with built-in garbage collection, it could also be that you are generating lots of garbage, and the spikes are the moments when the garbage collector has to stop your application to do its work. Of course, this does not apply if you are using a language with explicit memory management.
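One way to check this, assuming a JVM-style runtime (just a guess at this point), is to run with -verbose:gc (or -Xlog:gc on newer JVMs) and see whether the log lines coincide with the spikes, or to sample the GC beans once per frame, roughly like this:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

/** Samples cumulative GC activity; call check() once per frame. */
public class GcWatch {
    private final List<GarbageCollectorMXBean> beans =
            ManagementFactory.getGarbageCollectorMXBeans();
    private long lastCount = 0, lastTimeMs = 0;

    /** Returns a message if any collection ran since the previous call, else null. */
    public String check() {
        long count = 0, timeMs = 0;
        for (GarbageCollectorMXBean b : beans) {
            count += b.getCollectionCount();
            timeMs += b.getCollectionTime();
        }
        String msg = null;
        if (count != lastCount) {
            msg = (count - lastCount) + " collection(s), ~"
                    + (timeMs - lastTimeMs) + " ms this frame";
        }
        lastCount = count;
        lastTimeMs = timeMs;
        return msg;
    }
}
```

Create one GcWatch and call check() right next to your frame-timing line; if the expensive frames line up with non-null messages, the collector is your culprit.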

Thank you guys!

VSync is already disabled. When point lights are off, I get 145–170 FPS.

Yes, I’m using Java.
I thought about the garbage collector.
My app calls the new operator very rarely at runtime,
so used memory grows very slowly,
and the GC has no reason to collect as often as every few frames (as you saw).

Maybe it is some sort of minor collection?
I’ll try to find out.

It does not have to be your code that generates the garbage. If you are using some kind of library, framework, or binding, maybe that is generating garbage too (although a good one shouldn’t). Use a profiler if you want to be sure about it.

I’m encountering the same problem. Were you able to solve this?

No, sorry.
I haven’t had enough time to play with OpenGL.

But I have one thought: it may be related to how the OS schedules the application’s main thread.
You could try running your engine in a separate thread and see the result, as in the sketch below.
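Something like this, as a very rough sketch using GLFW/LWJGL names (my assumption, not necessarily your setup); note that the GL context has to be made current on the render thread, and OS events should still be pumped on the main thread:

```java
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;

import org.lwjgl.opengl.GL;

public class ThreadedLoop {
    public static void main(String[] args) throws InterruptedException {
        if (!glfwInit()) throw new IllegalStateException("GLFW init failed");
        long window = glfwCreateWindow(800, 600, "threaded render", 0, 0);

        Thread render = new Thread(() -> {
            glfwMakeContextCurrent(window);   // context lives on the render thread
            GL.createCapabilities();
            while (!glfwWindowShouldClose(window)) {
                glClear(GL_COLOR_BUFFER_BIT); // ... engine update + draw here ...
                glfwSwapBuffers(window);
            }
        }, "render");
        render.start();

        // The main thread keeps pumping OS events, which GLFW expects here.
        while (!glfwWindowShouldClose(window)) {
            glfwWaitEventsTimeout(0.01);
        }
        render.join();
        glfwTerminate();
    }
}
```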

If it solves the problem, please let me know.