Best approach to avoid frame dropping?

I’m working on generating a periodic pattern using OpenGL, and the requirement is that all frames have to be rendered and displayed on a 240Hz screen without any drops, skips, or tearing. The patterns are really simple, so rendering time is fast and won’t be an issue.

I tried using VSync to do this (mentioned in an earlier post), but it’s not reliable because I effectively only have a buffer of size 1, and if something else occupies the CPU for a moment it can cause a dropped frame. The way I understand it, with glfwSwapInterval set to 1, my main loop is clocked by the VSync interrupt, so it iterates 240 times per second if my monitor’s refresh rate is 240Hz.
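For reference, my loop is essentially the textbook GLFW setup, trimmed down below; drawPattern() is just a stand-in name for the actual pattern rendering.

[CODE]
// Minimal sketch of the VSync-clocked loop described above (GLFW + OpenGL).
// drawPattern() is a placeholder for whatever renders the periodic pattern.
#include <GLFW/glfw3.h>

static void drawPattern()
{
    // ... the (cheap) periodic pattern gets drawn here ...
    glClear(GL_COLOR_BUFFER_BIT);
}

int main()
{
    if (!glfwInit())
        return 1;

    GLFWwindow* window = glfwCreateWindow(1280, 720, "pattern", nullptr, nullptr);
    if (!window)
        return 1;

    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);                 // VSync ON

    while (!glfwWindowShouldClose(window))
    {
        drawPattern();
        glfwSwapBuffers(window);         // with swap interval 1, this is what paces the loop
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
[/CODE]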

What I want to do is send more than 240 frames per second and store the extra frames in a buffer, so that if the CPU fails to deliver a frame in time, the buffer can keep the pattern going without a glitch.

I’m wondering if there’s a way to combine buffering and VSync, so that the pattern appears at precisely 240Hz without tearing or frame dropping.

[QUOTE=tobielolness;1292531]I’m working on generating a periodic pattern using OpenGL, and the requirement is that all frames have to be rendered and displayed on a 240Hz screen without any drops, skips, or tearing. The patterns are really simple, so rendering time is fast and won’t be an issue.

I tried using VSync to do this (mentioned in an earlier post), but it’s not reliable because I effectively only have a buffer of size 1, and if something else occupies the CPU for a moment it can cause a dropped frame. The way I understand it, with glfwSwapInterval set to 1, my main loop is clocked by the VSync interrupt…[/QUOTE]

No, in general this isn’t correct.

Your application renders frames. After they’re rendered, they go into a queue of image buffers, the “swap chain”. At the tail end of that swap chain, image buffers are dequeued and used by the hardware to scan out each image.

What VSync ON (SwapInterval 1) does is prevent the driver – at the tail end of that swap chain – from changing which image buffer is being scanned out to the monitor/display in the middle of that image. This prevents tearing artifacts, where you’d see a piece of one frame on the top of the display and pieces of one or more other frames on the bottom.

At the other end of the buffer swap chain is your application, rendering frames that are inserted into this buffer. In between your app and this swap chain is the OpenGL driver, which can in some cases queue up multiple frames of GL commands ahead of the frame being rendered right now on the GPU.

So, your process might end up running at the VSync rate. But that’s only because the pipeline of rendered frames in the swap chain and queued commands in the driver is completely backed up into your render loop. It’s not because there’s no pipelining between your app and what frames are displayed.

Now there are settings in your driver with which you can control this pipelining. I think you already mentioned you are using an NVidia GTX 1070. It’ll help this discussion if you say which GPU driver version and OS you’re working with. For instance, NVidia’s GL driver provides some control over both the number of image buffers in the swap chain (as well as whether the buffers in the chain are accessed as a FIFO or not) and how many frames of GL commands it will queue (buffer) ahead of the current frame.

There are also things you can do in your application to limit this pipelining. But above, I’m just talking about the default behavior.
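For instance, one common application-side way to cap the run-ahead yourself is with fence syncs: drop a fence right after each SwapBuffers and block on the fence from N frames back. A rough sketch (not NVidia-specific; assumes a GL 3.2+ context with a loader such as glad providing the sync entry points, and the 2-frame depth is just an example):

[CODE]
// Sketch: keep the driver at most kMaxQueuedFrames frames ahead by waiting
// on a fence that was inserted kMaxQueuedFrames swaps earlier.
#include <deque>
#include <glad/glad.h>     // or whatever loader provides the GL 3.2 sync entry points
#include <GLFW/glfw3.h>

static const size_t kMaxQueuedFrames = 2;   // illustrative; tune for your latency needs
static std::deque<GLsync> gFences;

void swapWithThrottle(GLFWwindow* window)
{
    glfwSwapBuffers(window);

    // Fence marking the end of the frame we just submitted.
    gFences.push_back(glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0));

    // Too far ahead? Block until the oldest outstanding frame's commands
    // have actually been completed by the GPU.
    if (gFences.size() > kMaxQueuedFrames)
    {
        GLsync oldest = gFences.front();
        gFences.pop_front();
        glClientWaitSync(oldest, GL_SYNC_FLUSH_COMMANDS_BIT, GL_TIMEOUT_IGNORED);
        glDeleteSync(oldest);
    }
}
[/CODE]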

[QUOTE=tobielolness;1292531]What I want to do is send more than 240 frames per second and store the extra frames in a buffer, so that if the CPU fails to deliver a frame in time, the buffer can keep the pattern going without a glitch.[/QUOTE]

That’s where this pipelining in the GL driver and the swap chain can help you. The tradeoff is you have to be able to deal with inconsistent per-frame latencies on your draw thread. But for your use case, you may not care about that.

[QUOTE=tobielolness;1292531]I’m wondering if there’s a way to combine buffering and VSync, so that the pattern appears at precisely 240Hz without tearing or frame dropping.[/QUOTE]

Given sufficient performance, sure. Start at 60Hz. Get that solid. Then rinse/repeat for 120Hz and then 240Hz. Also make sure to choose a load (at least initially) that your GPU should be able to easily fit within a 4.16ms frame time.
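A simple way to check that you’re actually “solid” at each step is to time the swap-to-swap interval and log anything that overshoots the frame budget. A sketch, assuming GLFW’s timer; swapAndCheck() is just a name I made up, called in place of the plain SwapBuffers call:

[CODE]
// Sketch: flag any swap-to-swap interval that blows past the frame budget.
#include <cstdio>
#include <GLFW/glfw3.h>

void swapAndCheck(GLFWwindow* window, double refreshHz)    // 60, 120, then 240
{
    static double last = glfwGetTime();

    glfwSwapBuffers(window);

    double now    = glfwGetTime();
    double budget = 1.0 / refreshHz;                       // ~4.16 ms at 240Hz
    if (now - last > 1.5 * budget)                         // generous threshold for timer jitter
        fprintf(stderr, "possible dropped frame: %.2f ms\n", (now - last) * 1000.0);
    last = now;
}
[/CODE]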

Thank you so much for the detailed explanation. There is a lot of under-the-hood work done by OpenGL and the NVidia driver that I was unaware of.

I’m using GeForce Game Ready Driver version 399.24 on Windows 10. I have been changing NVidia driver settings through the NVidia Control Panel. Is there a way to do this programmatically?

Check out NVAPI.

In particular, see the setting identifiers defined in NvApiDriverSettings.h and the Driver Settings (DRS) APIs like NvAPI_DRS_GetSetting() and NvAPI_DRS_SetSetting().
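Roughly, setting the power-management mode through DRS looks like the sketch below. Treat it as a starting point only: the PREFERRED_PSTATE_* identifiers come from NvApiDriverSettings.h, so verify them (and add real error handling) against the NVAPI SDK version you have.

[CODE]
// Sketch: set "Power management mode" to "Prefer Maximum Performance" via NVAPI DRS.
#include <nvapi.h>
#include <NvApiDriverSettings.h>

bool preferMaxPerformance()
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;

    NvDRSSessionHandle session = 0;
    if (NvAPI_DRS_CreateSession(&session) != NVAPI_OK)
        return false;
    NvAPI_DRS_LoadSettings(session);

    // Base (global) profile; you could also look up your application's profile.
    NvDRSProfileHandle profile = 0;
    NvAPI_DRS_GetBaseProfile(session, &profile);

    NVDRS_SETTING setting   = {};
    setting.version         = NVDRS_SETTING_VER;
    setting.settingId       = PREFERRED_PSTATE_ID;            // from NvApiDriverSettings.h
    setting.settingType     = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = PREFERRED_PSTATE_PREFER_MAX;    // "Prefer Maximum Performance"

    bool ok = NvAPI_DRS_SetSetting(session, profile, &setting) == NVAPI_OK
           && NvAPI_DRS_SaveSettings(session) == NVAPI_OK;

    NvAPI_DRS_DestroySession(session);
    NvAPI_Unload();
    return ok;
}
[/CODE]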

So I changed the Power Management Mode driver setting from “Optimal Power” to “Prefer Maximum Performance”, and I get consistent behavior now: in my test program, which alternates the screen background black-white-black every frame, the transitions happen reliably without any glitches.
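In case it’s useful, the inner loop of that test is just alternating the clear color every frame (simplified):

[CODE]
// Core of the black-white-black test: a dropped or repeated frame shows up
// as a visible hiccup in the otherwise steady flicker.
while (!glfwWindowShouldClose(window))
{
    float v = (frame++ % 2 == 0) ? 0.0f : 1.0f;   // black on even frames, white on odd
    glClearColor(v, v, v, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glfwSwapBuffers(window);
    glfwPollEvents();
}
[/CODE]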

I’d have to do more tests before jumping to a conclusion, but it looks good for now. No idea why this solved it though. Thank you so much!

[QUOTE=tobielolness;1292540]So I changed the Power Management Mode driver setting from “Optimal Power” to “Prefer Maximum Performance”, and I get consistent behavior now: in my test program, which alternates the screen background black-white-black every frame, the transitions happen reliably without any glitches.

I’d have to do more tests before jumping to a conclusion, but it looks good for now. No idea why this solved it though. Thank you so much![/QUOTE]

Good catch! Unless this is a laptop on battery power, Prefer Maximum Performance is definitely what you want, as (IIRC) it prevents the driver from throttling down the GPU clocks unless it absolutely has to because thermal limits have been hit and you’re in danger of burning up your GPU.

Yeah, I’ve tripped over this before. In fact, I’ve seen cases where, with the NVidia driver’s power management set to optimize for power consumption, a GL app would cycle back and forth between hitting 60Hz solidly and dropping down to 30Hz or 20Hz for a bit. This would repeat several times within a few seconds, over and over. Measuring the draw thread’s frame time showed that things just got slower for a while and then sped back up. Flipping off the power-optimization mode and going back to Prefer Maximum Performance always fixed it.