Adjusting vsync through code and general vsync question

I am working on a system that will involve splitting a world across multiple monitors and trying to sync these monitors up the best I can.

Since I am only rendering 2D, and since these machines are all located adjacent to each other on a high-speed network connection, I am attempting a brute-force approach in which a master sends sync signals to all the slaves. The sync signal would tell the slaves to swap buffers once rendering is ready. Hopefully this sync signal can approach 30-60 fps, and then everything is fine.

I have been doing some reading and I believe that if I have vsync enabled there could potentially be an issue. Despite the speed of the network connection, if one machine stalls waiting on a screen refresh, it could hold up the swaps and throw the displays out of step. I am aware of hardware-based solutions such as NVIDIA’s Quadro sync hardware to solve this. I was wondering if there is a method to force an initial beginning of vsync in an attempt to get the monitors lined up every couple of seconds, kind of like a group of guys resetting their clocks together. Is this possible, or is vsync regulated purely by hardware?

Also the more I think about vsync the less I really know.
Is the timing a function of the monitor’s refresh rate and refresh time?
If I purchased monitors that had extremely fast refresh times (not rates) then could I minimize the issue caused by vsync?
How does the graphics card sync up with the refresh timing of the monitor?
Finally, if I wanted to control vsync myself through code, is there a signal that I can capture within OpenGL?

Thanks!
Lucas

Hi Ivick,
Check WGL_EXT_swap_control for Windows and GLX_EXT_swap_control for Linux. They give you functions to change the swap interval, either to set a new interval or to enable/disable vsync with specific values.
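For example (a rough sketch, not tested: it assumes the extension is advertised in the extension string and that a GL context is current, with all checks omitted), loading and calling the swap-interval entry point looks something like this:

#ifdef _WIN32
#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *SWAPINTERVALPROC)(int);

/* interval 0 = vsync off, 1 = swap once per screen refresh, 2 = every other, ... */
void set_swap_interval(int interval)
{
    SWAPINTERVALPROC wglSwapIntervalEXT =
        (SWAPINTERVALPROC) wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(interval);                  /* WGL_EXT_swap_control */
}
#else
#include <GL/glx.h>

typedef void (*GLXSWAPINTERVALPROC)(Display *, GLXDrawable, int);

void set_swap_interval(Display *dpy, GLXDrawable drawable, int interval)
{
    GLXSWAPINTERVALPROC glXSwapIntervalEXT =
        (GLXSWAPINTERVALPROC) glXGetProcAddress((const GLubyte *) "glXSwapIntervalEXT");
    if (glXSwapIntervalEXT)
        glXSwapIntervalEXT(dpy, drawable, interval);   /* GLX_EXT_swap_control */
}
#endif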

I’m no guru on this, but it sounds like you might be confusing what your app is doing with what the GPU’s scan-out of the video signal is doing.

When you turn on vsync, you’re saying to the GPU: I don’t care when I “call” SwapBuffers, I don’t want you to actually “do” the final buffer swap on the GPU until you are between screen refreshes in your video signal scan-out. That is, it’ll never do it in the middle of scanning out a screenful of data. This is critical to eliminate “tearing” artifacts that would otherwise occur, where you display part of the previously drawn framebuffer (time t) and part of the next drawn framebuffer (time t+1) at the same time. Now that’s an issue with just one GPU output plugged into one display device.

You have a similar issue across spatial boundaries between adjacent monitors being driven by different GPUs. You always want each monitor to be displaying a “full” image generated for the “same” point in time at the “same” time. Otherwise you may have tearing issues along monitor boundaries (due to adjacent image data being from different times). If your monitor display surfaces aren’t physically adjacent (e.g. there’s a band between them), or your app’s redraw time is long, or your quality requirements aren’t super high, then you may not care about this. But if you do, then in general you need to keep the GPUs displaying a full image generated for the exact same point in time at all times. And as you mentioned, one way is to synchronize the GPUs’ scan-out clocks so they stay “in sync” with each other. I believe that is what NVidia’s G-Sync solution does, and from their docs:

If the displays are attached to different GPUs, the only way to synchronize stereo across the displays is with a G-Sync device, which is only supported by certain Quadro cards. See Chapter 30 for details.

FRAME LOCK: Frame Lock involves the use of hardware to synchronize the frames on each display in a connected system. When graphics and video are displayed across multiple monitors, frame locked systems help maintain image continuity to create a virtual canvas. Frame lock is especially critical for stereo viewing, where the left and right fields must be in sync across all displays.

In short, to enable genlock means to sync to an external signal. To enable frame lock means to sync 2 or more display devices to a signal generated internally by the hardware, and to use both means to sync 2 or more display devices to an external signal.
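For completeness, if you do end up going the Quadro + G-Sync route, the frame lock hardware is exposed to OpenGL applications through the swap-group extensions (WGL_NV_swap_group on Windows, GLX_NV_swap_group on X11). Here’s a rough sketch of the Windows flavor (assumes a Quadro with the extension present and the entry points already loaded via wglGetProcAddress; error checking omitted):

GLuint max_groups = 0, max_barriers = 0;
wglQueryMaxSwapGroupsNV(hdc, &max_groups, &max_barriers);

/* Put this window's swap chain into swap group 1: all windows in the
 * group (on this system) swap together. */
if (max_groups >= 1)
    wglJoinSwapGroupNV(hdc, 1);

/* Tie swap group 1 to swap barrier 1: the barrier synchronizes the group
 * with swap groups on other G-Sync-connected systems. */
if (max_barriers >= 1)
    wglBindSwapBarrierNV(1, 1);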

More on your other questions in a subsequent post.

I was wondering if there is a method to force an initial beginning of vsync in an attempt to get the monitors lined up every couple of seconds, kind of like a group of guys resetting their clocks together. Is this possible, or is vsync regulated purely by hardware?

It’s a hardware thing. That’s not to say that your GPU vendor couldn’t provide software control (e.g. an API) that allows you to “nudge the scan-out clocks” to keep them in sync using your own framelock sync method. Get with your GPU vendor and see. That said, I’m not aware of any public software APIs for NVidia’s latest GPUs which allow you to do this – this is the “value add” of Quadros and their G-Sync capability.

Is the timing a function of the monitor’s refresh rate and refresh time?

The monitor’s supported refresh rate range bounds the GPU’s useful scan-out rates. If you or your OS picks GPU scan-out frequencies that are within your monitor’s supported range, it should work. Note that the “refresh rate” range is just the “vertical sync frequency” range, and the monitor’s “horizontal sync frequency” range is similarly important here.

If I purchased monitors that had extremely fast refresh times (not rates) then could I minimize the issue caused by vsync?

And if your application could draw at those rates, yeah. That means less time for you to draw each frame though. At some point (dunno what that is), you’d stop being able to perceive that there is a difference in the frames, and you wouldn’t so much care anymore. But you might need more beefy hardware to do it.
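To put numbers on it: at 60 Hz you have roughly 1/60 s ≈ 16.7 ms to render each frame, while at 120 Hz that budget drops to about 1/120 s ≈ 8.3 ms.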

How does the graphics card sync up with the refresh timing of the monitor?

You need to ask a hardware guy about that. However IIRC, it’s not like that. The GPU provides the clock signal in the scan-out video signal, and it’s the monitor’s job to “sync” to it, if it can. The GPU’s the boss. The monitor is playing catchup. IIRC from reading, this is how it works with DVI and HDMI at least. Don’t quote me on that though. I’m not a hardware guy.

However, how the GPU determines the valid sync rate ranges for your attached monitor nowadays is via video-cable wire protocols such as DDC and EDID. These allow the GPU to ask the monitor “what it can do”, which the OS uses to determine which modes it will let you configure the GPU for, given your monitor. This is tons better than it used to be, where you had to tell the OS what make/model your monitor was, and it had to look this up in a database to get the scan-out range information. And if it didn’t know it, you had to find out the scan-out ranges for your specific monitor and tell the OS those so it could make reasonable decisions on mode selection (…or in some cases you actually provided your own full mode timing specifications – ugh!)

Finally, if I wanted to control vsync myself through code, is there a signal that I can capture within OpenGL?

If you want to sync to vsync in your code, enable vsync and do this:

SwapBuffers( hdc );    // on Windows; glXSwapBuffers( dpy, drawable ) on X11
glFinish();
// I should be pretty close to vsync here
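Fleshing that out a bit (just a hypothetical sketch: the helper names are made up, and it assumes vsync is enabled and a GL context is current), each slave in your setup could timestamp the moment glFinish() returns and report it to your master, so you can measure how far apart the displays’ vblanks actually are:

while (running)
{
    draw_frame();                        /* your 2D rendering (hypothetical helper) */
    SwapBuffers(hdc);                    /* queue the buffer swap */
    glFinish();                          /* doesn't return until the swap is done, i.e. ~vsync */
    double t = now_seconds();            /* hypothetical high-resolution timer */
    report_vblank_time_to_master(t);     /* hypothetical: your network sync protocol */
}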

Keep in mind though that GPU capabilities vary on the ability to sync to multiple attached monitors. Consult your vendor docs for details.

But if you want to “control” when vsync occurs, the GL API doesn’t provide this capability. I think this is your “force an initial beginning of vsync” question again, right?

Correct, you’ve already explained this.

I cannot thank you enough! There’s lots of high-level information on all of this out there, but now I really understand the inner workings.
Long term, I think the Quadro Sync is the right type of solution for what I’m trying to do.

thanks again!