I was wondering if there is a method to force an initial beginning of vsync in an attempt to get the monitors lined up every couple seconds. Kind of like a group of guys resetting their clocks together. Is this possible, or is vsync regulated purely by hardware?
It’s a hardware thing. That’s not to say your GPU vendor couldn’t provide software control (e.g. an API) that lets you “nudge the scan-out clocks” to keep them in sync with your own framelock method. Get with your GPU vendor and see. That said, I’m not aware of any public software APIs for NVIDIA’s latest consumer GPUs that allow this – it’s part of the “value add” of Quadros and their G-Sync (framelock) capability.
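For what it’s worth, the Quadro-side hooks are public OpenGL extensions: WGL_NV_swap_group (GLX_NV_swap_group on X11) lets an application tie its buffer swaps to a hardware swap group and barrier when the framelock hardware is present. A minimal sketch of what that looks like on Windows (assumes a current GL context on hDC; error handling omitted):

#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLJOINSWAPGROUPNVPROC)(HDC hDC, GLuint group);
typedef BOOL (WINAPI *PFNWGLBINDSWAPBARRIERNVPROC)(GLuint group, GLuint barrier);

void joinFramelock(HDC hDC)
{
    PFNWGLJOINSWAPGROUPNVPROC wglJoinSwapGroupNV =
        (PFNWGLJOINSWAPGROUPNVPROC)wglGetProcAddress("wglJoinSwapGroupNV");
    PFNWGLBINDSWAPBARRIERNVPROC wglBindSwapBarrierNV =
        (PFNWGLBINDSWAPBARRIERNVPROC)wglGetProcAddress("wglBindSwapBarrierNV");
    if (!wglJoinSwapGroupNV || !wglBindSwapBarrierNV)
        return;                        // no framelock support exposed

    wglJoinSwapGroupNV(hDC, 1);        // put this swap chain in swap group 1
    wglBindSwapBarrierNV(1, 1);        // tie group 1 to hardware barrier 1
}

On GPUs without the framelock board these entry points typically just aren’t exposed, which is the “value add” mentioned above.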
Is the timing a function of the monitor’s refresh rate and refresh time?
The monitor’s supported refresh rate range bounds the GPU’s useful scan-out rates: if you or your OS pick scan-out frequencies that fall within your monitor’s supported range, it should work. Note that the “refresh rate” range is just the “vertical sync frequency” range; the monitor’s “horizontal sync frequency” range is just as important here.
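To make the vertical/horizontal relationship concrete, here’s the back-of-envelope math for the standard CEA-861 1080p @ 60 Hz timing (the 2200×1125 totals-including-blanking come from that spec, not from anything above):

#include <stdio.h>

// Mode timing arithmetic for CEA-861 1080p @ 60 Hz.
int main(void)
{
    double vrate_hz  = 60.0;    // vertical sync (refresh) frequency
    double vtotal    = 1125.0;  // total lines per frame, incl. blanking
    double htotal    = 2200.0;  // total pixel clocks per line, incl. blanking

    double hrate_khz = vrate_hz * vtotal / 1000.0;        // = 67.5 kHz
    double pclk_mhz  = vrate_hz * vtotal * htotal / 1e6;  // = 148.5 MHz

    printf("hsync = %.1f kHz, pixel clock = %.1f MHz\n", hrate_khz, pclk_mhz);
    return 0;
}

Both the 67.5 kHz horizontal rate and the 148.5 MHz pixel clock have to fall inside what the monitor advertises, not just the 60 Hz.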
If I purchased monitors that had extremely fast refresh times (not rates), could I minimize the issue caused by vsync?
If your application could draw at those rates, yes. That means less time to draw each frame, though: roughly 16.7 ms per frame at 60 Hz, but only about 4.2 ms at 240 Hz. At some point (I don’t know exactly where), you’d stop being able to perceive a difference between the frames and wouldn’t much care anymore. But you might need beefier hardware to do it.
How does the graphics card sync up with the refresh timing of the monitor?
You’d need to ask a hardware guy about that. However, IIRC it’s actually the other way around: the GPU provides the clock signal in the scan-out video signal, and it’s the monitor’s job to “sync” to it, if it can. The GPU’s the boss; the monitor is playing catch-up. IIRC from reading, this is how it works with DVI and HDMI at least. Don’t quote me on that though – I’m not a hardware guy.
However, how the GPU determines the valid sync rate ranges for your attached monitor is nowadays via video-cable wire protocols such as DDC and EDID. These allow the GPU to ask the monitor “what it can do”, which the OS uses to determine which modes it will let you configure the GPU for, given your monitor. This is tons better than it used to be, when you had to tell the OS what make/model your monitor was, and it looked that up in a database to get the scan-out range information. And if the database didn’t know your monitor, you had to find out its scan-out ranges yourself and tell the OS so it could make reasonable decisions on mode selection (…or in some cases you actually provided your own full mode timing specifications – ugh!).
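To give a feel for what the GPU actually reads over the wire, here’s a sketch of pulling those sync ranges out of a raw 128-byte EDID base block. The descriptor offsets and the 0xFD “display range limits” tag come from the EDID 1.3 spec; how you obtain the raw bytes is OS-specific and not shown here.

#include <stdio.h>

// Sketch: extract the monitor's advertised sync ranges from a raw
// 128-byte EDID base block. The four 18-byte descriptor blocks live
// at offsets 54/72/90/108; a "display range limits" monitor
// descriptor starts 00 00 00 FD (per EDID 1.3).
void printSyncRanges(const unsigned char edid[128])
{
    static const int offs[4] = { 54, 72, 90, 108 };
    for (int i = 0; i < 4; ++i) {
        const unsigned char *d = edid + offs[i];
        if (d[0] == 0 && d[1] == 0 && d[2] == 0 && d[3] == 0xFD) {
            printf("vertical:   %d-%d Hz\n",  d[5], d[6]);   // refresh rate range
            printf("horizontal: %d-%d kHz\n", d[7], d[8]);   // hsync range
            printf("max pixel clock: %d MHz\n", d[9] * 10);
        }
    }
}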
Finally, if I wanted to control vsync myself through code, is there a signal that I can capture within OpenGL?
If you want to sync to vsync in your code, enable vsync and do this:
SwapBuffers( hDC );   // hDC = your window's device context; queues the buffer swap
glFinish();           // blocks until the GPU is done, i.e. until the swap completes
// I should be pretty close to vsync here
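As a rough illustration, you can use that trick to estimate the refresh interval yourself. This sketch assumes Windows/WGL, a current GL context on hDC, and WGL_EXT_swap_control for enabling vsync; in practice you’d average over many swaps rather than timing a single one:

#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void measureVsyncPeriod(HDC hDC)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);         // enable vsync (swap interval 1)

    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    SwapBuffers(hDC);                  // prime: land on a vsync boundary
    glFinish();
    QueryPerformanceCounter(&t0);

    SwapBuffers(hDC);                  // next swap completes one refresh later
    glFinish();
    QueryPerformanceCounter(&t1);

    double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                       / (double)freq.QuadPart;
    printf("~%.2f ms between vsyncs (~%.1f Hz)\n", ms, 1000.0 / ms);
}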
Keep in mind though that GPUs vary in their ability to sync to multiple attached monitors. Consult your vendor’s docs for details.
But if you want to “control” when vsync occurs, the GL API doesn’t provide this capability. I think this is your “force an initial beginning of vsync” question again, right?