Any way to share a back buffer across windows in OpenGL?

I would like to find out if there is any way to share a single back buffer that is used to render to multiple independent windows. As far as I can tell, with WGL you have to first set a pixel format on the window before using OpenGL, which presumably allocates a back buffer. That seems to imply that every window used for output MUST have its OWN back buffer to display any OpenGL content. That seems wasteful if you have a large (presumably unconstrained) number of windows.

I am considering converting a Direct3D-driven windowed application to OpenGL. The application creates a single buffer and uses it to draw the graphics for all of the windows (including the little menu popups). The buffer is reused in a round-robin manner to paint all of the windows, one at a time. This approach allows any number of windows to be created without using extra video memory, while still keeping the benefits of double buffering. In Direct3D this is possible because the Present call has the following signature:

HRESULT Present(
    CONST RECT *pSourceRect,
    CONST RECT *pDestRect,
    HWND hDestWindowOverride,
    CONST RGNDATA *pDirtyRegion
);

By modifying “hDestWindowOverride” we can direct the output to any window/location.

With OpenGL we seem to be stuck with a combination of:

void glViewport(GLint x, GLint y, GLsizei width, GLsizei height);
BOOL SwapBuffers(HDC hdc);

This does not seem to provide any way to use the same buffer for multiple window output. Any suggestions?

Also, is there any way to specify the size of the back buffer explicitly, or must it always be the same as the size of the window?

Originally posted by mikeant:
[b]I would like to find out if there is any way to share a single back buffer that is used to render to multiple independent windows. As far as I can tell, with WGL you have to first set a pixel format on the window before using OpenGL, which presumably allocates a back buffer. That seems to imply that every window used for output MUST have its OWN back buffer to display any OpenGL content. That seems wasteful if you have a large (presumably unconstrained) number of windows.
[…]
This does not seem to provide any way to use the same buffer for multiple window output. Any suggestions?

Also, is there any way to specify the size of the back buffer explicitly, or must it always be the same as the size of the window?[/b]
There’s a common optimization that ICDs use to minimize the memory footprint allocated to backbuffers. It’s called “unified backbuffer”: they allocate a single desktop-size backbuffer shared across all the windows, so no matter how many windows you have, you will always consume the same memory for the backbuffer. Obviously this approach cannot be used in “compositing environments” like OSX, where two overlapping regions can be visible at the same time (e.g. translucency).

The reason why each (non-occluded) window must have its own backbuffer is because you can call SwapBuffers even if there’s no GL context attached to the HDC.
In OpenGL the pixelbuffers (the color & ancillary buffers) are detached from the OpenGL contexts, and you can mix and match pixelbuffers with OpenGL contexts at will (as long as both share the same pixelformat and/or are driven by the same driver).

Also, your method of using only one window-sized backbuffer for all the OpenGL contexts forces you to regenerate the OpenGL rendering on every repaint, which may be expensive.

There’s a common optimization that ICDs use to minimize the memory footprint allocated to backbuffers. It’s called “unified backbuffer” and what they do is to allocate a single desktop-size backbuffer shared across all the windows, so no matter how many windows you have, you will always consume the same memory for the backbuffer. Obviously this approach cannot be used in “compositing environments” like OSX, where two overlapping regions can be visible at the same time (i.e. translucency).
[i]
  • Are you aware of which manufacturers currently utilize the optimization? Is it essentially industry-standard at this point? Also, is there any documentation or web sites that discuss it in depth?
  • It seems like it should be possible to reuse back-buffers even in translucent environments. In those cases, you would simply need one extra buffer for temporary blending.
[/i]

In our case, the 3D API is used rather unconventionally, to render 2D user interfaces. So, what we would like to avoid is allocating a buffer for every window (which would constrain the number of windows that can be created based on video memory!), as well as the performance overhead of allocating temporary buffers when little popup windows are created/destroyed.

The reason why each (non-occluded) window must have its own backbuffer is because you can call SwapBuffers even if there’s no GL context attached to the HDC.
[i]
  • And what good would calling SwapBuffers actually do, if you are not using GL? Can you actually double-buffer GDI with this function? Windows docs seem surprisingly fuzzy about what exactly SetPixelFormat does and how buffers are managed.
[/i]

So the bottom line is:
- In WGL there is ABSOLUTELY NO way to either specify the size of the back buffer, or reuse buffers among separate windows. For example, it is not possible to allocate a buffer of 320x240 size and use it for a window of size 640x480.

Is this correct, or are there perhaps some extensions that could be of help?

Originally posted by mikeant:
[b]
  • Are you aware of which manufacturers currently utilize the optimization? Is it essentially industry-standard at this point? Also, is there any documentation or web sites that discuss it in depth?
[/b]
I know NVIDIA uses it (UBB is NVIDIA’s name for it). I believe it used to be a selling-point differentiator between Quadros and GeForces (the GeForce driver would turn it off).

[b]
  • It seems like it should be possible to reuse back-buffers even in translucent environments. In those cases, you would simply need one extra buffer for temporary blending.
[/b]
I don’t think so. Normally the compositor is asynchronous to each window’s rendering, so it needs to access each window’s backbuffer to perform the composition without forcing a redraw.
Also, the composition may not be as simple as a one-to-one pixel translucency; you may have zooming, rotation, deformation, etc. going on.

[quote]The reason why each (non-occluded) window must have its own backbuffer is because you can call SwapBuffers even if there’s no GL context attached to the HDC.

  • And what good would calling SwapBuffers actually do, if you are not using GL? Can you actually double-buffer GDI with this function? Windows docs seem surprisingly fuzzy about what exactly SetPixelFormat does and how buffers are managed.
[/quote]
I didn’t say that you are not using GL; I said with no context attached.

[b]Is this correct, or are there perhaps some extensions that could be of help?[/b]
Like I said, not having a backbuffer per double-buffered non-occluded window would violate the WGL spec.
There are extensions to perform fast copies, save-unders, etc. (ARB_buffer_region comes to mind, the new FramebufferObject, some apps also use glCopyPixels to the frontbuffer to simulate swapbuffers, others use swapbuffer_hint, etc.), but there’s no extension for doing the kind of thing you want to do.

You could try using a Framebuffer Object the size of your desktop, create all your windows single-buffered (it’s very likely that the frontbuffer is always “unified”), then perform all your rendering to the framebuffer object and copy it to each window’s frontbuffer using glCopyPixels (this should minimize flickering).
Each window will still need a frontbuffer of its own, but you won’t be able to overcome that (unless you use your own fake windowing system on a desktop-sized window).

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.