Triple buffering

I am new to this forum and I apologize if there is another thread open on this subject that I have not been able to find.

I have a problem with triple buffering enabled. Well, a few problems, actually, and I am frustrated at not finding any relevant documentation to solve them.
We develop an application that updates portions of the view to get fast animation of some processes that are calculated on a per-pixel basis. During the animation a large portion of the image does not change, and we try to avoid regenerating the entire frame.

My first question is how triple buffering is actually implemented.

I have noticed that when triple buffering is enabled, pixel formats based on SWAP_COPY do not work, even if the entire BACK buffer is copied to the FRONT buffer. The animation gets lots of artifacts near the edges of every update. That makes sense if the render buffer and the FRONT buffer are physically exchanged after the SwapBuffers() call.
The only reliable way I’ve got it working is in SWAP_EXCHANGE mode, saving the entire bitmap contents before the SwapBuffers() call and then restoring them afterwards.

Where can I find some decent documentation on how triple buffering copies/exchanges buffers? And how do SWAP_EXCHANGE vs. SWAP_COPY matter here?
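
For reference, this is roughly how I ask for a SWAP_COPY format (a trimmed-down sketch with only the fields that matter here; note that PFD_SWAP_COPY is documented as a hint only, which the driver is free to ignore):

PIXELFORMATDESCRIPTOR pfd;
memset(&pfd, 0, sizeof(pfd));
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                 PFD_DOUBLEBUFFER | PFD_SWAP_COPY;   /* SWAP_COPY is only a hint */
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
pfd.cDepthBits = 24;

int iFormat = ChoosePixelFormat(m_hdc, &pfd);
SetPixelFormat(m_hdc, iFormat, &pfd);

/* Check what the driver actually gave us: */
DescribePixelFormat(m_hdc, iFormat, sizeof(pfd), &pfd);
if (!(pfd.dwFlags & PFD_SWAP_COPY)) {
    /* the SWAP_COPY hint was not honoured */
}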

The second question is how we can detect that triple buffering is enabled, and how to turn it on/off.

The third is how to save/restore the BACK buffer without using glDrawPixels. The point is to avoid all the 3D operations and just copy buffer colors and depths.

I understand that these issues are not common for game programmers because they rebuild every frame entirely. But I can’t imagine I am the only one concerned with pixel-based operations that involve small areas of the view.

Thanks in advance

Josep

You could try using an FBO to render offscreen, then blit it to the screen, or draw a fullscreen textured quad.

The disadvantage is an extra copy or texture lookup each frame, but it should be faster than the two copies your SWAP_EXCHANGE method would require. You still only have to redraw the changed portions of your scene.
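
Something along these lines, assuming GL_EXT_framebuffer_object plus GL_EXT_framebuffer_blit are available (just a sketch; error checks and the wglGetProcAddress plumbing are omitted, and width, height and hdc are yours):

/* Render into an offscreen FBO that persists across frames, then blit
   it to the window's back buffer. Only the dirty parts of the scene
   ever have to be redrawn into the FBO. */
GLuint fbo, colorRb;
glGenFramebuffersEXT(1, &fbo);
glGenRenderbuffersEXT(1, &colorRb);

glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, colorRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, width, height);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, colorRb);

/* each frame: redraw only the changed region into the FBO ... */

/* ... then blit to the window (the whole thing, or just the dirty rect) */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
glBlitFramebufferEXT(0, 0, width, height,    /* source rectangle */
                     0, 0, width, height,    /* dest rectangle   */
                     GL_COLOR_BUFFER_BIT, GL_NEAREST);
SwapBuffers(hdc);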

Thanks Paladinofkaos

I’ll try FBOs. That makes a lot of sense; someone mentioned pbuffers before. I don’t use textures for this kind of rendering, so that will not add any problems.

In any event, is there anywhere to look for documentation on triple buffering and, if possible, how to detect it and turn it on/off?

Thanks

Josep

Well, I’d swear I could see a lot of replies yesterday that no longer appear in this thread. No idea what happened in this forum.

OK, as far as I can see, my MSVC++ v6 does not give any hints about how to use PBOs. My card reports the extensions GL_ARB_pixel_buffer_object and GL_EXT_pixel_buffer_object, but the documentation then goes straight to samples like GenBuffers(), BindBuffer(), BufferData()… and I don’t think I am linking to the right libraries.
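
If I understand the extension mechanism correctly, these entry points are not in opengl32.lib at all; they have to be fetched at runtime through wglGetProcAddress once a context is current, with the PFNGL… typedefs coming from a glext.h downloaded from the extension registry (MSVC6 does not ship one). Something like:

/* Extension functions are not exported by opengl32.lib on Windows;
   fetch them at runtime while an OpenGL context is current. */
PFNGLGENBUFFERSARBPROC glGenBuffersARB =
    (PFNGLGENBUFFERSARBPROC)wglGetProcAddress("glGenBuffersARB");
PFNGLBINDBUFFERARBPROC glBindBufferARB =
    (PFNGLBINDBUFFERARBPROC)wglGetProcAddress("glBindBufferARB");
PFNGLBUFFERDATAARBPROC glBufferDataARB =
    (PFNGLBUFFERDATAARBPROC)wglGetProcAddress("glBufferDataARB");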

The problem with copying from/to buffers is that glDrawPixels involves a lot of 3D operations, and on some graphics cards this is very slow.

Is the only solution to do a glReadPixels and then copy the pixels directly to the screen using a Windows call?

Josep

OK, do you mean I should do a glReadPixels and then a SetDIBits onto the screen?

That brings back my earlier question about where to find reliable information on how triple buffering works, because the most sensible thing to do here is a SwapBuffers in SWAP_COPY mode, which simply does not work: selecting a pixel format with SWAP_COPY when triple buffering is enabled behaves like a pixel format with SWAP_EXCHANGE.

Thanks

Well, glReadPixels+SetDIBitsToDevice works well when triple buffering is enabled. So instead of SwapBuffers I do the Read+Set and the screen is updated.

The bitmap format needs to be BI_RGB, and the GL format is GL_BGR_EXT. It has been many, many years since I last used SetDIBitsToDevice…

Thanks a lot. I wonder how this SetDIBitsToDevice can solve all these triple buffering issues when none of the settings for SwapBuffers can.

Josep

For what it’s worth, you can usually enable and disable triple buffering from a vendor control panel. No way that I know of to do this at the application level in OpenGL.

Triple buffering is just an extension of double buffering - it solves certain latency issues while increasing memory demands. A bittersweet buffer treat.

If only parts of the framebuffer are updated every frame, could Kinetix’s buffer region help (assuming it’s present, and even used nowadays)?

Another option could be to simply have a full-screen sized texture that works as the “background”, and then draw the updates to whatever smaller viewport is needed. Perhaps even throw in a stencil, just for good measure?
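
Roughly like this (only a sketch; texture creation and filtering setup are left out, an identity projection is assumed, and backgroundTex/dirty*/changedPixels are made-up names):

/* Upload only the changed rectangle into the screen-sized texture... */
glBindTexture(GL_TEXTURE_2D, backgroundTex);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                dirtyX, dirtyY,          /* offset of the changed region */
                dirtyW, dirtyH,          /* size of the changed region   */
                GL_RGBA, GL_UNSIGNED_BYTE, changedPixels);

/* ...then redraw one full-screen textured quad; the unchanged part of
   the image comes straight from the texture and is never regenerated. */
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();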

But for your three questions, modus is correct - triple-buffering just isn’t part of OpenGL.

Tamlin

I haven’t heard of Kinetix buffer regions; what are those?

I have deliberately avoided OpenGL functions like glDrawPixels because they involve a lot of 3D math to stretch pixels unnecessarily, so I am not keen on using textures simply to dump pixels on the screen. Some of our clients run the software on industrial PCs that are as rugged as they are slow. In any event, thanks a lot for the suggestions.

Triple buffering would be fine if it did not screw up the SWAP_COPY mode. I am still trying to find some documentation on how it interacts with the pixel format mode, but I can’t find it anywhere.

For reference, here is the piece of code that substitutes for SwapBuffers(). Maybe it helps somebody else:

int PSGI_OGLbitmap::SwapBitmapDataToScreen(const int *w)  // w = {x0, y0, x1, y1} in GL window coordinates
{
    int width, height;

    // Lazily allocate a pixel buffer big enough for the whole client area.
    if (m_SwapBitmap == NULL) {
        m_SwapBitmap = malloc((m_iWidth+4) * (m_iHeight+4) * sizeof(long));
    }
    if (m_SwapBitmap != NULL) {
        // Round the width up to a multiple of 4 pixels so every 24-bit
        // scanline stays DWORD-aligned, as both SetDIBitsToDevice and
        // the default GL_PACK_ALIGNMENT of 4 require.
        width  = (w[2] - w[0] + 3) & ~3;
        height =  w[3] - w[1];

        memset(&bmp_data.bmiHeader, 0, sizeof(BITMAPINFOHEADER));
        bmp_data.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmp_data.bmiHeader.biWidth       = width;
        bmp_data.bmiHeader.biHeight      = height;  // positive = bottom-up rows, same as OpenGL
        bmp_data.bmiHeader.biPlanes      = 1;
        bmp_data.bmiHeader.biBitCount    = 24;
        bmp_data.bmiHeader.biCompression = BI_RGB;

        // Read the rectangle out of the BACK buffer...
        glReadBuffer(GL_BACK);
        glReadPixels(w[0], w[1], width, height, GL_BGR_EXT, GL_UNSIGNED_BYTE, m_SwapBitmap);

        // ...and blit it to the window DC. OpenGL's origin is bottom-left,
        // GDI's is top-left, hence the m_iHeight - w[3] flip for yDest.
        SetDIBitsToDevice(m_hdc, w[0], m_iHeight - w[3], width, height,
                          0, 0, 0, height, m_SwapBitmap, &bmp_data, DIB_RGB_COLORS);
    }

    return m_SwapBitmap != NULL;
}

Josep

Nevermind, I’m too old. Have a look at this instead.

Triple buffering would be fine if it would not screw up the SWAP_COPY mode.
So why even use it? No, seriously.
Double-buffering in a 2D sense is drawing stuff to an off-screen bitmap, then simply blitting it to the screen (to avoid tearing).
Triple-buffering in this context is rendering stuff to an off-screen buffer, then (effectively) blitting it to another buffer, and finally blitting it onto the screen.

But given the path this has taken, this isn’t really an OpenGL question at all anymore.

That would be stupid. Normally, in double or triple or whatever buffering, SWAP_EXCHANGE is used.

WGL_ARB_buffer_region can be considered dead. It is much simpler and portable to use a FBO.
http://www.opengl.org/wiki/index.php/GL_EXT_framebuffer_object

Tamlin

Thanks a lot for the pointer to WGL_ARB_buffer_region; that seems to be what I was looking for to save portions of the image.
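
From the extension spec the usage looks something like this (a sketch only; the entry points come from wglGetProcAddress as usual, and x/y/width/height describe the static rectangle):

/* Save a rectangle of the back buffer once, then restore it whenever
   that part of the scene is still valid. */
HANDLE hRegion = wglCreateBufferRegionARB(m_hdc, 0, WGL_BACK_COLOR_BUFFER_BIT_ARB);

/* After rendering the static part of the scene: */
wglSaveBufferRegionARB(hRegion, x, y, width, height);

/* Next frame: restore the saved pixels instead of redrawing them,
   then render only the animated portion on top. */
wglRestoreBufferRegionARB(hRegion, x, y, width, height, x, y);

/* When done: */
wglDeleteBufferRegionARB(hRegion);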

I know what SWAP_COPY was supposed to do. The problem is that when triple buffering is enabled, SWAP_COPY seems to no longer be a bitmap copy; it behaves like a bitmap exchange and totally screws up animations based on portions of the image.

Seems that SetDIBitsToDevice solved my problems.

Josep