Optimising OpenGL in Blender 2.50 - swap buffers

Hi there,

Some of you might have heard of Blender ( http://www.blender.org ), the cross-platform, open-source 3D modelling/animation/rendering/real-time game engine application. Blender uses a 100% OpenGL front-end for drawing everything, from 2D UI controls, to the actual 3D viewport rendering.

For the next major 2.50 release, the OpenGL-based front-end UI, as well as the back-end, is getting a major re-write, in order to allow for a highly interactive and fully customisable infrastructure for future developments.

For the new UI, the developers want to implement OpenGL in a very clean and efficient way from the start.

There is currently an issue with the UI flickering on some graphics cards ( in this case, ATI and integrated Intel graphics cards on Windows XP ).

A fix was created for the 2.50 branch, based on the previous versions, which effectively draws everything twice ( not very efficient, but it does work ) - the developers are actively looking for a more efficient solution to implement.

You can view the svn patch that fixes the issue here…

http://lists.blender.org/pipermail/bf-blender-cvs/2009-January/017657.html

A bit more info, as well as a link to info on why the older version of Blender used this “fix” method of drawing, can be viewed here…

http://lists.blender.org/pipermail/bf-taskforce25/2009-January/000271.html

If any of you have thoughts or suggestions, this is the main mailing list for active developers - I’m sure they would enjoy finding out more about the problem, and how to solve it.

http://lists.blender.org/mailman/listinfo/bf-committers

Alternatively, you could post some ideas regarding what the issue might be, and any potential solutions to try out, in this thread, and I’ll pass the link on to the relevant Blender developers.

Many thanks in advance to anyone who can give any assistance…
Mal

o The existing design relies in a few but critical places on
drawing to the front buffer, which is not commonly used in
OpenGL and in some cases causes large slowdowns (ATI cards).
A new system should never draw in the front buffer.

Indeed. And with modern composited desktops (Vista Aero, X Compiz), it behaves even worse.

I can’t view the svn patch here, the server seems too slow.

The idea is to draw everything on the backbuffer, then swap when it has to actually be seen.
If you want to easily update the UI without redrawing the 10-million-triangle model displayed in the view, you have to render the UI and the views to separate framebuffers ( FBO is preferred, then pbuffer; as a last resort use only the backbuffer and do glCopyTexSubImage to save the current framebuffer to a texture ).
Then composite by blending the multiple textures as 2D elements.
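A rough sketch of the separate-framebuffer idea, using the 2009-era EXT_framebuffer_object entry points (the variable and function names here are made up for illustration, and real code must check framebuffer completeness):

```c
/* Sketch: one offscreen colour buffer for the scene; a second one for
 * the UI overlay would be created the same way. Names are hypothetical. */
GLuint scene_fbo, scene_tex;

void create_scene_buffer(int w, int h)
{
    /* colour texture that the scene will be rendered into */
    glGenTextures(1, &scene_tex);
    glBindTexture(GL_TEXTURE_2D, scene_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* FBO with that texture as its colour attachment */
    glGenFramebuffersEXT(1, &scene_fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, scene_fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, scene_tex, 0);
    /* a depth renderbuffer should be attached here for 3D views, then
     * verify glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
     * == GL_FRAMEBUFFER_COMPLETE_EXT */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
```

Rendering with `scene_fbo` bound goes to the texture instead of the window; compositing is then just drawing `scene_tex` (and the UI texture over it) as 2D quads to the backbuffer.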

Hi all,

To add a more technical version of Mal’s description:

In Blender 2.50 we indeed dropped all front buffer drawing, and will also implement internal “compositing” for live updates, menus, and more fancy stuff. We will investigate usage of pbuffers or FBO for this, but will have to keep in mind that not all cards support this well.

Anyhoo, that’s not the issue. What we cannot solve yet is how to cope with the swap-exchange versus swap-copy issue. The latter is much easier to manage for redrawing; swap-exchange requires keeping track of stale back-buffers after a swap, requiring extra redraws ( not really “draw everything twice” :) ).

Question; is there either a reliable way to enforce swap-copy behaviour, or at least a reliable way to get the information from the system which swap method is used?

Thanks,

-Ton-
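One partial answer to the swap-method question, at least on Windows: the WGL_ARB_pixel_format extension exposes a WGL_SWAP_METHOD_ARB attribute that can be requested when choosing a pixel format, or queried afterwards. Be warned that drivers treat it as a hint, not a guarantee, so this is a sketch of the mechanism rather than a reliable solution (`hdc` is assumed to be a valid device context):

```c
/* Ask for swap-copy semantics when picking the pixel format
 * (requires the WGL_ARB_pixel_format extension to be present). */
int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_SWAP_METHOD_ARB,    WGL_SWAP_COPY_ARB,  /* request copy, not exchange */
    0
};
int format;
UINT count;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count);

/* ...or query what an existing format reports: */
int query = WGL_SWAP_METHOD_ARB, method;
wglGetPixelFormatAttribivARB(hdc, format, 0, 1, &query, &method);
/* method is WGL_SWAP_EXCHANGE_ARB, WGL_SWAP_COPY_ARB
 * or WGL_SWAP_UNDEFINED_ARB */
```

There is no equivalent cross-platform query, which is part of why the off-screen route discussed below sidesteps the problem entirely.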

The idea is not to draw on front- and backbuffer. You draw to offscreen render buffers (fbo, pbuffer). Let’s say one for the scene and one for the UI overlay.

Then the only thing you have to do on every frame is

  • if scene changed, redraw scene buffer
  • if UI changed, redraw UI buffer
  • copy* the scene buffer to the backbuffer
  • blend* with the UI overlay buffer
  • call SwapBuffers

This way you don’t have to deal with swap-copy or swap-exchange at all.

*) Copy and blend both buffers in one step by drawing them to the backbuffer as 2D textures. That’s what ZbuffeR meant.
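Sketched in code, the loop above might look like this (2009-era fixed-function GL; the helper functions, dirty flags, and texture/FBO names are all hypothetical, and the FBOs are assumed to already exist):

```c
/* Per-frame composite: scene and UI live in offscreen buffers,
 * so the back buffer is fully rewritten every frame and the
 * swap-copy/swap-exchange distinction no longer matters. */
void frame(void)
{
    if (scene_dirty) { draw_scene_to_fbo(scene_fbo); scene_dirty = 0; }
    if (ui_dirty)    { draw_ui_to_fbo(ui_fbo);       ui_dirty = 0;    }

    /* composite to the back buffer as 2D textured quads */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glDrawBuffer(GL_BACK);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);

    glDisable(GL_BLEND);                      /* "copy" step */
    glBindTexture(GL_TEXTURE_2D, scene_tex);
    draw_fullscreen_quad();

    glEnable(GL_BLEND);                       /* "blend" step */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture(GL_TEXTURE_2D, ui_tex);
    draw_fullscreen_quad();

    SwapBuffers(hdc);  /* or glXSwapBuffers / aglSwapBuffers */
}
```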

CatDog

Question; is there either a reliable way to enforce swap-copy behaviour, or at least a reliable way to get the information from the system which swap method is used?

Unfortunately I have never heard of a reliable way.
A not-so-reliable way would be to draw stuff, swap, draw different stuff, swap, then read the backbuffer to check the pixel values and try to guess which swap method was used…
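That probe could look roughly like this (a sketch assuming an active double-buffered context and a Windows `SwapBuffers(hdc)` call; again, a guess, not a guarantee):

```c
/* Clear the back buffer to a known colour, swap, clear to a different
 * colour, swap again, then read one back-buffer pixel. With swap-copy
 * the back buffer keeps its last contents (green); with swap-exchange
 * you get the buffer from two swaps ago (red). */
unsigned char px[3];

glClearColor(1.0f, 0.0f, 0.0f, 1.0f);   /* frame 1: red */
glClear(GL_COLOR_BUFFER_BIT);
SwapBuffers(hdc);

glClearColor(0.0f, 1.0f, 0.0f, 1.0f);   /* frame 2: green */
glClear(GL_COLOR_BUFFER_BIT);
SwapBuffers(hdc);

glReadBuffer(GL_BACK);
glReadPixels(0, 0, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, px);
/* green => swap-copy, red => swap-exchange, anything else => undefined */
```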

I have seen nvidia drivers switch from one type of swap to the other depending on the number of GL windows on screen, whether they were overlapping or not, fullscreen or not, etc.

For cards without FBO/pbuffer support, as I said you can use glCopyTexSubImage2D. It is probably a bit slower, as an extra copy is involved, but this stays on the hardware so no worries. The main problem is that parts of the window covered by another window are ‘undefined’, even on the backbuffer (so moving the render window on top of the Blender main window can leave some artifacts).
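The fallback path could be sketched like this (`scene_tex`, `draw_scene()`, and the window dimensions are hypothetical; the texture is assumed to have been allocated at window size with glTexImage2D):

```c
/* No-FBO fallback: draw the scene to the back buffer once, snapshot it
 * into a texture, then re-use that texture for cheap UI-only redraws. */
glDrawBuffer(GL_BACK);
draw_scene();

glBindTexture(GL_TEXTURE_2D, scene_tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,               /* dest offset in the texture  */
                    0, 0, width, height /* src rect in the back buffer */);

/* later frames: draw scene_tex as a quad, overlay the UI, SwapBuffers */
```

The copy happens entirely on the card, but since it reads the back buffer, any region occluded by another window may already be undefined at copy time.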

BTW Blender is a great package, I would be glad to help more on the OpenGL front.

Thanks for the answers!

The lazy dev inside me probably hoped for a swap-copy solution, but it seems we should go for off-screen rendering right away. Are there public reviews/reports of successfully implementing such a triple-buffer method in cross-platform programs? In my experience, OpenGL features always give unexpected surprises on certain hardware/driver/OS combos…

Such a public report would be nice indeed, but I am not aware of anything like that.

Maybe it is possible for the Blender team to develop a small demonstrator with minimal features, such as some pulldown menus, multiple views, and a big mesh in edit mode, to verify display refresh performance?
Posting on these forums a link to Windows+Mac executables plus easily compilable source for Linux would allow a wide range of OpenGL developers to report early feedback.
That would become a nice “public report” on how to do modern application GUIs within OpenGL.

Slightly off-topic, the only problem I had (and still have) with the Blender redrawing UI, is the separate render window.
During rendering, it does not refresh after being covered, and during looong renders with ambient occlusion and such, it is a pain to wait until the end of the render whenever another window goes over it. Decoupling the render and the refresh would be great, and quite easy with the off-screen render route.

> Maybe it is possible for the Blender team to develop a small
> demonstrator with minimal features, such as some pulldown menus,
> multiple views, a big mesh in edit mode to verify display
> refresh performance ?

You can download the latest builds of various versions of Blender here…

http://www.graphicall.org/builds/index.php

If you search for builds beginning with “Blender 2.5”, you should be able to find Windows / OSX / Linux builds from the last week or so ( the OS logo is shown as part of the name ).

Mal