Tradeoffs in off-screen rendering

I’ve got a question that I suspect has a very simple answer. Hopefully someone out there can help me.

I’m working on a problem where it takes a tremendous amount of work to draw my scene. Because of this, I can only afford to do it once. What I want to do is render into an off-screen buffer. When I get an expose event (I’m using GLX and Motif), I would like to copy the image from my off-screen buffer into the visible buffer.

I’m almost certain that what I’m doing now is wrong. Currently, I render into an auxiliary buffer, then copy pixels from there when I get my expose event. The idea was that the auxiliary buffer would be “safe” from getting overwritten by other windows. But I’m pretty sure that is not true. If I understand the GLX documentation correctly, then the color buffer and all its associated ancillary buffers are left in an undefined state after they are occluded by another window. Is that correct?
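For concreteness, here is roughly what I’m doing now (just a sketch: drawScene, width, height, dpy, and win stand in for my real code):

```c
#include <GL/glx.h>

/* Done once, up front: render the expensive scene into an aux buffer. */
glDrawBuffer(GL_AUX0);
drawScene();                 /* placeholder for my actual scene */

/* Done on every expose event: copy the aux buffer to the back buffer. */
glReadBuffer(GL_AUX0);
glDrawBuffer(GL_BACK);
glRasterPos2i(0, 0);         /* assumes a matching ortho projection */
glCopyPixels(0, 0, width, height, GL_COLOR);
glXSwapBuffers(dpy, win);
```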

Assuming that the above approach is wrong, I’m looking for the right way to do this. I’ve looked into Pixmaps and Pbuffers. If I understand correctly, Pixmaps are not guaranteed to use hardware acceleration, which is very important to me. Pbuffers seem to fix that problem, but I think they can also be overwritten at any time. If I specify the GLX_PRESERVED_CONTENTS flag, then this won’t happen, but my pixels may be dumped to system memory. (I can’t tell: will they go to the server’s memory or the client’s memory? Hopefully the former…)
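For what it’s worth, here is what I think the Pbuffer setup would look like (a sketch based on my reading of the GLX 1.3 spec; the attribute values are just guesses for my case):

```c
#include <GL/glx.h>

int fbAttribs[] = {
    GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
    None
};
int nConfigs;
GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy),
                                         fbAttribs, &nConfigs);

int pbAttribs[] = {
    GLX_PBUFFER_WIDTH,  width,
    GLX_PBUFFER_HEIGHT, height,
    GLX_PRESERVED_CONTENTS, True,   /* contents survive eviction, possibly
                                       at the cost of a slow copy back */
    None
};
GLXPbuffer pbuf = glXCreatePbuffer(dpy, configs[0], pbAttribs);
/* ...then glXMakeContextCurrent(dpy, pbuf, pbuf, ctx) to render into it. */
```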

Is my assessment of Pixmaps vs. Pbuffers correct? If so, then it seems like I want to use Pbuffers. But while I was browsing this message board, I came across a thread about GL_EXT_framebuffer_object. These sound useful for off-screen rendering as well. How do they compare to Pixmaps and Pbuffers? And are they vulnerable to having their contents overwritten asynchronously?

In short, what is the best way to solve my problem?

Thanks,
–Ryan

Framebuffer objects (FBOs for short) are the modern way to do off-screen rendering. Because an FBO renders into context-owned resources (textures or renderbuffers) rather than into window-system buffers, the pixel-ownership problem doesn’t apply: I’m pretty sure their contents are guaranteed to survive unless you overwrite them yourself, destroy the GL context, or something along those lines.
Better still, you can render into a texture attachment and then draw that texture on a full-screen quad to refill the window; it is very efficient.
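Something like this (a sketch using the EXT_framebuffer_object entry points; in real code you’d fetch them with glXGetProcAddress or a loader like GLEW, and width, height, and drawScene are placeholders):

```c
#include <GL/gl.h>
#include <GL/glext.h>

GLuint fbo, tex;

/* One-time setup: create a texture and attach it to an FBO. */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
    GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* handle an incomplete framebuffer here */
}

/* The expensive render, done exactly once, goes into the texture. */
drawScene();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);    /* back to the window */

/* On every expose event: redraw the window from the texture.
   Assumes identity modelview and projection matrices. */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
```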

Yep. Even my dusty old NV40 advertises framebuffer_blit and framebuffer_multisample (GL 2.1.2, driver 169.12).

Anything from NV40 up, and possibly earlier cards, supports framebuffer_blit and framebuffer_multisample. I haven’t observed any major slowdowns using a GeForce 6600, so it appears to be hardware accelerated on the NV40 series.
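With framebuffer_blit, the expose handler becomes a straight copy. A sketch, reusing fbo, width, and height from the FBO example above:

```c
/* Copy the FBO's color buffer straight into the window's back buffer. */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo);   /* source: the FBO  */
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);     /* dest: the window */
glBlitFramebufferEXT(0, 0, width, height,             /* source rectangle */
                     0, 0, width, height,             /* dest rectangle   */
                     GL_COLOR_BUFFER_BIT, GL_NEAREST);
```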

The last time I checked, though, my Radeon X1900XT didn’t support those extensions. I don’t know if that’s a hardware limit or just ATI’s notoriously bad OpenGL drivers. I don’t know about the Radeon HD series; I haven’t had a chance to play with them yet.
