transferring depth data into / out of FBOs

A limitation of FBOs is that you can’t share buffers between an FBO and the onscreen framebuffer. With color data this is no problem, because I can just draw a textured quad and have my copy in the other buffer.

But what if I need the same depth data in my onscreen and offscreen buffers? For example, when doing deferred shading I’d like the depth buffer produced in the initial pass to be copied into the window’s framebuffer to help cull light source fragments…

I can only think of three solutions to this problem:

  1. use two depth-only passes, one for each buffer
  2. render the depth-only pass on screen and use a texture instead of a renderbuffer, then copy the data with glCopyTexSubImage (see the sketch right after this list)
  3. render the depth-only pass to a texture and use a quad with a simple depth-replacing fragment program to fill the default depth buffer and/or the renderbuffer
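For solution 2, the copy itself would be something like this (untested sketch; width/height stand for the window size, and depthTex is a GL_DEPTH_COMPONENT texture that would later serve as the FBO’s depth attachment):

```c
/* create a depth texture once, instead of a depth renderbuffer */
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* ... render the depth-only pass into the window ... */

/* copy the window's depth buffer into the texture;
   for depth textures glCopyTexSubImage2D reads from the depth buffer */
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
```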

I’d like to avoid solution 1, since it would mean an additional geometry pass, and that somewhat defeats the idea of deferred shading…

Solution 2 has the problem that I’d have to use a depth texture instead of a depth renderbuffer for all my FBO rendering. Someone mentioned in another thread that this may slow rendering down.
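Just to be explicit about what that means: the depth texture would be attached in place of the renderbuffer, roughly like this (sketch, assuming EXT_framebuffer_object; fbo and depthTex as above):

```c
/* attach the depth texture to the FBO in place of a depth renderbuffer */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* handle the incomplete framebuffer */
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```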

And finally, solution 3 uses depth replace, and AFAIK a depth-replacing shader disables early Z not only while the shader is active but until the next buffer swap, so performance would be even worse than with solution 2…
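For reference, the fragment program in solution 3 would be little more than a depth replace (GLSL sketch; depthTex is assumed to be the depth texture from the FBO pass, with depth compare mode disabled so it samples as plain values):

```glsl
uniform sampler2D depthTex;

void main()
{
    // write the stored depth straight into the destination depth buffer
    gl_FragDepth = texture2D(depthTex, gl_TexCoord[0].xy).r;
}
```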

Does anyone have a better idea?

Why would you need a depth buffer on-screen at all? I think you should do all of your rendering in FBOs, and your real window should only have color planes! This way, when you are done, just copy/render your color buffer into the window…
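Something like this at the end of the frame would do (untested, fixed-function sketch; colorTex stands for whatever texture is attached as the FBO’s color attachment):

```c
/* back to the window's framebuffer */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

/* draw the FBO's color texture as a fullscreen quad */
glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity();

glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, colorTex);

glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
glEnd();

glMatrixMode(GL_MODELVIEW); glPopMatrix();
glMatrixMode(GL_PROJECTION); glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glEnable(GL_DEPTH_TEST);
```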

That wouldn’t be a good idea IMO… for several reasons:

  1. An FBO might be slower than the on-screen buffer, depending on the driver implementation. Almost all drivers make sure that normal on-screen buffers are placed in the most ideal memory locations possible!

  2. Copying the data at the end can cause a performance hit.

  3. You can create an on-screen buffer of virtually any resolution, whereas with an FBO you will most probably have to live with a power-of-two surface, because there is poor or no support for rectangular (non-power-of-two) FBO surfaces. So you would either be wasting extra memory (making it the next power of two larger than the screen size) or sacrificing quality (making it smaller than the screen).

  4. A square surface (be it larger or smaller), when copied to a rectangular surface, can cause precision artifacts in some areas due to the filtering of samples.

Regards,
Zul

Mhh… I’d have to test the performance of this.

I can imagine enough situations where I have to do postprocessing effects anyway, for example when doing HDR rendering, so it’s not really a problem.

Although it’s not going to be easy to integrate this into my engine transparently… Especially the abstraction of different RTT methods :wink: