FBO and Depth Buffering

Can I use the “default” depth buffer when I want depth buffering for a particular render target and don’t want to create a separate depth buffer for each render target that I use?

If I can, please tell me how.

If I can’t, please tell me the rationale (if you know of any), because in my humble opinion it is absolutely absurd on the ARB working group’s part not to make this part of the FBO specification. Having to create a depth buffer for each and every FBO I create is really a pain in the neck, not to mention the memory overhead for no reason :mad: !!!

You can’t use the default depth buffer with FBOs, but you can reuse a single depth renderbuffer across multiple FBO render targets.
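To make the sharing concrete, here is a rough, untested sketch using the EXT_framebuffer_object entry points; colorTexA, colorTexB and the 512x512 size are just placeholder assumptions, and all attachments of an FBO must have matching dimensions for it to be complete:

GLuint depthRb, fboA, fboB;

/* One depth renderbuffer, created once... */
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 512, 512);

/* ...attached to the first FBO (colorTexA is an existing 512x512 color texture)... */
glGenFramebuffersEXT(1, &fboA);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboA);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTexA, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthRb);

/* ...and to the second FBO as well, so both render targets share the same depth storage. */
glGenFramebuffersEXT(1, &fboB);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboB);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTexB, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthRb);

Just remember to clear the depth attachment when switching between the two FBOs, since they really do write into the same memory.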

While we’re at it, what’s the recommended way if I want the depth buffer semantically shared between on- and offscreen buffers?

Of course I could use a depth texture for the FBO, render a depth only pass and then render the depth texture to the framebuffer using a simple depth replace shader, but I think on some cards this disables early z until the next buffer swap…
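For reference, the depth-replace pass I have in mind would be something like this (only a sketch; depthTex is an assumed sampler bound to the FBO’s depth texture, and the shader would be drawn over a full-screen textured quad):

/* Fragment shader source (GLSL 1.10) for the depth-replace pass: writes the
   stored depth into gl_FragDepth for every fragment of a full-screen quad.
   Writing gl_FragDepth is exactly what is suspected to disable early-z on
   some hardware. */
static const char *depthReplaceFS =
    "uniform sampler2D depthTex;\n"
    "void main()\n"
    "{\n"
    "    gl_FragDepth = texture2D(depthTex, gl_TexCoord[0].xy).r;\n"
    "}\n";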

Another idea that came to my mind is doing it the other way round, that is, a depth-only pass to the screen and then glCopyTexSubImage to transfer the result into the FBO’s depth texture. Seems faster than the first idea, but I haven’t tested either way… Perhaps next week, when I’ve finished my exams :wink:
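The copy variant would boil down to roughly this (untested sketch; depthTex, fbo, winWidth and winHeight are placeholders, and it assumes a window-sized GL_DEPTH_COMPONENT texture plus ARB_depth_texture support):

/* After the depth-only pass into the window’s depth buffer
   (window-system framebuffer bound, i.e. FBO 0): */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, depthTex);  /* window-sized GL_DEPTH_COMPONENT24 texture */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,                /* destination offset in the texture */
                    0, 0,                /* lower-left corner of the source rectangle */
                    winWidth, winHeight);

/* The same texture serves as the FBO’s depth attachment, so the offscreen
   pass tests against the depth that was laid down on screen. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depthTex, 0);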

Anyways, is there any disadvantage or performance penalty in using textures instead of renderbuffers? Because with both these methods I have to use a depth texture instead of a depth renderbuffer.

While we’re at it, what’s the recommended way if I want the depth buffer semantically shared between on- and offscreen buffers?
You don’t. On-screen buffers are not accessible through the FBO interface.

Anyways, is there any disadvantage or performance penalty in using textures instead of renderbuffers?
There could be. I would suggest using a renderbuffer in all circumstances where it is possible. That is, unless you’re actually going to use it as a texture (or do a copy-to-visible-buffer operation), use a renderbuffer.

Using a texture could prove to be expensive, but I can’t really say for sure until I know the actual implementation of renderbuffers, which can vary across hardware.

But what I am really pissed off about is requiring a depth buffer with each FBO. What a waste of time and extra effort :mad: ! A lot of OpenGL extension documentation refers to a particular behavior as being derived from DirectX; maybe they should have looked at DirectX render targets before finalizing this extension :slight_smile: .

Btw, is there any chance that a change can be made in this extension before it gets approved as an ARB extension?

But what I am really pissed off about is requiring a depth buffer with each FBO.
You don’t need an extra depth buffer for each FBO. You only need one depth buffer that is shared among all of your FBOs…

The reason why you can’t share between on- and offscreen buffers is the pixel ownership test. When using FBOs, you’re guaranteed that the pixel ownership test always passes. So if you tried to share the onscreen depth buffer with an FBO, you might run into the problem that some pixels simply don’t exist because of overlapping windows and so on, and then the pixel ownership test couldn’t be guaranteed to succeed…

You don’t. On-screen buffers are not accessible through the FBO interface.
That’s what I meant by “semantically” sharing the buffer. What if I really need the same content in both buffers?

But we’re drifting off topic here… I’m opening a new thread:
http://www.opengl.org/discussion_boards/…t=013561#000000