View Full Version : read FBO depth values in fragment shader? [solved]



karx11erx
11-23-2007, 09:20 AM
How do I read depth values from an FBO's depth buffer in a fragment shader? Afaik the FBO's depth buffer contains depth comparison values, not depth values.

k_szczech
11-23-2007, 10:58 AM
1. You can't read any value from an FBO in a fragment shader. You can only read from textures.
2. You cannot read from a texture that is bound to the FBO you are currently rendering to.
3. The depth buffer contains depth values, not depth comparison results.
4. Reading from a depth texture with the shadow2D function will return a comparison result.
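The difference between points 3 and 4 can be illustrated with a small GLSL sketch (GLSL 1.10-era syntax; the uniform names are made up, and which behavior you get also depends on the texture's GL_TEXTURE_COMPARE_MODE matching the sampler type):

```glsl
uniform sampler2D depthTex;        // depth texture with GL_TEXTURE_COMPARE_MODE = GL_NONE
uniform sampler2DShadow shadowTex; // depth texture with GL_COMPARE_R_TO_TEXTURE

void main()
{
    // Returns the stored depth value itself in the r component.
    float depth = texture2D(depthTex, gl_TexCoord[0].st).r;

    // Returns a comparison result (0.0 or 1.0), not the depth:
    // the p texture coordinate is compared against the stored depth.
    float lit = shadow2D(shadowTex, gl_TexCoord[0].stp).r;

    gl_FragColor = vec4(depth, lit, 0.0, 1.0);
}
```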

karx11erx
11-24-2007, 06:31 PM
Just tell me why it works for me then. I have bound a texture as GL_DEPTH_ATTACHMENT to my FBO and I can read that texture in my fragment shader while the FBO is bound, and I get depth values from that texture.

Maybe that's not according to FBO and whatever specs, but it works on two different ATI cards (Radeon 9800 and X1900 XT).

If there is however a proper way to read depth values from an FBO's depth texture while the FBO is bound and is a render target, I'd like to know how. Copying the depth buffer is slow.

Komat
11-24-2007, 07:37 PM
Just tell me why it works for me then. I have bound a texture as GL_DEPTH_ATTACHMENT to my FBO and I can read that texture in my fragment shader while the FBO is bound, and I get depth values from that texture.

It might appear to work in some situations and on some hw, but you have no guarantee that it will not break with a different driver, hw or rendering state. The specification explicitly states that the values of such fragments, and of texture samples done for them, are undefined in this case.

The hw and the driver do not enforce synchronization between texture caches and buffer writes, so in a bad case you might end up sampling a mix of old and new values. Additionally, there is the possibility of hw which is physically unable to sample from the framebuffer during rendering (e.g. a tile based renderer), so you might end up with completely bogus data. Even an implementation which turned all such fragments red would be perfectly valid.



Maybe that's not according to FBO and whatever specs, but it works on two different ATI cards (Radeon 9800 and X1900 XT).

You should avoid anything which is marked as undefined by the specification, even if it appears to work. In the special case of a program designed to run on one specific fixed configuration, you might get away with relying on the undefined behavior. In any other situation you should avoid it like the plague.

I think that programs relying on undefined behavior are the most common cause of the "Bad driver X, it breaks program Y" messages seen in various forums, which in turn forces driver writers to waste time creating application specific hacks to compensate.

karx11erx
11-25-2007, 04:18 PM
Well, I am not claiming it should or would always work, but you claimed it cannot work, while it does.

What I am really interested in is a method where I can read depth information from the current Z-Buffer and use it in a fragment shader for writing to the current draw buffer, or the fastest possible workaround.

Currently I am doing 3 render passes: opaque geometry and objects, transparent geometry and objects, and coronas. The coronas should of course go to the same draw buffer.

Komat
11-25-2007, 06:11 PM
If your use case can be described by the following pseudocode


DrawSomeGeometry()
CaptureDepthBufferIntoTexture()
DrawGeometryUtilizingCapturedDepthTexture()

then you might try outputting the depth into an ordinary texture during DrawSomeGeometry() and using that texture later during DrawGeometryUtilizingCapturedDepthTexture(). This way you will have two copies of the depth: one in the depth buffer for hw tests, and one in an ordinary texture for use by the shader.

For the output you can use multiple draw buffers, if supported by your card (I have never used them, so I do not know how fast they are), or, if DrawSomeGeometry() contains a pass which currently does not modify the color buffer (e.g. a depth only pass), you can write the depth there (losing possible double speed depth fill).
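With the multiple draw buffers variant, writing the depth into an ordinary texture during DrawSomeGeometry() might look like this in the fragment shader (a sketch assuming a glDrawBuffers setup with the color texture on attachment 0 and a depth-holding texture, e.g. a float format, on attachment 1; the names are illustrative):

```glsl
uniform sampler2D diffuseTex; // made-up material texture

void main()
{
    // Ordinary color output to the first attachment.
    gl_FragData[0] = texture2D(diffuseTex, gl_TexCoord[0].st);

    // Write window-space depth into the second color attachment
    // so a later pass can sample it from an ordinary texture.
    gl_FragData[1] = vec4(gl_FragCoord.z);
}
```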

karx11erx
11-26-2007, 07:10 AM
"CaptureDepthBufferIntoTexture()" is what I want to avoid, as it just kills the framerate.

Dark Photon
11-26-2007, 07:51 AM
"CaptureDepthBufferIntoTexture()" is what I want to avoid, as it just kills the framerate.
That's one approach. I'm not sure how fast glCopyTexImage2D( ...GL_DEPTH_COMPONENT##...) is with a MSAA/SSAA/CSAA framebuffer, though. Try it and see.

Another approach is to do a separate draw pass that renders only the depth into a single-sampled FBO depth texture, then bind that depth texture for your main rendering pass.

It may or may not be a win, but at least then the cost isn't dependent on which window AA scheme is being used.

karx11erx
11-26-2007, 07:57 AM
I tried glCopyTexSubImage2D, and it is slow.

What I figure I could do is render the scene to an FBO, unbind that FBO, render the coronas to another FBO while sampling the first FBO's depth texture, then rebind the first FBO and render the corona texture over it.
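That sequence could be sketched roughly as follows (EXT_framebuffer_object-era calls; this is not runnable on its own, and identifiers like sceneFbo, coronaFbo, and sceneDepthTex are made up):

```c
/* Pass 1+2: opaque and transparent geometry into the scene FBO. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sceneFbo);
DrawScene();

/* Pass 3: coronas into a second FBO. Sampling sceneDepthTex (the
   first FBO's depth attachment) is legal here, because the scene
   FBO is no longer the render target. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, coronaFbo);
glBindTexture(GL_TEXTURE_2D, sceneDepthTex);
DrawCoronas();

/* Composite: rebind the scene FBO and blend the corona texture over it. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sceneFbo);
glBindTexture(GL_TEXTURE_2D, coronaColorTex);
DrawFullscreenBlendQuad();
```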