
Blending a complex plane with itself



Utumno
11-02-2015, 07:53 AM
In my (Android OpenGL ES 3.0-based) app, I need to render a complex, self-obstructing surface with DEPTH_TEST and BLEND enabled:

GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

A video of an example application rendering a fully opaque (alpha = 1.0 everywhere) flat surface with a bubble growing out of it can be seen here:

https://www.youtube.com/watch?v=oIpVoiFUvyw

As you can see, when the bubble reaches maximum height, its top obstructs parts of the surface below it. If you look closely, you will see that only the bottom half of the bubble is rendered correctly; the top half ends up mostly transparent, with the surface behind it showing through opaquely, which is visually incorrect.

Now, I think I know why this happens: it is caused by the order in which the fragments are drawn. The fragments must be drawn from bottom to top. For the bottom half, the background behind it is drawn first, so the result looks correct; for the top half, however, the bubble is drawn first and the background behind it afterwards, so the background fragments actually end up being the SOURCE in the

GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

equation, and therefore end up opaque (even though they are farther from the camera and should have been obstructed by the bubble).
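To see that order dependence concretely, here is a minimal, self-contained Java sketch (plain arithmetic, no GL; the colors are made-up values) of the SRC_ALPHA / ONE_MINUS_SRC_ALPHA equation applied in both draw orders:

```java
public class BlendOrder {
    // One application of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
    // result = srcAlpha * src + (1 - srcAlpha) * dst, per channel.
    public static double[] over(double[] src, double srcAlpha, double[] dst) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++)
            out[i] = srcAlpha * src[i] + (1.0 - srcAlpha) * dst[i];
        return out;
    }

    public static void main(String[] args) {
        double[] surface = {1.0, 0.0, 0.0}; // opaque red: the farther fragment
        double[] bubble  = {0.0, 0.0, 1.0}; // opaque blue: the nearer fragment
        double[] black   = {0.0, 0.0, 0.0}; // cleared framebuffer

        // Correct order: farther fragment drawn first, nearer blended over it.
        double[] good = over(bubble, 1.0, over(surface, 1.0, black));
        // Wrong order: nearer fragment drawn first, farther becomes the SOURCE.
        double[] bad = over(surface, 1.0, over(bubble, 1.0, black));

        System.out.println(java.util.Arrays.toString(good)); // blue wins: [0.0, 0.0, 1.0]
        System.out.println(java.util.Arrays.toString(bad));  // red wins:  [1.0, 0.0, 0.0]
    }
}
```

With both fragments fully opaque, whichever is drawn last simply replaces the other, regardless of which is closer to the camera.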

So I know what happens, but I don't know how to fix it. It looks like when BLEND is enabled, fragments farther from the camera can still end up obscuring fragments that are closer, even though DEPTH_TEST is enabled.

One way to fix it would be to switch off BLENDing, and that indeed makes the bubble render correctly. However, I cannot do that (or I think I cannot?), because there is an additional requirement: the (possibly self-obstructing) surface may itself be rendered on top of another surface, and parts of it can be semi-transparent, in which case I DO want them blended with the surface below.

Actually, rereading my own post, I realized that what I want is the equivalent of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), but where the source is always the fragment closer to the camera (the one with the lower z-value). That way blending would be correct regardless of the order in which fragments are drawn. It seems to me this would be an obvious feature, and something the hardware could implement very cheaply, but after searching for it, it looks like OpenGL simply does not support it...

Utumno
11-02-2015, 01:35 PM
Looks like a technique called 'Depth Peeling' could help here:


1) Render scene with the usual settings (depth testing enabled, depth function LESS, color and depth write enabled), but render only the fully opaque geometry. If opacity is per object, you can handle that by skipping draw calls for non-opaque objects. Otherwise, you will have to discard non-opaque fragments in the fragment shader.
2) Render the non-opaque geometry with the same settings as above, except that color write is disabled.
3) Render the non-opaque geometry again, but this time with depth function EQUAL, color write enabled again, depth write disabled, and using blending.
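The three passes above could be sketched in GLES20 state calls roughly as follows; this is a sketch only, and drawOpaque()/drawTranslucent() are placeholders for your own draw code, not real API:

```java
// Pass 1: opaque geometry only, normal depth-tested rendering.
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LESS);
GLES20.glDepthMask(true);
GLES20.glColorMask(true, true, true, true);
GLES20.glDisable(GLES20.GL_BLEND);
drawOpaque();        // placeholder for your own opaque draw calls

// Pass 2: translucent geometry, depth only (color writes disabled).
GLES20.glColorMask(false, false, false, false);
drawTranslucent();   // placeholder

// Pass 3: translucent geometry again; color writes back on, depth writes
// off, depth func EQUAL so only the nearest translucent fragment per pixel
// passes, and blending enabled.
GLES20.glColorMask(true, true, true, true);
GLES20.glDepthMask(false);
GLES20.glDepthFunc(GLES20.GL_EQUAL);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
drawTranslucent();
```

Note this only blends the single nearest translucent layer correctly; translucent layers behind it are suppressed by the EQUAL depth test rather than blended.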

Any advice?

GClements
11-02-2015, 05:21 PM
Looks like a technique called 'Depth Peeling' could help here:
Depth peeling requires two depth buffers: one for the farthest pixel, one for the nearest. Rendering uses multiple passes (either a fixed number of passes, or until no more fragments are rendered). On the first pass, you determine the nearest fragment; on the next pass, the second nearest; and so on. On each pass, you discard fragments whose depth is less than or equal to any previously-rendered fragment (i.e. those rendered in previous passes).

During each pass, the "far" depth buffer is a normal depth buffer, cleared to the far plane then used for depth tests and updated by depth writes with a depth function of GL_LESS. The near buffer is a texture which is read by the fragment shader and compared against the current fragment's depth; only fragments farther than the value from the near buffer are drawn. At the end of each pass, the two depth buffers are swapped so that the resulting "far" depth buffer (containing the depth of the closest fragment which was actually rendered) becomes the "near" buffer for the following pass.
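The per-pixel mechanics can be simulated without any GL at all. Here is a hypothetical Java sketch that "peels" an unsorted list of fragments covering one pixel, one layer per pass, using exactly the near/far comparison described above:

```java
import java.util.ArrayList;
import java.util.List;

public class DepthPeelSim {
    // frags: unsorted {depth, shade} pairs covering one pixel.
    // Returns shades in the order depth peeling extracts them (front to back).
    public static List<Double> peel(double[][] frags) {
        double near = Double.NEGATIVE_INFINITY; // "near" buffer: in front of everything
        List<Double> layers = new ArrayList<>();
        while (true) {
            // One pass: GL_LESS test against a "far" buffer cleared to the far
            // plane, discarding anything at or in front of the "near" value.
            double far = Double.POSITIVE_INFINITY;
            double shade = 0;
            for (double[] f : frags) {
                if (f[0] <= near) continue;        // peeled in an earlier pass
                if (f[0] < far) { far = f[0]; shade = f[1]; }
            }
            if (Double.isInfinite(far)) break;     // nothing survived: done
            layers.add(shade);
            near = far; // swap: this pass's "far" becomes the next pass's "near"
        }
        return layers;
    }

    public static void main(String[] args) {
        double[][] frags = {{0.8, 1}, {0.2, 2}, {0.5, 3}};
        System.out.println(peel(frags)); // [2.0, 3.0, 1.0], i.e. front to back
    }
}
```

Each iteration of the while loop corresponds to one rendering pass; the fragment shader's comparison against the near texture is the `f[0] <= near` discard.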

The result is that the "layers" of the translucent geometry are rendered from front to back. Each layer has to be composited *under* the previous layer. The final result has to be composited *over* the opaque geometry.
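That compositing can be checked with plain arithmetic. A sketch (made-up layer colors, straight alpha) showing that front-to-back "under" accumulation plus a final "over" onto the opaque geometry matches ordinary back-to-front "over" blending:

```java
public class UnderCompositing {
    // Back-to-front reference: repeated "over" (a*src + (1-a)*dst), nearest last.
    public static double[] backToFront(double[][] layersFrontToBack, double[] opaque) {
        double[] dst = opaque.clone();
        for (int k = layersFrontToBack.length - 1; k >= 0; k--) {
            double[] l = layersFrontToBack[k]; // {r, g, b, a}
            for (int i = 0; i < 3; i++)
                dst[i] = l[3] * l[i] + (1 - l[3]) * dst[i];
        }
        return dst;
    }

    // Front-to-back: accumulate each layer *under* the ones already drawn
    // (premultiplied color plus coverage), then composite over the opaque geometry.
    public static double[] frontToBack(double[][] layersFrontToBack, double[] opaque) {
        double[] accum = {0, 0, 0};
        double coverage = 0;
        for (double[] l : layersFrontToBack) {
            for (int i = 0; i < 3; i++)
                accum[i] += (1 - coverage) * l[3] * l[i];
            coverage += (1 - coverage) * l[3];
        }
        double[] out = new double[3];
        for (int i = 0; i < 3; i++)
            out[i] = accum[i] + (1 - coverage) * opaque[i];
        return out;
    }

    public static void main(String[] args) {
        double[][] layers = {{0, 0, 1, 0.5}, {0, 1, 0, 0.5}}; // nearest first ({r,g,b,a})
        double[] opaque = {1, 0, 0};                          // opaque red behind everything
        System.out.println(java.util.Arrays.toString(backToFront(layers, opaque)));
        System.out.println(java.util.Arrays.toString(frontToBack(layers, opaque)));
        // Both print [0.25, 0.25, 0.5]
    }
}
```

The accumulated coverage plays the role of destination alpha; once it reaches 1.0, later (farther) layers contribute nothing, which is what makes truncating the front-to-back pass count relatively harmless.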

Alternatively, you can use the same process but from back to front, which can make the compositing simpler. However, in that case, you need to use as many passes as there are layers. With the front-to-back approach, limiting the number of passes discards layers which are behind several other translucent layers (and thus are typically barely visible). But when rendering back-to-front, the front-most layers would be discarded.

If you don't need correct blending between the layers of the translucent geometry, a simpler solution which may be adequate for some purposes is to render the translucent geometry to a separate framebuffer without blending, then blend the result over the opaque geometry.