Alpha blending with multiple color attachments

I started a topic on this a while back, and now I want to ask if OpenGL3/4 offers any solution to this problem.

I use alpha blending in my deferred renderer for things like roads and decals, before lighting is performed. It works because the depth of the two blended surfaces is the same, and the normals and diffuse colors just get blended. Lighting is performed on the resulting values, and works just fine.
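Roughly, the decal pass looks like this (simplified sketch; the FBO setup and shaders live elsewhere, and the names are just illustrative):

    // Decal pass into the already-filled gbuffer.
    glBindFramebuffer(GL_FRAMEBUFFER, gbufferFBO);

    // Test against the existing depth but don't write it, so the decal
    // sits exactly on the surface it overlaps.
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glDepthMask(GL_FALSE);

    // Standard alpha blending - every enabled attachment gets blended by
    // that attachment's own alpha output, which is where the problem
    // described below comes from.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    drawDecals();   // writes diffuse, normal and emission outputs

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);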

The problem is that to use alpha blending, the fragment shader has to write the blend alpha into the alpha channel of every color attachment.

Here’s the gbuffer format I want to use:

Color0 - RGBA8 (4 bytes)
R - diffuse red
G - diffuse green
B - diffuse blue
A - diffuse alpha

Color1 - RG11B10 (4 bytes)
R - normal x
G - normal y
B - normal z

Color2 - RGBA8 (4 bytes)
R - emission red
G - emission green
B - emission blue
A - specular intensity

Total = 12 bytes

Here’s the gbuffer format I must use:

Color0 - RGBA8 (4 bytes)
R - diffuse red
G - diffuse green
B - diffuse blue
A - diffuse alpha

Color1 - RGBA16F (8 bytes)
R - normal x
G - normal y
B - normal z
A - diffuse alpha

Color2 - RGBA8 (4 bytes)
R - emission red
G - emission green
B - emission blue
A - diffuse alpha

Color3 - RGBA8 (4 bytes)
R - specular intensity
G - extra
B - extra
A - diffuse alpha

Total = 20 bytes

Is there any way to gain more control of blending in OpenGL 3/4 so that the alpha channel doesn’t have to be wasted like this?

The problem is that to use alpha blending, the fragment shader has to write the blend alpha into the alpha channel of every color attachment.

Why are you using the diffuse alpha to blend, for example, normals? Your normals won’t necessarily be normalized after a linear blend.

Also, just because you output RGBA doesn’t mean that the color buffer has to have all four components.

It seems to me that your decals should only be affecting the diffuse color (or possibly the emission), so you should do blended rendering only with those color buffers active that you actually want to change. Just use glDrawBuffers to turn off writing to the attachments you don’t want updated, then don’t write to them in the shader.
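A sketch of what I mean (the attachment indices follow the three-buffer layout you posted):

    // Decal pass: blend only into Color0 (diffuse); leave normals and
    // emission untouched. GL_NONE simply discards whatever the shader
    // writes to those output locations.
    GLenum decalBuffers[] = {
        GL_COLOR_ATTACHMENT0,  // Color0: diffuse, blended
        GL_NONE,               // Color1: normals, untouched
        GL_NONE                // Color2: emission/specular, untouched
    };
    glDrawBuffers(3, decalBuffers);
    drawDecals();

    // Restore the full set for everything else.
    GLenum allBuffers[] = {
        GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2
    };
    glDrawBuffers(3, allBuffers);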

It’s very important that decals affect normals. Imagine a patch of dirt on a terrain. It’s faded around the edges, so it blends with the terrain, but in the center, the dirt’s surface details are quite opaque, and it has its own normal map.

The best alternative I can think of is this:

  1. Render to the first gbuffer layout above.
  2. Copy depth to another FBO and render decals (without MSAA, maybe half-size)
  3. Combine both in the first lighting shader.

It wouldn’t save any memory with a non-MSAA gbuffer, but it would save a lot if the gbuffer were using MSAA and the decal pass wasn’t.
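Step 2 would basically be a depth blit plus a normal blended pass, something like this (sketch; assumes the decal FBO is the same size as the gbuffer, an MSAA source gets resolved by the blit, and a half-size target would need its own downsampling pass instead):

    // Copy the gbuffer's depth into the decal FBO.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, gbufferFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, decalFBO);
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_DEPTH_BUFFER_BIT, GL_NEAREST);

    // Render the decals against the copied depth.
    glBindFramebuffer(GL_FRAMEBUFFER, decalFBO);
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawDecals();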

Except for “Color2”, where you’re combining emissive and specular in one write, I don’t see what the problem is. If your output color buffer doesn’t have an alpha channel, you can still do color blending with it; just don’t read from the destination alpha (since there isn’t one).

However, you can always try ARB_blend_func_extended. What this allows you to do is write two colors for a particular buffer. The blend function can now do blending based on the two source colors and the destination color. So you can do a linear blend between src0 and dest based on src1.alpha.
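Roughly how it looks (untested sketch): the fragment shader declares two outputs for the same draw buffer, and the blend function then pulls its factors from the second one:

    // In the fragment shader (GLSL 330), both outputs go to draw buffer 0:
    //
    //   layout(location = 0, index = 0) out vec4 outColor;  // src0: the color to write
    //   layout(location = 0, index = 1) out vec4 outBlend;  // src1: supplies the factor
    //
    // On the GL side: result = src0 * src1.a + dest * (1 - src1.a)
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC1_ALPHA, GL_ONE_MINUS_SRC1_ALPHA);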

However, using this will cut down the number of draw buffers you can draw to. You’ll have to query GL_MAX_DUAL_SOURCE_DRAW_BUFFERS to find out. I’ve never used it, so I don’t know how much it cuts down on draw buffers. It’s a GL 3.3 core feature.
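The query itself is just:

    GLint maxDualSourceDrawBuffers = 0;
    glGetIntegerv(GL_MAX_DUAL_SOURCE_DRAW_BUFFERS, &maxDualSourceDrawBuffers);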

New info: I tested GL_MAX_DUAL_SOURCE_DRAW_BUFFERS, and for my NVIDIA GT 250, it came out as 1. Which is not much.

I did some checking with Direct3D, and D3D 10 limits dual-source blending to writing to one buffer. OpenGL simply leaves it as a queryable value, but I would guess that it’s probably not going to be above 1. At least, not for D3D10 hardware.

it came out as 1. Which is not much.

Haha, thanks for researching it.

Sorry for restarting the topic, but your findings made me very curious about whether AMD exposes more dual-source draw buffers. It turns out that my HD5770 also supports only one dual-source draw buffer :S

Such a shame… I expected it to be half the normal number of draw buffers, not just one…

After thinking about it a bit, I realized that maybe the best option for you would be the GL_NV_texture_barrier extension, which allows you to render to a currently bound texture. This way you can implement your own blending in the shader.
It is not so straightforward to make it work correctly, but I think it’s worth a try.
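The rough idea (untested sketch, names are illustrative): keep the gbuffer textures attached to the FBO and bound as samplers at the same time, fetch the destination values yourself in the decal shader, and issue a barrier before each decal so the reads see the previous writes:

    // Programmable blending via NV_texture_barrier.
    // The decal shader fetches the current gbuffer values with
    // texelFetch(sampler, ivec2(gl_FragCoord.xy), 0) and combines them
    // however it likes before writing them back out.
    glBindFramebuffer(GL_FRAMEBUFFER, gbufferFBO);
    glBindTexture(GL_TEXTURE_2D, gbufferNormalTexture);  // also a color attachment

    for (int i = 0; i < numDecals; ++i)
    {
        // Flush the texture caches so this draw sees what earlier draws wrote.
        // Only safe as long as a single draw never reads and writes the same pixel twice.
        glTextureBarrierNV();
        drawDecal(i);
    }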