Wasteful alpha blending with MRT

I was wondering if there is any way to set an MRT to blend using the alpha channel of a specific output. I have a deferred shader that stores diffuse, normal, emission, specularity, and some other information. This is packed pretty tightly into three RGBA8 textures.

With alpha blending, each output blends using its own alpha channel, so every output has to be given the same alpha value. This is pretty wasteful and forces me to render to an additional texture:
0: diffuse.r, diffuse.g, diffuse.b, diffuse.a
1: normal.x, normal.y, normal.z, diffuse.a
2: emission.r, emission.g, emission.b, diffuse.a
3: specular, materialflags, <nothing>, diffuse.a
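
To illustrate, the blend state in question is just the standard one (a simplified sketch, not my exact code):

    /* One non-indexed blend func applies to every color attachment, and
       GL_SRC_ALPHA means each attachment blends with its own alpha channel.
       So to blend all four outputs by the decal's coverage, the shader has
       to copy diffuse.a into the alpha of every single output. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);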

Is there any way to force the GPU to blend using the alpha value of my diffuse output?

Your problem is a conceptual one. Namely, that you want to do true blending while building your G-buffers.

Conceptually, what does it mean to do a 50/50 blend between two objects? It should mean that the light you see at that pixel comes 50% from one object and 50% from another.

If you did a 50/50 blend between two normals before doing lighting, you would end up with a normal that, after renormalization, is neither one nor the other. Your lighting computation will therefore not be a 50/50 mix; it will be some other value entirely, one that represents neither surface.

Emission is even worse, since the emitted light should be additively blended between the two objects, not linearly interpolated. You can use the alpha to reduce the amount of emission from a surface, but the actual blend equation ought to be addition, not a linear blend.

In short, you can’t do blending this way. If you’re applying some kind of decal, then you’re just replacing some of the parameters. But if you’re doing real blending between objects, you have to do that after the deferred pass, after you compute the lighting, in order to get reasonable results.

I understand the renderer isn’t going to perform scientifically accurate optics calculations, but it will produce what people want to see. A normal that is the average of two surfaces is fine. The edge case of an emissive decal on top of an emissive surface is so unlikely and unimportant it’s not even worth considering.

Rendering decals after lighting would require a second pass for lighting, something I definitely don’t want to do.

I’m fairly certain this is impossible, but every once in a while someone suggests a surprising solution, so I thought it was worth checking.

A normal that is the average of two surfaces is fine.

No, it really isn’t. We’re not talking about some kind of rounding error or minor inaccuracy. We’re talking about something that is out-and-out wrong and makes absolutely no sense in terms of lighting. While diffuse reflectance may be linear, specular reflectance is most assuredly not.

The edge case of an emissive decal on top of an emissive surface is so unlikely and unimportant it’s not even worth considering.

And yet, that’s the case that’s trivially easy to solve. You change the blend func/equation on that particular output, setting it to be additive, and doing the multiply with the alpha in your shader.
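
Something like this, say (a sketch assuming the emission buffer is color attachment 2 and that GL 4.0’s per-buffer blend functions are available):

    /* Additive blending on the emission attachment only: dst = dst + src.
       In the fragment shader, pre-multiply the emission by the decal's
       alpha before writing it out (e.g. emissionOut.rgb *= decalAlpha). */
    glEnablei(GL_BLEND, 2);
    glBlendFunci(2, GL_ONE, GL_ONE);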

Rendering decals

Decals don’t tend to be translucent; they actively replace what is under them. So you shouldn’t need blending at all, even at their edges.

require a second pass for lighting, something I definitely don’t want to do.

This is how deferred renderers work. Transparent objects happen in a later pass. Everyone else’s deferred renderer does it.
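
In outline (a hypothetical frame structure; every function name here is a placeholder):

    renderOpaquesToGBuffer();   // geometry pass: fill the G-buffers
    runLightingPass();          // read the G-buffers, light the opaque scene
    renderTranslucents();       // forward-shade and blend on top, per object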

I’m fairly certain this is impossible, but every once in a while someone suggests a surprising solution, so I thought it was worth checking.

If you absolutely insist on doing this, you can always do the blending manually in your shader. This would be done by using the destination images as both textures and framebuffer-attached images simultaneously, so you’ll need to employ either ping-ponging or a texture barrier.
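
The texture-barrier route would look roughly like this (a sketch; the draw functions and handles are placeholders, and glTextureBarrier assumes GL 4.5 or ARB_texture_barrier):

    /* The G-buffer texture stays attached to the FBO and is also bound for
       sampling, so the decal shader can fetch the destination texel, blend
       in the shader, and write the result back out. */
    glBindFramebuffer(GL_FRAMEBUFFER, gbufferFBO);
    glBindTextureUnit(0, gbufferTex);   // the same texture attached to gbufferFBO
    drawOpaques();                      // fills the G-buffer
    glTextureBarrier();                 // makes those writes visible to fetches
    drawDecals();                       // read-modify-write per fragment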

The normalized average of (1,0,0) and (0,1,0) is ~(0.707, 0.707, 0). It’s only a problem if the two vectors point directly away from each other. I can live with that.

Decals don’t tend to be translucent; they actively replace what is under them. So you shouldn’t need blending at all, even at their edges.

Just doing a discard is another possibility I considered, but I don’t think my users will see it that way.

This is how deferred renderers work. Transparent objects happen in a later pass. Everyone else’s deferred renderer does it.

I mean that I want to avoid two lighting passes. We have a pixel discard that simulates transparency pretty effectively and retains correct lighting both on the “transparent” surface and on what appears to lie “beneath” it (though nothing really does).

If you absolutely insist on doing this, you can always do the blending manually in your shader. This would be done by using the destination images as both textures and framebuffer-attached images simultaneously, so you’ll need to employ either ping-ponging or a texture barrier.

Yes, this is what I would do if I were not using MSAA textures. I do a lot of ping-ponging in the post-processing steps after lighting, but don’t want to create more big multisample textures before lighting.

I find some of your design choices here to be… confusing.

On the one hand, you consider blending normals to be an acceptable loss of accuracy. But on the other hand, you seem to be perfectly fine with the visual “accuracy” of stipple-based transparency. Discarding with decals is something you don’t think your users will accept, but you think that they’ll be willing to suffer the performance pain of doing multisampling in a deferred renderer.

It should be noted that most other deferred renderers don’t do what you’re doing. Most of them don’t use multisampling (preferring fakery to true antialiasing). They do use multiple lighting passes, one for opaque and one for translucent. They don’t use blending on decals. And so forth.

It just seems strange that you’ve made a lot of design choices that are very much against the grain for the industry.

Here’s the solution I came up with, which doesn’t require changing the original layout of the MRT. The original design just happened to work out favorably:

0: diffuse.r, diffuse.g, diffuse.b, diffuse.a
1: normal.x, normal.y, normal.z, specular
2: emission.r, emission.g, emission.b, materialflags

Alpha blending is used on outputs 0 and 2. Using glColorMaski(), alpha writes are disabled on output 2. (The material flags value is a bit field, so blending it would make no sense.)

Blending is disabled for output 1. This needs to be done because the decal shader reads the screen normal and uses it to assign texture coordinates based on the major direction the pixel faces, which allows decals to map well onto any surface regardless of orientation. Since we have to do the blend manually in the shader anyway, we can just mix the specular value at the same time according to the diffuse texture’s alpha channel.
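
The per-attachment state for the decal pass then ends up roughly like this (a simplified sketch; the decal shader outputs diffuse.a in output 2’s alpha purely to drive the blend, since the mask stops it from overwriting the stored flags):

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnablei(GL_BLEND, 0);    // diffuse: ordinary alpha blending
    glDisablei(GL_BLEND, 1);   // normal + specular: mixed manually in the shader
    glEnablei(GL_BLEND, 2);    // emission: blends by the alpha the shader outputs
    glColorMaski(2, GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);  // never write the flags channel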

So we only have to create one extra 2DMS texture and this handles decals that blend diffuse, normal, specular, and emission values.
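
One way to get that extra texture is a straight copy of the normal attachment before the decal pass (a sketch with placeholder names; glCopyImageSubData needs GL 4.3):

    /* Snapshot the multisampled normal buffer so the decal shader can
       sample the scene normals while attachment 1 is being written. */
    glCopyImageSubData(gNormalTex,  GL_TEXTURE_2D_MULTISAMPLE, 0, 0, 0, 0,
                       gNormalCopy, GL_TEXTURE_2D_MULTISAMPLE, 0, 0, 0, 0,
                       width, height, 1);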

The results are shown in the two attached screenshots.

Thanks for the tip on separate blend equations; I did end up using that.