Render for video, problem with blending

I have an application which renders various graphical elements using OpenGL and then transfers the rendered data to a video card that generates a color and a key video signal used in broadcasting. The color is blended with other video signals in the mixer box based on the key. If the key is 1, my graphics color is used; if the key is 0, the other video signal is used; in between, they are blended.
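Roughly, the keyer in the mixer box does something like the following (a Python sketch of the math only, not the actual hardware; the function name and values are mine):

```python
# Sketch of a linear keyer: mixes the graphics ("fill") signal with
# another video source per channel, steered by the key signal.
def key_mix(fill, video, key):
    """key = 1.0 -> pure graphics, key = 0.0 -> pure video, else a blend."""
    return tuple(key * f + (1.0 - key) * v for f, v in zip(fill, video))

red = (1.0, 0.0, 0.0)    # graphics color
blue = (0.0, 0.0, 1.0)   # incoming video signal

print(key_mix(red, blue, 1.0))  # (1.0, 0.0, 0.0): graphics only
print(key_mix(red, blue, 0.0))  # (0.0, 0.0, 1.0): video only
print(key_mix(red, blue, 0.5))  # (0.5, 0.0, 0.5): blended half and half
```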

The problem is that if I clear the framebuffer to black and, for example, render a red square with alpha 0.5, the resulting framebuffer color is ARGB = (0.5, 0.5, 0, 0), since the red is blended 50% with the black background.
What I need is ARGB = (0.5, 1, 0, 0): the color should still be pure red, it should just be semi-transparent.

As it is now, any object that is semi-transparent will also be too dark, and the result of blending this with a video signal is incorrect.

I have read about premultiplied alpha, but fail to see how I could use that information to solve my blending problem.
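For what it's worth, premultiplied alpha just means the stored RGB values have already been multiplied by alpha, which is exactly what rendering over cleared black produces. A small Python sketch of the round trip (my own illustration, not from any of the articles):

```python
def premultiply(r, g, b, a):
    return (r * a, g * a, b * a, a)

def unpremultiply(r, g, b, a):
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)   # fully transparent: color undefined
    return (r / a, g / a, b / a, a)

# A 50% transparent pure red drawn over cleared black ends up as
# (0.5, 0, 0, 0.5) in the framebuffer: the premultiplied form of red.
premul = premultiply(1.0, 0.0, 0.0, 0.5)
assert premul == (0.5, 0.0, 0.0, 0.5)

# Dividing RGB by alpha recovers the straight color the keyer needs:
assert unpremultiply(*premul) == (1.0, 0.0, 0.0, 0.5)
```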

In essence, I want my graphics not to blend with the background color. This might be solvable by using destination alpha rather than source alpha, but I fear that would cause problems when I render several semi-transparent, partially overlapping objects in the same scene, since they need to blend as usual. Only the background should not be blended into the colors.

Might anyone have a hint for a solution?

Hi,

My idea would be to use an FBO so that the first layer can be rendered without alpha blending and the other layers with alpha blending.

I always render to an FBO and, as far as I know, it does not make any difference. The first transparent layer is blended with the background. The only remedy I could come up with is to divide the final RGB values by the alpha to make them ‘non-premultiplied’.

There is no first layer and second layer.
Anything rendered should blend with other things rendered, but not with the background. The user controls how many objects there are and where they are located.

It would seem that one could use the fact that there should be no blending with existing framebuffer pixels which have an alpha of 0, as the background has, and that there should be blending otherwise. I just can't seem to find any blend properties that are that selective.

You are right. Blending with a pixel that has 0 for alpha is meaningless, in this case a simple replace would be ideal instead of blending. However, I don’t think there is such a setting.

I just made this up: :smiley:

First Pass: render all your elements without blending (sorted back to front)
Second Pass: render all elements with blending enabled on top of the first pass. Only render the alpha channel on this pass.

To get perfect results you should look at some blending extensions to OpenGL. (EXT_blend_equation_separate comes to mind…)

I do not see how that would work. If I have two semi-transparent objects, red and green, and they overlap with red closest, then the first pass will give a black background, solid green where it is not hidden (so far OK), and solid red where the red hides the green (not OK). The part where the red hides the green should be 50% red and 50% green, not solid red as it now is.

Perhaps I misunderstand you?
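For reference, here is what ordinary back-to-front 'over' compositing produces in that overlap region, sketched in Python with everything kept in premultiplied form (function name and values are mine):

```python
def over_premul(src, dst):
    """'Over' compositing where both pixels are premultiplied (r, g, b, a)."""
    return tuple(s + d * (1.0 - src[3]) for s, d in zip(src, dst))

empty = (0.0, 0.0, 0.0, 0.0)
green = (0.0, 0.5, 0.0, 0.5)  # 50% green, farther away, premultiplied
red = (0.5, 0.0, 0.0, 0.5)    # 50% red, in front, premultiplied

result = over_premul(red, over_premul(green, empty))
print(result)  # (0.5, 0.25, 0.0, 0.75): the front layer dominates 2:1,
               # and 25% of whatever ends up behind still shows through
```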

The separate alpha blend, for an alternative blend of the alpha channel, is required for perfect results, as you say. I did that with DirectX previously, before porting to OpenGL. With DX I had the same background blend problem, by the way, and never found a solution. Now, on OpenGL, I hope there are some bright people with the answer :slight_smile:

There is no simple solution to this problem. If you’re doing transparency, and it’s really important that it be 100% accurate with arbitrary geometry, and you’re doing alpha blending (rather than additive blending), then you have to use some kind of order-independent transparency scheme.

The way alpha blending is done normally is to:

1: Sort blended objects, far to near.

2: Render them in that order.

If there is interpenetration, then you either accept the visual artifact or split the object in some way.

I think that is not exactly accurate. You should render your opaque objects normally, with the z-buffer and depth writes enabled and blending off. Then you render the objects with some kind of transparency, with blending on, depth writes off, and sorted in the correct order from back to front.

Still, it would not be perfect. All individual triangles or other primitives would have to be sorted from back to front (which is almost impossible).

But I think the original question was about the background: even the pixels of an empty buffer (0,0,0,0) are blended with the pixels of a transparent object which is wrong.

I seem to have confused you.
There is no problem with the alpha blending. My geometry is simple enough for me to render back to front on the fly.

The graphics that I generate look correct enough. Everything blends nicely… but what I want for the PC monitor is not what I want for the exported video signal.

I need objects to not blend with the background. In my view there is no such thing as a background.

Imagine that you need to render an imposter consisting of a number of semi transparent, partially overlapping, quads. You do not want the imposter to be pre blended with a black background. You want it to be the colors it is and then when you place it in your scene (as a textured object) there will be a background that it blends with… but not before.

Again… a pure red but 50% transparent quad used as an imposter should, in the texture, be pure red with a=0.5. It should not be pre-blended with a black background, resulting in a=0.5 and r=0.5.
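This is where premultiplied alpha actually earns its keep: under the 'over' operator an empty (0,0,0,0) buffer contributes nothing, so building the imposter in an intermediate texture loses nothing compared with drawing it straight into the scene. A Python sketch of that claim (function and values are mine):

```python
def over_premul(src, dst):
    """'Over' compositing of premultiplied (r, g, b, a) pixels."""
    return tuple(s + d * (1.0 - src[3]) for s, d in zip(src, dst))

# Build the imposter texture over an empty (0,0,0,0) buffer:
# one 50% red quad, stored premultiplied, i.e. (0.5, 0, 0, 0.5).
texel = over_premul((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 0.0, 0.0))
assert texel == (0.5, 0.0, 0.0, 0.5)   # nothing leaked in from the background

# Later, composite the imposter texel into the scene (opaque blue here):
scene = (0.0, 0.0, 1.0, 1.0)
via_texture = over_premul(texel, scene)

# Drawing the red quad directly into the scene gives the identical pixel:
direct = over_premul((0.5, 0.0, 0.0, 0.5), scene)
assert via_texture == direct == (0.5, 0.0, 0.5, 1.0)
```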

No, I understood you perfectly. Non-premultiplied rendering, which is what you want, is a problem with OpenGL, and with DirectX as well, AFAIK.

When the hardware does the blending, there is no such thing as an empty background. There is a pixel that is already in the buffer (this is the background, whether it comes from clearing the buffer or not) and another pixel that is part of the object currently being rendered. The most common blending is done as RGB(fg)·A(fg) + RGB(bk)·(1−A(fg)), where fg is the foreground pixel and bk is the background pixel. The value of A(bk) does not matter. In theory the hardware could handle A(bk)=0 as a special case, but that would not be perfect either and would probably slow the whole thing down.
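A quick numeric check of that formula (a Python sketch) shows the destination alpha dropping out:

```python
def blend(fg_rgb, fg_a, bk_rgb, bk_a):
    """The common GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA color blend."""
    return tuple(f * fg_a + b * (1.0 - fg_a) for f, b in zip(fg_rgb, bk_rgb))

red = (1.0, 0.0, 0.0)
black = (0.0, 0.0, 0.0)

# bk_a never appears in the formula, so a cleared buffer with alpha 0
# blends exactly like one with alpha 1:
assert blend(red, 0.5, black, 0.0) == blend(red, 0.5, black, 1.0) == (0.5, 0.0, 0.0)
```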

I have only two suggestions (none of them is perfect), maybe someone has a better one:

  1. Try to use ClearColor(1,1,1,0) instead of (0,0,0,0). If you output a key signal separately (external keying) it will help with the problem caused by premultiplying with the background.

  2. Make the end result non-premultiplied by dividing the final RGB values by their alphas. You can easily do this in a shader, and if you work with video you can do it at the same time, in the same shader, as when you interlace from frames to fields. This is the solution I use when I want non-premultiplied output, and it works quite well.
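Suggestion 2 boils down to this per-pixel step; in practice it would be a fragment shader, but the math is just (a Python sketch, names mine):

```python
def unpremultiply_pass(pixels):
    """Final pass over the frame: divide RGB by alpha so the output is
    straight (non-premultiplied), which is what the external keyer wants."""
    out = []
    for r, g, b, a in pixels:
        if a > 0.0:
            out.append((r / a, g / a, b / a, a))
        else:
            out.append((0.0, 0.0, 0.0, 0.0))  # key is 0, color is irrelevant
    return out

frame = [(0.5, 0.0, 0.0, 0.5),   # 50% red premultiplied with black
         (0.0, 0.0, 0.0, 0.0)]   # untouched background
print(unpremultiply_pass(frame))
# [(1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 0.0, 0.0)]
```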

Just a stab in the dark here…

each fbo must be video frame size (1920x1080 etc)

Clear fbo 1 with a key colour. Render the first overlay image WITHOUT blending.

Clear fbo 2 with key colour. Render second overlay without blending.

Using multitexturing with fbo 1 & fbo 2 as source textures, fbo 3 as destination, write a simple shader where -

if key colour exists in both fbo 1 & 2 pass key colour to fbo 3
if key colour exists in either fbo 1 OR fbo 2 pass source colour
if key doesn’t exist in either fbo, manually blend both colours.

Clear fbo 4 with key colour. Render 3rd overlay. Using fbo 3 & 4 repeat above as necessary.

Finally, replace all the key colour in the ultimate fbo with 0, 0, 0, 0.

In other words you need to build up your final overlay element by element. Does that make sense?
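The per-pixel rule in that shader might look like this (a Python sketch; the key colour and blend weight are made-up stand-ins):

```python
KEY = (1.0, 0.0, 1.0)  # hypothetical magenta key colour marking "nothing here"

def composite(p1, p2, alpha2=0.5):
    """Per-pixel rule from the passes above (alpha2 is a made-up weight)."""
    if p1 == KEY and p2 == KEY:
        return KEY                 # both layers empty: still background
    if p1 == KEY:
        return p2                  # only the second layer drew here
    if p2 == KEY:
        return p1                  # only the first layer drew here
    # both layers drew here: blend layer 2 over layer 1 manually
    return tuple(alpha2 * c2 + (1.0 - alpha2) * c1 for c1, c2 in zip(p1, p2))

assert composite(KEY, KEY) == KEY
assert composite(KEY, (1.0, 0.0, 0.0)) == (1.0, 0.0, 0.0)
assert composite((0.0, 1.0, 0.0), (1.0, 0.0, 0.0)) == (0.5, 0.5, 0.0)
```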

You do not need separate fbos for this at all; one is enough. If you render each layer to a separate texture without blending, then alpha-composite the textures in a separate pass to a final buffer, it would work. But it would be a lot slower, and you would not be using hardware blending at all.

But if you don’t mind, this could be a solution.

Note to self: give up multitasking!

You need 1 fbo & 3 frame sized textures.

As for speed, I assume the OP is essentially creating an offline ‘overlay video’, otherwise he could blend in real time onto the video background (depending on the hardware he’s using).

Another advantage of my method is that he can use fp16 textures and do proper colour space/gamma correction, linear blending etc. (depending on the source of the overlay images)

> Imagine that you need to render an imposter consisting of a number of semi transparent, partially overlapping, quads. You do not want the imposter to be pre blended with a black background. You want it to be the colors it is and then when you place it in your scene (as a textured object) there will be a background that it blends with… but not before.

Why would your imposter quads be overlapping? And what is it that you expect to happen in the overlapping portions? Which colors do you expect to see?

> Again… a pure red, but 50% transparent quad used as an imposter should in the texture color be pure red and have a=0.5. It should not be preblended with a black background resulting in a=0.5 and r=0.5

The problem is that you’re trying to do this with overlapping, blended geometry.

If there was no overlap, then you could just render your imposter geometry without blending to a texture. The alpha values you write to this texture would be what they are, as would the colors.

But, due to the overlap, you need to write your imposter geometry with blending. You need your imposter geometry to somehow blend with some objects, but not others.

I can’t say what to do exactly, since I don’t understand how or why there would be any overlap in rendering an imposter. However, the stencil buffer may be of use to you here. If you can somehow determine which pixels constitute background, and which pixels constitute not-background, you can render the first kind with a stencil value, then render the second kind with a stencil test to cull not-background from background.

I’d like to know what happens when you try this.

How about making a shader that behaves the way you want? If the existing pixel has alpha zero, use all of the color and just blend alpha. I suppose for each extra object the current buffer would have to be available?

In fact, wouldn’t a separate blending function for alpha and RGB do what you want?

That should make your FBO for the video card. If you want to view it locally, you’ll have to re-render it over black as normal (or probably that’s where the stencil buffer could come in).

Bruce
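The separate-blend idea can be checked on paper. Simulating glBlendFuncSeparate-style factors (a Python sketch with hand-rolled factor functions, not real GL) suggests it fixes the alpha channel but still leaves the color darkened by the background:

```python
# Paper simulation of glBlendFuncSeparate: RGB and alpha each get their own
# source/destination factors, expressed here as functions of the two alphas.
def blend_separate(src, dst, src_rgb_f, dst_rgb_f, src_a_f, dst_a_f):
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    rgb = tuple(s * src_rgb_f(sa, da) + d * dst_rgb_f(sa, da)
                for s, d in zip((sr, sg, sb), (dr, dg, db)))
    a = sa * src_a_f(sa, da) + da * dst_a_f(sa, da)
    return rgb + (a,)

ONE = lambda sa, da: 1.0
SRC_ALPHA = lambda sa, da: sa
ONE_MINUS_SRC_ALPHA = lambda sa, da: 1.0 - sa

# GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA for color,
# GL_ONE / GL_ONE_MINUS_SRC_ALPHA for alpha:
out = blend_separate((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 0.0, 0.0),
                     SRC_ALPHA, ONE_MINUS_SRC_ALPHA,
                     ONE, ONE_MINUS_SRC_ALPHA)
assert out == (0.5, 0.0, 0.0, 0.5)
# Alpha now accumulates sensibly, but the color 0.5 is still red darkened
# by the black background, so separate factors alone do not solve it.
```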

It has been a while since I tried all the various blending functions, but I don’t think any of them would help. If you use blending at all, you always get a combination of ‘old’ pixel (that is already in the buffer) and ‘new’ pixel, there is no way to skip the background because its alpha is zero.

The only perfect way is to do the alpha composite in a shader and without using opengl blending at all. For this, all objects should be rendered from back to front and each to a separate texture, then in another pass these textures could be composited using a shader.

I don’t think the stencil buffer would help either. It either passes a pixel or not but it does not affect the way blending is done.
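The shader composite described above amounts to this per pixel (a Python sketch of the math; back-to-front input with straight alpha, straight-alpha output):

```python
def composite_layers(layers):
    """Manually 'over'-composite straight-alpha layers back to front,
    keeping the accumulator premultiplied, then un-premultiply at the end."""
    acc = (0.0, 0.0, 0.0, 0.0)            # empty buffer, alpha 0
    for (r, g, b), a in layers:           # farthest layer first
        acc = (r * a + acc[0] * (1.0 - a),
               g * a + acc[1] * (1.0 - a),
               b * a + acc[2] * (1.0 - a),
               a + acc[3] * (1.0 - a))
    r, g, b, a = acc
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)       # untouched background pixel
    return (r / a, g / a, b / a, a)       # straight (non-premultiplied) out

# 50% green behind, 50% red in front: the output color is a pure
# red/green mix with no black contamination, and the key is 0.75.
print(composite_layers([((0.0, 1.0, 0.0), 0.5), ((1.0, 0.0, 0.0), 0.5)]))
```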

You haven’t really explained well what it is that you want. Write out a pseudo-code example of a fragment shader with framebuffer readback that does what you want to have happen.

I don’t have to explain; I did not open this topic. Ask Taicoon, although I think he explained what he wants quite well. The only thing I don’t know is how important speed is to him.

So far, the only solution I have been able to come up with is manual blending, where texture0 holds the farthest object and texture1 the next farthest.
In a shader I can then do things the way I want.
This is what you suggest as well.

Render speed is not really a problem, since the scenes are never complex (never more than 100 quads currently) and I only need to render at 50 FPS. Therefore, the elaborate way of rendering could be used. It always seemed like a silly hack that should not be needed, but after reading the responses here, it looks like there is no other solution.

I found an old article on Gamasutra about imposters, and it also concluded that transparency was a problem. It then went on to mention premultiplied alpha without hinting at how that would solve anything. Previously I read another article somewhere about premultiplied alpha and how it was good for imposters, but again I was unable to see exactly how it would help. If anything, it would make matters worse…