Merging with glBlendFunc()

I have a single texture, and I crop and draw different portions of it.

One portion of the texture is (more or less) an alpha mask for another portion of the texture.

What I want to do is merge the two sections in a two-pass rendering setup using glBlendFunc(), so that the first section is drawn using the alpha of the second section.

The existing scene must remain visible through the transparent areas of the merged result.

I have been trying to solve this by trial and error for a while now, and Google hasn’t turned up any viable results (mostly multi-texturing examples, when I need a simple two-pass approach).

If I use GL_ONE, GL_ZERO for the alpha section and GL_ONE, GL_ONE for the normal image section, the two seem to merge well, but the existing scene does not show through the alpha areas of the first section (as if I had drawn the whole combination with blending disabled).

What combination of glBlendFunc() parameters do I need to achieve this?

Any help would be greatly appreciated.

What about using a fragment shader to do it?

The common way to get transparency from the alpha channel is to use GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA as the blend function.
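
In code that is simply:

    /* Standard alpha blending: source color weighted by its alpha,
       destination by one minus that alpha. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);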

I think you should do the following (see the sketch after this list):

  1. GL_ONE, GL_ONE for the “alpha only” part, which must have 0 for its RGB values. That way you write the alpha (first GL_ONE) and keep the background scene (second GL_ONE).
  2. GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA for the “color only” part, using the alpha already present in the framebuffer. Be sure to request an RGBA framebuffer, as by default you may get RGB only.
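
A rough sketch of those two passes; drawAlphaQuad() and drawColorQuad() are hypothetical helpers standing in for however you draw the two texture regions:

    glEnable(GL_BLEND);

    /* Pass 1: write the mask's alpha, keep the background RGB
       (the mask texels must be black, RGB = 0). */
    glBlendFunc(GL_ONE, GL_ONE);
    drawAlphaQuad();

    /* Pass 2: blend the color image against the scene using the
       alpha now stored in the framebuffer. */
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    drawColorQuad();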

But both steps would be much easier to do in a single pass with a fragment shader, as hinted by arts.
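
For illustration only, a single-pass fragment shader could sample both regions of the texture at once; the sampler name, the maskOffset uniform, and the assumption that the mask lives in the alpha channel of its region are all placeholders:

    /* Hypothetical GLSL 1.20 fragment shader, stored as a C string.
       maskOffset maps a color texel to its corresponding mask texel. */
    static const char *fragSrc =
        "uniform sampler2D tex;\n"
        "uniform vec2 maskOffset;\n"
        "void main() {\n"
        "    vec4 color  = texture2D(tex, gl_TexCoord[0].st);\n"
        "    float alpha = texture2D(tex, gl_TexCoord[0].st + maskOffset).a;\n"
        "    gl_FragColor = vec4(color.rgb, alpha);\n"
        "}\n";

Drawn once with the usual GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blending, this needs no destination alpha in the framebuffer at all.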

If the color of the alpha section is not black, you can:

a) use glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE)

b) use glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_ONE, GL_ZERO) to render just the alpha channel to the framebuffer

c) modulate the texture with a black glColor().

Then use step 2 as ZbuffeR described.
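
A sketch of options (a) and (b), reusing the hypothetical drawAlphaQuad() from above:

    /* (a) Mask out the color channels so only alpha is written. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    drawAlphaQuad();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    /* (b) Or keep the destination RGB and write only the source alpha. */
    glBlendFuncSeparate(GL_ZERO, GL_ONE,   /* RGB: keep destination */
                        GL_ONE, GL_ZERO);  /* alpha: take source */
    drawAlphaQuad();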

I have learned enough OpenGL to get by in basic 2D game development for computers and portable devices, and not much more. I am familiar with glBlendFunc(), and a simple two-pass render for this one object (it is a unique event, not something that happens often in the program) seems like the fastest and most direct way to get the result I want (fastest in terms of the amount of code involved, not measured speed).

My goal is a solid blue box with horizontal bars of 50/50 translucency. I half expected to get a fully transparent box with some 50/50 transparent blue bars (because of the 100% transparent areas of the alpha image), but instead I get nothing…

Here are links to the example texture I am using and to a screenshot of the application.

Texture:
http://i1128.photobucket.com/albums/m491/lra80/ExampleImage.png

Screenshot:
http://i1128.photobucket.com/albums/m491/lra80/Test.png

The screenshot shows the entire texture drawn as-is (left), the alpha portion drawn on top of the normal portion using standard blending (center), and the alpha portion drawn under the normal image using the recommended glBlendFunc settings (right). The right image cannot be seen at all.

In the texture I added some random squares to make the effects easier to see in case they were too subtle.

What am I doing wrong?

Are you sure you disabled depth testing?

Disabling depth testing helps a great deal. Early tests look promising, though I had to change the second GL_ONE to a GL_SRC_ALPHA.
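
For reference, the setup that works for me now looks roughly like this (drawAlphaQuad() and drawColorQuad() are placeholder names for my two draws, and I am assuming the changed GL_ONE is the destination factor of the first pass):

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);

    glBlendFunc(GL_ONE, GL_SRC_ALPHA);                  /* alpha pass */
    drawAlphaQuad();

    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);  /* color pass */
    drawColorQuad();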

Thank you all.

I find myself revisiting this topic.

My original project used OpenGL ES on a handheld device, and with the comments in this thread I was able to use glBlendFunc() to achieve my goal. However, when I use the exact same process with the exact same images in a Linux/PC based application, I do not get any alpha from the “alpha” image.

Is there anything special I need to do when initialising OpenGL to make this process work? I am using GLX_RGBA as an attribute when calling glXChooseVisual.
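
Given ZbuffeR’s earlier note about requesting an RGBA framebuffer, maybe I need to ask for destination alpha bits explicitly; this attribute list is a guess, not my actual initialisation code:

    #include <GL/glx.h>

    static int attribs[] = {
        GLX_RGBA,
        GLX_DOUBLEBUFFER,
        GLX_RED_SIZE,   8,
        GLX_GREEN_SIZE, 8,
        GLX_BLUE_SIZE,  8,
        GLX_ALPHA_SIZE, 8,   /* destination alpha, needed for GL_DST_ALPHA */
        None
    };

    /* dpy is the X Display* already opened elsewhere. */
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);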

As an additional note, the same code compiled on a different machine, running a different build of Linux, works as intended, so the code itself is sound. Any ideas why it fails on one platform but works on another?

The machine it works on has FOSS graphics drivers, whereas the one it fails on has nVidia drivers.