Blend selected color components?

Is it possible for a fragment shader to modify just one of r,g,b in the color buffer (using alpha blending) without disturbing the other 2?

Not in the fragment shader.
Maybe with programmable blending?

Try this instead:
http://www.opengl.org/sdk/docs/man/xhtml/glColorMask.xml

Besides ColorMask, you have other, more flexible options on newer GPUs:

  1. You can use dual source blending: GL_ARB_blend_func_extended
  2. You can use limited programmable blending: GL_NV_texture_barrier

Of course, the simplest way is still ColorMask, but if you need more complicated blending techniques then these may come in handy.

I don’t think ColorMask is what I want. It is documented as a property of the framebuffer, not the texture unit. What I want to do is essentially multitexturing from a single texture image, using a different set of TCs for each color component to correct chromatic aberration in photographic images. The idea is to get speed by using 3 texture units in parallel to do the 3 texture lookups.

Of course I could take 3 passes, each with a different TC pointer and ColorMask. I suppose that would be slower; but I’ll do it if I have to. Or use a frag shader that does 3 texture lookups and assembles the corrected pixel; but that too would run the lookups in series rather than parallel, as I understand it.
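For reference, the three-pass variant might be sketched like this (a minimal sketch; `drawQuad` and the per-channel TC arrays are hypothetical names, classic client-side arrays assumed):

```c
/* One pass per channel: mask off the other two components and
   use the per-channel texture coordinates to apply the CA shift. */
static const GLboolean mask[3][4] = {
    { GL_TRUE,  GL_FALSE, GL_FALSE, GL_FALSE },  /* red   */
    { GL_FALSE, GL_TRUE,  GL_FALSE, GL_FALSE },  /* green */
    { GL_FALSE, GL_FALSE, GL_TRUE,  GL_FALSE },  /* blue  */
};
const GLfloat *tc[3] = { redTC, greenTC, blueTC };  /* hypothetical per-channel TC arrays */

for (int i = 0; i < 3; ++i) {
    glColorMask(mask[i][0], mask[i][1], mask[i][2], mask[i][3]);
    glTexCoordPointer(2, GL_FLOAT, 0, tc[i]);
    drawQuad();                                   /* hypothetical draw routine */
}
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);  /* restore full writes */
```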

Aqnuep, how would you use dual-source blending?

I don’t see how NV_texture_barrier could help. It seems to allow rendering to and from a texture buffer at the same time; but if I were to write ca-corrected data back to the texture it would overwrite data needed later. And the lookups might still not be parallelized.

Am I missing something? I don’t see why you need to blend anything to do what you want. (Your comment that ColorMask isn’t the solution you seek because it is a property of the framebuffer seems to confirm that. Blending is a property of the framebuffer, as well.) And I don’t see how you can avoid three texture lookups (because each requires its own texture coordinates).

Why not just do something like:
color.r = texture (photo, vec2 (txt.x+redx, txt.y+redy)).r;
color.g = texture (photo, vec2 (txt.x+greenx, txt.y+greeny)).g;
color.b = texture (photo, vec2 (txt.x+bluex, txt.y+bluey)).b;
color.a = 1.0;

You cannot blend a pixel fragment with a pixel fragment located at different address in the framebuffer. So you need to correct for the chromatic aberration shift in the fragment shader, not in the framebuffer.
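Put together, a complete fragment shader along those lines might read (uniform and variable names are illustrative, not from any particular codebase):

```glsl
#version 330 core

uniform sampler2D photo;
uniform vec2 redShift;    /* per-channel CA offsets, e.g. from calibration */
uniform vec2 greenShift;
uniform vec2 blueShift;

in  vec2 txt;             /* interpolated texture coordinate */
out vec4 color;

void main()
{
    /* One sampler, three shifted lookups; keep one channel from each. */
    color.r = texture(photo, txt + redShift).r;
    color.g = texture(photo, txt + greenShift).g;
    color.b = texture(photo, txt + blueShift).b;
    color.a = 1.0;
}
```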

Or use a frag shader that does 3 texture lookups and assembles the corrected pixel; but that too would run the lookups in series rather than parallel, as I understand it

But this is totally doable!
You just need one texture sampler and three different interpolated texcoords.
Not sure what you have in mind that would be more parallel than this anyway…

EDIT: david was faster

Thanks, both.

Blending in the fb is just a requirement of the application, unrelated to ca correction.

A shader with 3 lookup calls against different samplers should be exercising 3 different hardware texture units, so the 3 interpolations might in principle run in parallel. However, whether they actually do is up to the OpenGL implementation, and might depend on just how the shader is coded. I was hoping to find a method guaranteed to run the 3 texture interpolations in parallel.

I shall have to compare the 3-pass and the one-shader-3-calls ways, to see if there is any real speed difference. Of course the 3-pass will work on many more systems, so may finally be preferred even if there is a faster way on recent h/w.

Since the Nvidia TNT (TwiN Texel), introduced in 1998, hardware has been able to sample multiple textures at the same time…

So it will work 3 times faster, and for practically all hardware you can find.

A shader with 3 lookup calls against different samplers should be exercising 3 different hardware texture units

Don’t out-think the hardware. Don’t expect the hardware to be faster at accessing three 1D textures than it is at accessing one 3D texture.

Since the Nvidia TNT (TwiN Texel) introduced in 1998, the hardware is able to sample multiple textures at the same time…

The TNT did not access two textures at a time. It could access two textures per rendered triangle, but this required 2x the time as accessing one texel.

It wasn’t until the GeForce 4 days if not later that you could expect multiple texture accesses to occur simultaneously.

After some reading: it DID access two textures at a time. But the dual texture mapping engine was also able to do single texturing twice as fast.

Anyway, doing 2 separate passes blended together would still be slower than one dual-textured pass, right?

Anyway, doing 2 separate passes blended together would still be slower than one dual-textured pass, right?

Yes. The main gains were in not having to do the triangle setup for the second pass and not having to use blending to combine multiple textures.

If you would like to blend only e.g. the R and the B components of the color to the framebuffer, just output the following colors in your fragment shader:

color0 = your_normal_output_color;
color1 = vec4(alpha_factor, 0.0, alpha_factor, 0.0);

And in the API, set the blend function in the following way:

glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR);

And that’s all. However, as far as I know it is only available on Shader Model 4.1 GPUs.
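A sketch of the shader side of this setup (GLSL 3.30 with dual-source outputs; `baseColor` and `alphaFactor` are illustrative stand-ins for whatever your shader actually computes):

```glsl
#version 330 core

uniform vec4  baseColor;    /* illustrative: the color you would normally output */
uniform float alphaFactor;  /* illustrative: blend factor computed per fragment */

layout(location = 0, index = 0) out vec4 color0;  /* source color */
layout(location = 0, index = 1) out vec4 color1;  /* per-channel blend factors */

void main()
{
    color0 = baseColor;
    /* 0.0 in the G and A slots leaves those framebuffer channels untouched */
    color1 = vec4(alphaFactor, 0.0, alphaFactor, 0.0);
}
```

Paired with glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR) on the API side, this blends only R and B.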

However, as far as I know it is only available on Shader Model 4.1 GPUs.

Dual source is available in any 3.x hardware. It’s core in 3.3.

Sorry, my mistake, then it’s for Shader Model 4.0 GPUs (OpenGL 3.3).
I think I confused it with ARB_draw_buffers_blend as that is SM 4.1 stuff but only available in OpenGL 4.0.

I think I confused it with ARB_draw_buffers_blend as that is SM 4.1 stuff but only available in OpenGL 4.0.

The reason ARB_draw_buffers_blend existed as an extension before GL 4.0 is because ATI’s 3.x hardware could handle it. Similarly, it could do texture gather. So any HD-series card would be able to do either of these.

Is it only in dual-source that one can apply separate alpha values to the components of a color?

No, you could use the built-in CONSTANT_COLOR as the blend factor, but then you can have separate blend factor values only on a per-draw-call basis. If you need to modify the blend factors from within your shader, then dual-source blending is the only way.
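For completeness, the CONSTANT_COLOR route looks like this on the API side (a minimal sketch; `drawQuad` is a hypothetical draw routine, and the factor values are just examples):

```c
/* Per-channel blend factors, fixed for the whole draw call:
   here R and B blend at 0.5 while G keeps the framebuffer value
   (a factor of 0.0 means dst * (1 - 0.0) = dst for that channel). */
glBlendColor(0.5f, 0.0f, 0.5f, 0.0f);
glBlendFunc(GL_CONSTANT_COLOR, GL_ONE_MINUS_CONSTANT_COLOR);
glEnable(GL_BLEND);
drawQuad();  /* hypothetical draw routine */
```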
