Core GL_RAx formats

Currently Core GL doesn’t expose a RED + ALPHA format; while the 4.2 texture swizzle supports the functionality for reads, it doesn’t for framebuffer writes.

Use Case:
Pixel blending supports separate RGB and alpha operators, so one extremely useful operation is computing Min(val) and Max(val) in a single pass. This only needs a two-channel format, but it needs one of those channels to be in RGB and the other to be alpha, which currently doesn’t exist (the two-channel formats are RG). The workaround is to use RGBA_x formats, but when val is something like depth this wastes storage and bandwidth (often as much as 64 bits per pixel for the unused GB portion).

Another solution would be to support a TEXTURE_SWIZZLE-like operation for output, or even to make the separate blender functionality work on any two channels…

Whilst image reads/writes can support the same function, the extra performance cost is high.

GL_LUMINANCE_ALPHA? Or am I missing something here?

In the core profile of GL, the LUMINANCE and INTENSITY formats no longer exist. What he’s proposing is a more well-defined format for rendering.

LUMINANCE and INTENSITY formats are also not color-renderable. So you can’t use them as render-targets. That’s part of the reason why the ARB dropped them in favor of Red/Green formats.

Why not use GL_RGx formats? In core you are not limited to fixed functionality that needs alpha channels to know what to do; you can read the .rg components of the texture and output whatever you like to alpha. I don’t see the need for special alpha formats in core!
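The read-side routing described here can be sketched as a fragment shader (GLSL 330 core; `tex` and `uv` are illustrative names, not from any spec):

```glsl
#version 330 core
uniform sampler2D tex;   // bound GL_RG texture
in  vec2 uv;
out vec4 fragColor;

void main()
{
    // Read the two stored channels and route the second one to alpha.
    vec2 ra = texture(tex, uv).rg;
    fragColor = vec4(ra.x, 0.0, 0.0, ra.y);  // value in R, alpha from G
}
```

This handles the sampling side fine; as the replies below note, it doesn’t help when the RG texture is the render target.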

The pixel blender has two separate units per target, an RGB and an ALPHA pipe; to use the ALPHA blender pipeline you have to output to the ALPHA channel.
Whilst the pixel blender unit is FIXED FUNCTION, it affects where the fragment shaders can output. At the moment the lack of an RA format means wasting one of the pixel blender pipelines.
All the current solutions are fairly expensive (though using two single channels assigned to separate draw buffers is probably the cheapest), so a simple additional texture format would be a useful addition to GL, exposing existing HW functionality.
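The “two single channels on separate draw buffers” workaround mentioned above could look roughly like this, assuming GL 4.0 (or ARB_draw_buffers_blend) for per-buffer blend equations; `fbo`, `minTex`, and `maxTex` are illustrative handles the application would have created earlier:

```c
/* Two R32F textures on separate colour attachments, one accumulating
 * the minimum and one the maximum.  Sketch only, not a full setup. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, minTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, maxTex, 0);

const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);

glEnable(GL_BLEND);
glBlendEquationi(0, GL_MIN);   /* buffer 0 accumulates the minimum */
glBlendEquationi(1, GL_MAX);   /* buffer 1 accumulates the maximum */
```

The fragment shader then writes the same value to both outputs; the cost is an extra attachment and two render-target writes instead of one.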

It’s a good idea. Not sure that hardware supports this, but I don’t see why it wouldn’t.

+1 for this.

I don’t see where the waste is. You sample the texture and just output the red or green component to the alpha. The blending stage comes right after.

He’s talking about writing to the texture, not reading from it. He basically wants to write to an RG texture, such that the “R” component uses the RGB part of the blender, and the “G” component uses the A part. So they can have separate blend functions and blend equations.

What’s the problem?

GL_RGx + ARB_texture_swizzle should do the job.

  1. What is the demand for this extension?
RESOLVED: There are several independent demands for this, 
including:
- OpenGL 3.0 deprecated support for ALPHA, LUMINANCE, 
  LUMINANCE_ALPHA, and INTENSITY formats. This extension provides
  a simple porting path for legacy applications that used these
  formats.
- There have been specific requests for (1,1,1,a), or "white alpha"
  formats that allow a "decal" texture to be used in the same shader
  as an RGBA texture. This can be accomplished with an OpenGL 2.1 
  ALPHA texture by doing
    TexParameteri(target, TEXTURE_SWIZZLE_R, ONE);
    TexParameteri(target, TEXTURE_SWIZZLE_G, ONE);
    TexParameteri(target, TEXTURE_SWIZZLE_B, ONE);
    // TEXTURE_SWIZZLE_A is already ALPHA
  or equivalently
    GLint swiz[4] = {ONE, ONE, ONE, ALPHA};
    TexParameteriv(target, TEXTURE_SWIZZLE_RGBA, swiz);

  or in OpenGL 3.0 “preview” contexts where ALPHA internal formats
  are deprecated, by using a RED texture:
    TexParameteri(target, TEXTURE_SWIZZLE_R, ONE);
    TexParameteri(target, TEXTURE_SWIZZLE_G, ONE);
    TexParameteri(target, TEXTURE_SWIZZLE_B, ONE);
    TexParameteri(target, TEXTURE_SWIZZLE_A, RED);
  or equivalently
    GLint swiz[4] = {ONE, ONE, ONE, RED};
    TexParameteriv(target, TEXTURE_SWIZZLE_RGBA, swiz);

No, swizzle does not help when you use the texture as a render target instead of a shader input.

Currently there is actually no way to attach a two-component texture to a framebuffer object other than as an RG texture, and R and G share the same blending equation and function.

I see. So what happens when you have blending enabled? It treats the destination alpha as 1.0?

The destination alpha would be the second component of a GL_RA texture (A).

Why was the LUMINANCE_ALPHA format removed in GL 3?

Luminance and Intensity mapped to multiple components: L => R,G,B and I => R,G,B,A. If rendering to a LUMINANCE_ALPHA format, the reverse mapping was not well defined: does it take R, G, or B for L? Because of this, LUMINANCE_ALPHA was never made a renderable texture format, which led to it being “replaced” by GL_RG. However, this particular use case, where GL_RG doesn’t work, was overlooked.

Granted, the ARB could have specified that when rendering to a LUMINANCE_ALPHA texture only R maps to L and G and B are discarded, instead of tossing the format entirely. But I suppose they had their reasons.

In fact, exactly this behavior is specified, in “Conversion to Framebuffer-Attachable Image Components” (per Table 3.16).

However the language from ARB_fbo (see Issue 9) that made L/A/LA/I formats color-renderable wasn’t promoted into the core spec…

Of course not. They don’t exist in core, so the language would be referring to texture formats that don’t exist. Though why they didn’t mention it in the compatibility specification is unknown.

Ah, I see the OP’s requirement.

I’d prefer to see the blend stage become programmable; that would resolve this requirement quite nicely and give some much-needed functionality for other uses too.

However nice that would be, it would certainly require new hardware, while the OP is asking for something that can be done on current hardware. He’s just asking for access to more of what current machines can do.

Fair point.