Wow, the subject requirements are tight. I apologize for any confusion.
Basically I’m looking at a routine I’m overhauling that reads a greyscale texture from a file, then hands it to OpenGL as a LUMINANCE_ALPHA texture with the grey values simply copied into the alpha component. I believe this was the only way we could get the blending behaviour we wanted – a simple soft font with heavily layered transparencies.
I’m curious whether there’s a way to let OpenGL worry about this, rather than allocating a temporary double-sized buffer and manually copying/duplicating the pixel values.
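Concretely, the copy step the routine does today looks roughly like this (a simplified sketch – the function name is mine, not the actual code):

```c
#include <stdlib.h>

/* Expand an 8-bit grey image into an interleaved luminance/alpha
 * buffer, so it can be uploaded as GL_LUMINANCE_ALPHA. Every grey
 * value is written twice: once as luminance, once as alpha. */
static unsigned char *expand_to_luminance_alpha(const unsigned char *grey,
                                                size_t pixels)
{
    unsigned char *la = malloc(pixels * 2); /* temporary double-sized buffer */
    if (!la)
        return NULL;
    for (size_t i = 0; i < pixels; ++i) {
        la[2 * i]     = grey[i]; /* luminance */
        la[2 * i + 1] = grey[i]; /* alpha, duplicated from the same value */
    }
    return la;
}
```

This is exactly the doubling and copying I’d like to avoid.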
There’s no point in storing the same values in two channels. Simply upload the greyscale image as an ALPHA texture. Both the COMBINE texenv and shaders let you take the alpha channel of a texture and copy it into the other color channels.
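In fixed-function terms that might look something like this (an untested sketch – the COMBINE setup broadcasts the texture’s alpha into the RGB channels at sampling time, so no CPU-side copy is needed):

```c
/* Upload the grey image as a single-channel GL_ALPHA texture. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, greyPixels);

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);

/* RGB = texture alpha */
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_ALPHA);

/* Alpha = texture alpha (spelled out for clarity) */
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
```

As a bonus, a GL_ALPHA texture is half the memory of GL_LUMINANCE_ALPHA.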
The reason it’s not mentioned much is that shaders obsolete all of it, providing a <u>much</u> simpler interface. Texture combine (i.e. register combiners) is like telling someone else how to wire up a breadboard, connection by connection, verbally. Versus high level shaders (Cg/GLSL/HLSL/etc.) where you just do it, often all in one expression or line.
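For comparison, the shader version of the same trick is a one-line swizzle (again a sketch – `glyphTex` and the varying name are mine):

```glsl
uniform sampler2D glyphTex; /* the single-channel GL_ALPHA texture */
varying vec2 uv;

void main()
{
    float a = texture2D(glyphTex, uv).a;   /* the grey value */
    gl_FragColor = vec4(gl_Color.rgb, a); /* tint by vertex color, soft alpha */
}
```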
If Cg/GLSL is like C++, texture combine is like microcode.
Also shaders afford you much greater flexibility in the expressions and logic you can use.