I am writing shaders for OpenGL ES on iPhone OS. Within a fragment shader I was shocked to discover the following:
These all work:
vec4 rgba = texture2D( myRGBATexture, v_st);
float r = texture2D( myRGBATexture, v_st).r;
float g = texture2D( myRGBATexture, v_st).g;
float b = texture2D( myRGBATexture, v_st).b;
These ALL return a constant value of zero (0), ignoring the contents of the one-channel texture (!?!). Note: at texture creation time I give OpenGL a texture format of GL_ALPHA and a data type of GL_UNSIGNED_BYTE:
float a = texture2D( myOneChannelTexture, v_st).r;
float a = texture2D( myOneChannelTexture, v_st).g;
float a = texture2D( myOneChannelTexture, v_st).b;
float a = texture2D( myOneChannelTexture, v_st).a;
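For context, the OpenGL ES 2.0 spec defines what vec4 a shader sees for each base internal format: a GL_ALPHA texel A is expanded to (0, 0, 0, A), so .r, .g, and .b read as zero by design, and .a is the component that should carry the data. A minimal C sketch of that expansion (no GL context involved; the vec4 struct, enum, and function name are my own, not GL's):

```c
#include <assert.h>

/* A sampled texel as the vec4 the fragment shader sees. */
typedef struct { float r, g, b, a; } vec4;

/* Illustrative stand-ins for the base internal formats (not GL enums). */
typedef enum { FMT_ALPHA, FMT_LUMINANCE } Format;

/* How the ES 2.0 spec expands a one-channel texel value into (Rt, Gt, Bt, At):
   GL_ALPHA     -> (0, 0, 0, A)
   GL_LUMINANCE -> (L, L, L, 1)  */
vec4 expand_texel(Format fmt, float value)
{
    if (fmt == FMT_ALPHA)
        return (vec4){ 0.0f, 0.0f, 0.0f, value };
    else /* FMT_LUMINANCE */
        return (vec4){ value, value, value, 1.0f };
}
```

So for a GL_ALPHA texture, only the .a swizzle can return the stored byte; if .a also reads zero, the problem is likely elsewhere (upload, binding, or sampler state) rather than in the swizzle.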
Can someone please explain why GLSL forces me to waste tons of memory with a 3-channel texture when a 1-channel texture is all I need?
I believe it is something to the effect that variables declared in shaders end up 16-byte aligned anyway, which presumably has to do with register granularity.
You can certainly declare a vec4 instead of a float and use all 4 components to different ends.
A nice side effect is that even if you change the texture format, the shader won't necessarily have to change (e.g. I could swap in a luminance texture and my color shader would correctly function as a grayscale shader)…
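That luminance swap works because the spec expands a GL_LUMINANCE texel L to (L, L, L, 1), so any shader reading .rgb sees the same gray value in all three channels. A small C sketch of that replication (hypothetical names, no GL involved):

```c
#include <assert.h>

typedef struct { float r, g, b, a; } vec4;

/* ES 2.0 expands a GL_LUMINANCE texel L to (L, L, L, 1), so code
   written against .rgb automatically behaves as grayscale. */
vec4 expand_luminance(float l)
{
    return (vec4){ l, l, l, 1.0f };
}
```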
I don’t necessarily know what I am talking about, but it’s a guess based on various intimations I have picked up…