Merging textures based on color value using a shader program

Hi,

I have two textures :

texture1: 8-bit luminance
texture2: 16-bit RGB5_A1

I want to merge these two textures based on the RGB value of texture2. If texture2 has RGB(0,0,0), I want to treat it as transparent; any other color value is opaque. I have written the following shader program for this:

uniform sampler2D myTexture1;   // 8-bit luminance texture
uniform sampler2D myTexture2;   // RGB5_A1 color texture

void main (void)
{
    vec4 texval1 = texture2D(myTexture1, vec2(gl_TexCoord[0]));
    vec4 texval2 = texture2D(myTexture2, vec2(gl_TexCoord[1]));

    // Any non-black texel of texture2 is opaque; pure black shows texture1.
    if ((texval2.r != 0.0) || (texval2.g != 0.0) || (texval2.b != 0.0))
    {
        gl_FragColor = texval2;
    }
    else
    {
        gl_FragColor = texval1;
    }
}

Everything looks OK, but when I write RGB(1,1,1) to texture2, it also gets treated as transparent!

Any suggestions ?

Thanks in advance.

Did you set the samplers correctly?
Is the RGB data normalized float or int (GLubyte texture data)? If the latter, RGBA5551 doesn’t have enough precision to distinguish 1/255 steps!
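
If not, a minimal sketch of what that setup typically looks like (program, tex1 and tex2 are placeholder handles; only the uniform names come from your shader):

// Bind each sampler uniform to its own texture unit.
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "myTexture1"), 0);   // unit 0
glUniform1i(glGetUniformLocation(program, "myTexture2"), 1);   // unit 1

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex1);   // 8-bit luminance texture
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex2);   // RGB5_A1 texture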

Your complicated if-statement can use built-in functions like
if (any(greaterThan(texval2.rgb, vec3(0.0))))
or
if (dot(texval2.rgb, vec3(1.0)) > 0.0)
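
For example, the whole if/else could even collapse to a single mix(); just a sketch using the variable names from your shader:

// Branch-free variant: opaque is 1.0 when any channel of texture2 is
// non-zero, 0.0 otherwise.
float opaque = float(any(greaterThan(texval2.rgb, vec3(0.0))));
gl_FragColor = mix(texval1, texval2, opaque);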

Thanks for your reply. I tried both of your suggestions but it didn’t work :frowning:

The fact is that I am not able to see anything drawn with RGB(0,0,0), RGB(1,1,1), RGB(2,2,2)…RGB(7,7,7). With RGB(8,8,8) I am able to see the graphics.

I am setting up the texture with the following call:

::glTexImage2D(GL_TEXTURE_2D,
               0,                  // level
               GL_RGB5_A1,         // internal format
               1024,
               1024,
               0,                  // border
               GL_BGRA_EXT,        // client data format
               GL_UNSIGNED_BYTE,   // client data type
               pBits);

The important thing here is that ‘pBits’ is the pointer returned by the following call:

m_pBMPI->bmiHeader.biBitCount = 32;
m_pBMPI->bmiHeader.biCompression = BI_RGB;

m_pBMPI->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
m_pBMPI->bmiHeader.biWidth       = nWidth;
m_pBMPI->bmiHeader.biHeight      = nHeight;
m_pBMPI->bmiHeader.biPlanes      = 1;
m_pBMPI->bmiHeader.biSizeImage   = m_pBMPI->bmiHeader.biWidth  * 
                                   m_pBMPI->bmiHeader.biHeight * 
                                   (m_pBMPI->bmiHeader.biBitCount / 8);

m_hBmp = ::CreateDIBSection(m_hMemDC,
                            m_pBMPI,
                            DIB_RGB_COLORS,
                            (void **)&pBits,
                            NULL,
                            0);

The reason for using ‘GL_BGRA_EXT’ for the texture is to use this buffer (which is backed by a DIB) directly with OpenGL.

Using ‘GL_RGB’ (or ‘GL_RGBA’) instead of ‘GL_RGB5_A1’ solved the problem. But I can’t use ‘GL_RGB’/‘GL_RGBA’ because I have to support both 16-bit and 32-bit bitmaps.

Previously my texture-merging logic was based on the alpha channel. That was the reason ‘GL_RGB5_A1’ was used: it works for both 16-bit and 32-bit input.

Now I want to treat the color RGB(0,0,0) as transparent, so I am using RGB(1,1,1) in place of black (RGB(0,0,0)). That doesn’t seem to work because of the precision issue.

So what color should I treat as transparent so that it works in both the 16-bit and 32-bit cases? And what format should I use then?

Of course 0 to 7 will all collapse to the 0 case with RGBA5551, because you only have 32 steps per channel, which means (in 8-bit terms) only 0, 8, 16 and so on are distinguishable, nothing in between.
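
A small sketch of that quantization, assuming the common truncation of the 8-bit input to 5 bits (the exact rounding is implementation-dependent):

/* 8-bit channel value -> 5-bit channel value (truncation). */
unsigned char to5bit(unsigned char c8)
{
    return c8 >> 3;
}

/* to5bit(0) .. to5bit(7) all yield 0, so RGB(1,1,1) .. RGB(7,7,7) are
 * indistinguishable from black in an RGB5_A1 texture.
 * to5bit(8) yields 1, the first value the shader can see as non-zero. */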

The question is not which color you should use for transparent (black is fine), it’s which non-black color value is the first one your shader sees as opaque.
You only need to ensure that the next non-black RGB value is representable in the internalFormat. That is, with RGBA5551 all colors with 8-bit channel values >= 8 are opaque in your shader; with RGBA8, all values >= 1 are. It’s really simple.
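
For a quick sanity check, this is what the shader actually samples in each case (again assuming truncation on upload):

/* RGB5_A1: 8-bit input 8 -> stored 5-bit value 1 -> sampled 1/31  (~0.032,  opaque)
 *          8-bit input 7 -> stored 5-bit value 0 -> sampled 0.0   (transparent)
 * RGBA8:   8-bit input 1 -> stored 8-bit value 1 -> sampled 1/255 (~0.0039, opaque) */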

BTW, the correct internalFormat would be GL_RGBA8. GL_RGBA (old-school OpenGL 1.0, defined as 4) might be reduced in precision depending on the current desktop color depth.

>>because I have to support both 16-bit and 32-bit bitmaps<<

Why is that?
If you rely on the full precision bitmap input, you can’t use RGBA5551.
You could change the input data to map channel values 1 to 7 up to 8 before the texture upload in the RGBA5551 case, without a noticeable difference.
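
A minimal sketch of that remap on the 32-bit BGRA buffer, run just before the glTexImage2D call in the RGB5_A1 case (the 1024x1024 size is taken from your upload; whether you bump individual channels or only fully non-black pixels is up to your data):

/* Bump channel values 1..7 up to 8 so they survive the 8 -> 5 bit
 * quantization of GL_RGB5_A1; pure black RGB(0,0,0) stays "transparent". */
unsigned char *p = (unsigned char *)pBits;
for (int i = 0; i < 1024 * 1024; ++i, p += 4)        /* 4 bytes per BGRA pixel */
{
    for (int c = 0; c < 3; ++c)                      /* B, G, R; leave alpha alone */
    {
        if (p[c] >= 1 && p[c] <= 7)
            p[c] = 8;
    }
}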

Thank you very much for the nice explanation.

Do I save any memory on the host/graphics card if I use ‘GL_RGB5_A1’ while keeping the bitmap at 32 bits?

If not, then it looks like I am losing image quality for nothing.

The internalFormat decides which storage is used on the OpenGL side.
The format and type parameters describe the user’s input data.
You can throw RGBA UNSIGNED_BYTE data at OpenGL and the RGBA5551 internalFormat invokes a conversion, cutting the least significant bits from the 8-bit input data.
The downloaded texture will be half the size and 1/8 of the precision. I thought that was clear. :slight_smile:
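
In other words, both of these calls take exactly the same 32-bit BGRA data; only the storage OpenGL allocates differs (a sketch based on the call you posted):

/* Full 8-bit precision on the card, 4 bytes per texel. */
::glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,   1024, 1024, 0,
               GL_BGRA_EXT, GL_UNSIGNED_BYTE, pBits);

/* Half the texture memory, 5 bits per color channel, 2 bytes per texel. */
::glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 1024, 1024, 0,
               GL_BGRA_EXT, GL_UNSIGNED_BYTE, pBits);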
