Read in 16/18 bit Bitmap

Hello,

I have code that reads in a 24-bit bitmap and would like to extend it to read 16- and 18-bit bitmaps. However, I am a bit unsure of where to begin. For example, I've read that in a 16-bit color depth image each color gets 5 bits and there is 1 alpha bit, yet elsewhere I've read that red and blue get 5 bits while green gets 6 bits. Which is it? Reading in a 24-bit bitmap was not too much of a hassle, but I'm a bit stuck on reading a 16-bit image. Any advice on where to begin? How much does the algorithm for reading a 16-bit image differ from the one for a 24-bit image?

Regards,
JQ

There are different formats: one with an alpha bit and one without. I don't understand why you need 16-bit bitmaps nowadays, but if you are looking for info on BMP files, the MSDN library has all the docs. Still, it would probably be wiser to use a dedicated image library like DevIL.
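For reference, a rough sketch (plain C, the function names are mine) of how the two common 16-bit layouts pack a pixel from 8-bit channels:

#include <stdint.h>

/* RGB565: no alpha, green gets the extra bit (5-6-5) */
static uint16_t pack565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* ARGB1555: 1 alpha bit in the top position, 5 bits per color */
static uint16_t pack1555(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((a ? 1u : 0u) << 15) | ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));
}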

The reason I'm using 16-bit bitmaps is that I want to save some memory, and I am working with a device that drives a QVGA (TFT) display. Also, I am using OpenGL ES 1.0, so I don't think I can use that library. Thanks though.

Regards,
JQ

The 565 format is the de facto standard for mobile devices. There has got to be an example somewhere of loading such 16-bpp data directly and correctly into the GPU.
Make your own format that simply converts a 24-bit .bmp to 565:
typedef struct {
    int width, height;        /* image dimensions in pixels */
    unsigned short data[];    /* width*height packed 565 pixels (C99 flexible array member) */
} Bmp565;

Check the docs/caps of the target device to see whether the 1555 texture format is supported, and make another tool that converts 32-bit RGBA .tga files for transparent textures.
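Along those lines, a rough sketch of such a converter (assuming an uncompressed, bottom-up 24-bpp .bmp and a little-endian host; no error handling, and the helper name is mine):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Sketch: load an uncompressed 24-bpp .bmp and return its pixels packed as 565. */
unsigned short* bmp24_to_565(const char* path, int* outW, int* outH)
{
    FILE* f = fopen(path, "rb");
    unsigned char hdr[54];
    fread(hdr, 1, 54, f);                         /* file header (14 bytes) + info header (40 bytes) */

    uint32_t dataOffset; int32_t w, h;
    memcpy(&dataOffset, hdr + 10, 4);             /* bfOffBits: where the pixel rows start */
    memcpy(&w, hdr + 18, 4);                      /* biWidth  */
    memcpy(&h, hdr + 22, 4);                      /* biHeight */

    int rowSize = ((w * 3) + 3) & ~3;             /* 24-bpp rows are padded to a multiple of 4 bytes */
    unsigned short* out = malloc(w * h * sizeof(unsigned short));
    unsigned char* row  = malloc(rowSize);

    fseek(f, dataOffset, SEEK_SET);
    for (int y = 0; y < h; ++y) {
        fread(row, 1, rowSize, f);
        for (int x = 0; x < w; ++x) {
            unsigned char b = row[x * 3 + 0];     /* .bmp stores the bytes as B,G,R */
            unsigned char g = row[x * 3 + 1];
            unsigned char r = row[x * 3 + 2];
            /* rows are stored bottom-up, so flip vertically while packing 565 */
            out[(h - 1 - y) * w + x] =
                (unsigned short)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
        }
    }
    free(row);
    fclose(f);
    *outW = w; *outH = h;
    return out;
}

Dumping the width, the height and the returned buffer to a file gives the Bmp565-style data described above; for the 1555 case only the inner packing expression changes.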

I was able to convert a 24-bpp image to a 16-bpp image with the 555 format. I tried getting it to work with the 565 format, but the colors of the image were not right. I wonder if it has something to do with the file format of a .bmp file? I'm not sure. For now, 555 RGB works well, especially with dithering.

Thanks for the input.

Regards,
JQ

I believe bmp files store pixel data in BGR format.
So make sure your data type (GL_UNSIGNED_SHORT_5_6_5 or GL_UNSIGNED_SHORT_5_6_5_REV) and format (GL_RGB/GL_BGR) are right.
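In other words, when packing from .bmp pixel data, index the bytes as B,G,R so each channel ends up in the right slot. A tiny sketch (src is a hypothetical pointer to one 24-bpp pixel):

unsigned char b = src[0], g = src[1], r = src[2];    /* .bmp byte order is B,G,R */
unsigned short px = (unsigned short)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));

Note that GL_BGR and the _REV types are desktop OpenGL enums; OpenGL ES 1.0 only accepts GL_RGB with GL_UNSIGNED_SHORT_5_6_5, so on ES the swizzle has to happen while the 565 data is being packed.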

When using 16-bit (bitfield) images, there are three double-word color masks that describe which bits are actually used for each color.

Search for ‘bi_bitfields’ here:
http://atlc.sourceforge.net/bmp.html#_toc381201084

You can compare the various ‘bitfield’ binary headers of actual images on this page:
http://wvnvaxa.wvnet.edu/vmswww/bmp.html
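To illustrate, a small sketch of pulling those masks out of the header (assuming the common 40-byte BITMAPINFOHEADER; the function name is mine):

#include <stdio.h>
#include <stdint.h>

/* Sketch: fetch the 16-bpp color masks. When biCompression == 3 (BI_BITFIELDS),
   three double-word masks for R, G and B sit right after the 40-byte info header.
   The raw pixel rows themselves always start at bfOffBits (the 4-byte offset
   stored at byte 10 of the file header). */
void read_bitfield_masks(FILE* f, uint32_t* rMask, uint32_t* gMask, uint32_t* bMask)
{
    uint32_t compression;
    fseek(f, 14 + 16, SEEK_SET);                 /* 14-byte file header + offset of biCompression */
    fread(&compression, 4, 1, f);
    if (compression == 3) {                      /* BI_BITFIELDS */
        fseek(f, 14 + 40, SEEK_SET);
        fread(rMask, 4, 1, f);
        fread(gMask, 4, 1, f);
        fread(bMask, 4, 1, f);
    } else {                                     /* plain BI_RGB 16-bpp defaults to 555 */
        *rMask = 0x7C00; *gMask = 0x03E0; *bMask = 0x001F;
    }
}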

I have visited these websites and read through them thoroughly, but I am still a bit confused. I understand the color masks, and that all makes sense. However, I am unsure of where the actual raw image data begins. Is it right after the color masks, or at some larger offset?

http://atlc.sourceforge.net/bmp.html#_toc381201084 states: “if bpp equals 16 or 32, the optimal color palette starts immediately following the three double word masks.”

However, I thought there was no color palette for images with a color depth of 16 bits or higher. (BMP file format - Wikipedia)

I feel like I'm very near a solution, but I keep coming up short. I can make out what my image is, but the colors are messed up. I think I might have some overlapping bits or something. I know the problem is not as simple as my red and blue channels being swapped. Any insight would be appreciated.

Regards,
JQ

Just convert those 24-bit .bmp files to a custom 565 .bin, and at load time send the 16-bit data into a texture directly. You don't need to meddle with how GDI or whatever library on your mobile device expects .bmp files to be constructed.

int wid, hei;
FILE* f1 = fopen("my565.bin", "rb");
fread(&wid, 4, 1, f1); fread(&hei, 4, 1, f1);     /* 4-byte width and height header */
unsigned short* bits = malloc(wid * hei * 2);     /* 2 bytes per 565 pixel */
fread(bits, 2, wid * hei, f1);
fclose(f1);
MakeOpenGLTextureWhatever(GL_UNSIGNED_SHORT_5_6_5, wid, hei, bits);
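For what it's worth, a minimal sketch of what that MakeOpenGLTextureWhatever could look like on OpenGL ES 1.0 (the signature mirrors the call above; assumes power-of-two dimensions and omits error handling):

#include <GLES/gl.h>

GLuint MakeOpenGLTextureWhatever(GLenum type, int wid, int hei, const void* bits)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);       /* rows of 16-bit pixels */
    /* ES 1.0 takes the packed 565 shorts directly as GL_RGB */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, wid, hei, 0, GL_RGB, type, bits);
    return tex;
}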

At my job we usually also compress those “.bin” files with lossless compression, since mobile devices have memory limits… (and lossy compression is way too slow to decompress)