Question about using Bitmap images as Textures

I have a quick question. I want to use bitmaps as textures in an OpenGL project. I plan on writing my own little library to handle loading simple bitmap images, but first I need to know a little more about how OpenGL thinks about bitmaps.

Here is my specific issue: a bitmap stores the image data as an array of pixels, BUT the format monkeys with that array a lot, so it isn’t straightforward. The rows of pixels are inverted, so the first row of pixels in the array is actually the last row of the actual image. The color information of each pixel is also stored backwards, so it goes (B,G,R) instead of (R,G,B). Finally, each row of the array is padded with extra empty bytes so that its length is a multiple of 4 bytes.
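For reference, here’s how I understand the padding, written out as a quick sketch (not tested, just my mental model):

```c
/* Untested sketch of how I understand the row padding:
   each row of the pixel array is rounded up to the next multiple of 4 bytes. */
int bmp_row_size(int width, int bytes_per_pixel)
{
    return ((width * bytes_per_pixel + 3) / 4) * 4;  /* e.g. width 3 at 24 bpp: 9 -> 12 bytes */
}
```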

My question: knowing this, I could write a class that loads the pixel data into memory exactly as it is, with everything inverted and padded, OR I could take care of all that inverting/padding myself and generate an array of pixels that actually represents the image as it appears. Which would be the way to go for making a 2D texture? I know that I would use glTexImage2D to make the texture, and since I’m using a bitmap I would use GL_BITMAP as the format parameter. But will OpenGL expect the bitmap data to be unmodified?

I hope this was clear enough. If you need more information on my problem please ask! :slight_smile:

since I’m using a bitmap I would use GL_BITMAP as the format parameter

Not so much :wink:

GL_BITMAP doesn’t mean “Windows BMP file”. It means “the image data is a series of bits, such that each bit represents a pixel, and can be either black or white”.

OpenGL expects pixel data to be provided in rows, where the first row is the bottom of the image (because OpenGL works in a coordinate system where (0, 0) of a texture is the bottom left). If your pixel data is not in this order, then that’s something you’ll have to adjust for.
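If you do end up needing to reorder the rows yourself, something along these lines would do it. This is only a sketch to illustrate the idea (the function name and layout assumptions are mine, and it’s untested):

```c
#include <stdlib.h>
#include <string.h>

/* Flip an image vertically in place by swapping whole rows.
   'pixels' points to 'height' rows of 'row_size' bytes each. */
void flip_rows(unsigned char *pixels, size_t row_size, int height)
{
    unsigned char *tmp = malloc(row_size);
    if (!tmp)
        return;

    for (int y = 0; y < height / 2; ++y) {
        unsigned char *top    = pixels + (size_t)y * row_size;
        unsigned char *bottom = pixels + (size_t)(height - 1 - y) * row_size;
        memcpy(tmp, top, row_size);
        memcpy(top, bottom, row_size);
        memcpy(bottom, tmp, row_size);
    }
    free(tmp);
}
```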

Outside of that however, OpenGL is very flexible. You can use the GL_UNPACK_ALIGNMENT parameter to specify the byte alignment of each row of pixel data. So if each row starts on a 4-byte alignment no matter what, then you can specify that.

Similarly, you have pretty comprehensive control over the order of the components you pass to OpenGL. If the first component (when loaded as a sequence of bytes) is blue, you can use GL_BGR as your pixel transfer format. Details are available here.
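Putting those two pieces together, the upload would look roughly like this. Treat it as an untested sketch: the buffer, width, and height are placeholders, and GL_BGR assumes OpenGL 1.2 or later (or the appropriate extension headers on Windows):

```c
/* Rows in the buffer start on 4-byte boundaries; components are in B,G,R order. */
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);

glTexImage2D(GL_TEXTURE_2D,
             0,                 /* mipmap level */
             GL_RGB8,           /* format the GL stores internally */
             width, height,
             0,                 /* border (must be 0) */
             GL_BGR,            /* component order of the data you pass in */
             GL_UNSIGNED_BYTE,  /* one byte per component */
             pixels);
```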

Or, you could just ignore all of this and use one of the many available libraries for this task.

Thanks a lot for your reply! That’s very helpful, and those links are definitely something I’m going to be giving a good read. :slight_smile:

I don’t want to use an existing library because I’m trying to do everything “from scratch” as much as possible so that I learn as much as I can. If I understand you correctly, I can read the pixel data in from any bitmap image exactly as it is and then I can use GL_BGR as my Pixel Format, taking care of that issue. But which parameter are you referring to with GL_UNPACK_ALIGNMENT? The book I have shows glTexImage2D having parameters for target, level, pixel format, image data type, width, height, border, and a pointer to the pixels.

Finally, does this mean that I would use GL_UNSIGNED_BYTE as my texture data type? It seems like that would be the case to me, as bitmaps store pixels in arrays of bytes.

GL_UNPACK_ALIGNMENT is set with glPixelStorei. You can set it to 1, 2, 4, or 8, and it defines the row alignment for all subsequent pixel transfer operations (such as uploading texture data).
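For example (just a sketch):

```c
/* Tells the GL how rows are aligned in the memory you pass to it. */
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);  /* rows padded to 4-byte boundaries, like a BMP */

/* or, if your rows are tightly packed with no padding at all: */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
```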

As for the unsigned byte question, the wiki article should explain it.

Ok, I have one more question: When I read the pixel data from the bitmap into an array in memory, what data type should the array in memory be?

According to the Wikipedia article on the Bitmap file format, the pixel data is stored in an array of DWORDs, so should my array in memory also be an array of DWORDs?

And if I do store all of the pixel data from the file in an array of DWORDs in memory, would I use GL_UNSIGNED_BYTE as my type parameter for glTexImage2D? I’m assuming that I would. It’s my understanding that this parameter specifies how the pixel information is laid out per pixel. I am using 24-bit-per-pixel bitmaps, meaning each pixel takes up 3 bytes of space, one byte per color channel.
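Just so I’m asking about something concrete, here is roughly what I’m picturing. It’s an untested sketch with made-up names, no error handling, and it assumes the right OpenGL headers, a current context, and a file already positioned at the start of the pixel array:

```c
#include <stdio.h>
#include <stdlib.h>

void load_bmp_pixels_into_texture(FILE *file, int width, int height)
{
    int row_size = ((width * 3 + 3) / 4) * 4;             /* padded 24-bpp rows */
    unsigned char *pixels = malloc((size_t)row_size * height);

    /* Keep the padding and the B,G,R order exactly as they are in the file. */
    fread(pixels, 1, (size_t)row_size * height, file);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);                /* matches the BMP row padding */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, pixels);

    free(pixels);                                         /* the GL copies the data during the call */
}
```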

I think I have this right, but I want to make sure since this is a pretty complicated and advanced topic for me. It’s easy to think something is going to go wrong since I’m storing 3-byte pixels in an array of DWORDs and telling glTexImage2D that the type is unsigned bytes!