General Image Loading Questions (BMP Transparency)



g0bl1n
09-19-2011, 09:02 PM
I've been playing with OpenGL for a while now, but I have never completely understood the image loading process. From what I understand, images are loaded into an array of pixels for display, but I don't know how that data is stored.

I've been trying to come up with a way to have the top-left pixel of a BMP define the transparency color for the whole image. So I thought to myself: this is easy enough; all I have to do is record the top-left pixel value, find each pixel of the same color, and set it to be transparent.

But so far I haven't figured out how to do that. It seems so simple, but for some reason I don't even know where to start.
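To make the idea concrete, here is a minimal CPU-side sketch of that color-key step, assuming a tightly packed RGB buffer as input (the function name and layout are my own assumptions, not part of the loader):

```cpp
#include <cstdint>
#include <vector>

// Convert a tightly packed RGB buffer to RGBA, making every pixel that
// matches the top-left pixel's color fully transparent (alpha = 0) and
// every other pixel fully opaque (alpha = 255).
std::vector<std::uint8_t> applyColorKey(const std::uint8_t* rgb,
                                        int width, int height) {
    // The key color is the top-left pixel. Note that BMP data is usually
    // stored bottom-up, so "top left" is the first pixel of the LAST row.
    const std::uint8_t keyR = rgb[3 * (width * (height - 1)) + 0];
    const std::uint8_t keyG = rgb[3 * (width * (height - 1)) + 1];
    const std::uint8_t keyB = rgb[3 * (width * (height - 1)) + 2];

    std::vector<std::uint8_t> rgba(static_cast<std::size_t>(width) * height * 4);
    for (int i = 0; i < width * height; ++i) {
        rgba[4 * i + 0] = rgb[3 * i + 0];
        rgba[4 * i + 1] = rgb[3 * i + 1];
        rgba[4 * i + 2] = rgb[3 * i + 2];
        bool isKey = rgb[3 * i + 0] == keyR &&
                     rgb[3 * i + 1] == keyG &&
                     rgb[3 * i + 2] == keyB;
        rgba[4 * i + 3] = isKey ? 0 : 255;
    }
    return rgba;
}
```

The output buffer then matches what a GL_RGBA / GL_UNSIGNED_BYTE upload expects.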

Here is the texture upload call from the image loader I am using:

glTexImage2D(GL_TEXTURE_2D,    //Always GL_TEXTURE_2D
             0,                //0 for now
             GL_RGB,           //Format OpenGL uses for image
             image->width, image->height, //Width and height
             0,                //The border of the image
             GL_RGB,           //GL_RGB, because pixels are stored in RGB format
             GL_UNSIGNED_BYTE, //GL_UNSIGNED_BYTE, because pixels are stored
                               //as unsigned numbers
             image->pixels);   //The actual pixel data

I'm guessing I change GL_RGB to GL_RGBA?

glTexImage2D(GL_TEXTURE_2D,    //Always GL_TEXTURE_2D
             0,                //0 for now
             GL_RGBA,          //Format OpenGL uses for image
             image->width, image->height, //Width and height
             0,                //The border of the image
             GL_RGBA,          //GL_RGBA, because pixels are stored in RGBA format
             GL_UNSIGNED_BYTE, //GL_UNSIGNED_BYTE, because pixels are stored
                               //as unsigned numbers
             image->pixels);   //The actual pixel data
From there I'm kind of lost. Here is the loading function:

Image* loadBMP(const char* filename) {
    std::ifstream input;
    input.open(filename, std::ifstream::binary);
    assert(!input.fail() || !"Could not find file");
    char buffer[2];
    input.read(buffer, 2);
    assert((buffer[0] == 'B' && buffer[1] == 'M') || !"Not a bitmap file");
    input.ignore(8);
    int dataOffset = readInt(input);

    //Read the header
    int headerSize = readInt(input);
    int width;
    int height;
    switch(headerSize) {
        case 40:
            //V3
            width = readInt(input);
            height = readInt(input);
            input.ignore(2);
            assert(readShort(input) == 24 || !"Image is not 24 bits per pixel");
            assert(readShort(input) == 0 || !"Image is compressed");
            break;
        case 12:
            //OS/2 V1
            width = readShort(input);
            height = readShort(input);
            input.ignore(2);
            assert(readShort(input) == 24 || !"Image is not 24 bits per pixel");
            break;
        case 64:
            //OS/2 V2
            assert(!"Can't load OS/2 V2 bitmaps");
            break;
        case 108:
            //Windows V4
            assert(!"Can't load Windows V4 bitmaps");
            break;
        case 124:
            //Windows V5
            assert(!"Can't load Windows V5 bitmaps");
            break;
        default:
            assert(!"Unknown bitmap format");
            break;
    }

    //Read the data: BMP rows are padded up to a multiple of 4 bytes
    int bytesPerRow = (width * 3 + 3) / 4 * 4;
    int size = bytesPerRow * height;
    auto_array<char> pixels(new char[size]);
    input.seekg(dataOffset, std::ios_base::beg);
    input.read(pixels.get(), size);

    int transcolor;

    //Get the data into the right format (BGR -> RGB, drop row padding)
    auto_array<char> pixels2(new char[width * height * 3]);
    for(int y = 0; y < height; ++y) {
        for(int x = 0; x < width; ++x) {
            for(int c = 0; c < 3; ++c) {
                pixels2[3 * (width * y + x) + c] =
                    pixels[bytesPerRow * y + 3 * x + (2 - c)];
            }
        }
    }

    input.close();
    return new Image(pixels2.release(), width, height);
}
Would I just check during the reading process, then set a corresponding alpha byte?

EDIT:
I've been playing around with this and haven't been able to get anything to work. I allocate enough room for a 32-bit BMP, load in the 24-bit BMP, and then, as a test, set the fourth byte of each pixel to an alpha value. I cannot even tell whether it is my code or the transparency code that isn't working. Here is the read-in code:

//Read the data: BMP rows are padded up to a multiple of 4 bytes
int bytesPerRow = (width * 3 + 3) / 4 * 4;
int size = bytesPerRow * height;
auto_array<char> pixels(new char[size]);
input.seekg(dataOffset, ios_base::beg);
input.read(pixels.get(), size);

//Get the data into the right format
auto_array<char> pixels2(new char[width * height * 4]);
for(int y = 0; y < height; y++) {
    for(int x = 0; x < width; x++) {
        for(int c = 0; c < 3; c++) {
            pixels2[4 * (width * y + x) + c] =
                pixels[bytesPerRow * y + 3 * x + (2 - c)];

            //std::cout<<int(pixels2[4 * (width * y + x) + c])<<" : ";
        }

        pixels2[4 * (width * y + x) + 4] = 1;

        //std::cout<<int(pixels2[4 * (width * y + x) + 4])<<endl;
    }
}
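As a sanity check on the indexing above: in a tightly packed RGBA buffer, the four channels of pixel (x, y) sit at indices 4 * (width * y + x) + 0 through + 3, so the alpha byte is at offset + 3 (offset + 4 is the red byte of the next pixel), and fully opaque as an unsigned byte is 255, not 1. A small helper (the function name is my own, purely for illustration):

```cpp
#include <cstddef>

// Index of a given channel of pixel (x, y) in a tightly packed RGBA buffer.
// Channels: 0 = red, 1 = green, 2 = blue, 3 = alpha.
std::size_t rgbaIndex(int width, int x, int y, int channel) {
    return 4 * (static_cast<std::size_t>(width) * y + x) + channel;
}
```

With that layout, a fully opaque test write would look like pixels2[4 * (width * y + x) + 3] = (char)255;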
And here is the texture generator:

//Makes the image into a texture, and returns the id of the texture
GLuint loadTexture(Image* image) {
    GLuint textureId;
    glGenTextures(1, &textureId); //Make room for our texture
    glBindTexture(GL_TEXTURE_2D, textureId); //Tell OpenGL which texture to edit
    //Map the image to the texture
    glTexImage2D(GL_TEXTURE_2D,    //Always GL_TEXTURE_2D
                 0,                //0 for now
                 GL_RGBA,          //Format OpenGL uses for image
                 image->width, image->height, //Width and height
                 0,                //The border of the image
                 GL_RGBA,          //GL_RGBA, because pixels are stored in RGBA format
                 GL_UNSIGNED_BYTE, //GL_UNSIGNED_BYTE, because pixels are stored
                                   //as unsigned numbers
                 image->pixels);   //The actual pixel data
    return textureId; //Returns the id of the texture
}
Finally here is the transparency code I'm using:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
//...
glDisable(GL_BLEND);
But no matter what I set glColor4f to before the drawing calls, or what value I set the alpha bytes to, all I get is a black screen. If I comment out the blend call I can see all of my stuff on the screen and the texture is correctly applied, but there isn't any transparency...
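For context on the symptom: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) computes, per channel, src * a + dst * (1 - a). With the alpha byte set to 1 (a ≈ 1/255), the texture contributes almost nothing, so the clear color (black) dominates. A CPU-side sketch of that blend equation, purely for illustration (the function name is my own):

```cpp
// Per-channel result of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
// dst' = src * a + dst * (1 - a), with all values normalized to [0, 1].
float blendChannel(float src, float dst, float srcAlpha) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

Plugging in a white source over a black destination shows that an alpha byte of 1 leaves the pixel almost entirely black, while 255 is needed for a fully opaque result.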