PDA

View Full Version : displaying a big bitmap



fortikur
01-16-2003, 08:26 AM
I have a 256*256 .BMP (24 bit) as my background. When I display it, it works correctly, but if I change the .BMP to 512*512 and change the values in the algorithm according to these values, I get a weird picture (totally mixed up colors). I tried it with the same bitmap scaled down to 64*64 and it worked. So, the problem shouldn't be in the .BMP file.
What I do is:

//********************************************************************************************
Graphics::TBitmap* back;
back = new Graphics::TBitmap;

GLubyte ***textureimage;
int n, p;

//memory alloc
textureimage = new GLubyte **[512];

for(p=0;p<512;p++)
    textureimage[p] = new GLubyte *[512];

for(n=0;n<512;n++)
    for(p=0;p<512;p++)
        textureimage[p][n] = new GLubyte [4];

//load BMP
back->LoadFromFile(the_filename_of_the_texture);

//get colors from BMP and stuff them into "textureimage"
for(int i = 0; i < 512; i++)
    for(int j = 0; j < 512; j++)
    {
        textureimage[511-j][i][0] = (GLubyte)GetRValue(back->Canvas->Pixels[i][j]);
        textureimage[511-j][i][1] = (GLubyte)GetGValue(back->Canvas->Pixels[i][j]);
        textureimage[511-j][i][2] = (GLubyte)GetBValue(back->Canvas->Pixels[i][j]);
        textureimage[511-j][i][3] = (GLubyte)255;
    }


//building texture
glGenTextures(1, &background[1]);
glBindTexture(GL_TEXTURE_2D, background[1]);

glTexImage2D(GL_TEXTURE_2D, 0, 4, 512, 512, 0,
GL_RGBA, GL_UNSIGNED_BYTE, textureimage);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

//********************************************************************************************

In the original version, that ran well, "512" was "256" and "511" was "255".
When I change "GLubyte ***textureimage;" to "GLubyte textureimage[256][256][4]" it works fine, but with [512][512][4] I get a stack overflow (which is understandable). So I tried pointers instead, but now I cannot get a correct image.
What did I do wrong?
How can I use a bigger (512*512*4, or even bigger (1024*1024*4)) bitmap as a texture?

01-16-2003, 10:20 AM
Below is some code that will assist you, but first I would like to mention a few things:

First of all, the reason your program worked with GLubyte textureimage[256][256][4] is that all of the pixels were stored in one contiguous block of memory. Once you changed it to GLubyte ***textureimage; your texture is no longer in one block of memory - it is scattered all over the place. glTexImage2D() requires the pixel data to be packed into one contiguous block, and you pass it the starting address of that data.

The second point is that (even if it did work) you have more than 100% overhead per pixel! For every 4-byte pixel you set up with the new operator, there is a 4-byte pointer, and every row of pixels costs yet another pointer. Even if it worked, this would be very inefficient. What you do not see is that every "new" operation also carries hidden bookkeeping overhead, so a 4-byte allocation actually consumes considerably more than the 4 bytes you asked for - the extra memory is what the memory management routines use to keep track of each individual allocation.
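The arithmetic behind that overhead claim can be sketched as follows (assuming the 4-byte pointers of a 32-bit build, as was typical at the time, and ignoring the allocator's hidden per-allocation bookkeeping, which only makes it worse):

```cpp
#include <cstddef>

// Memory used by the pointer-per-pixel scheme from the question,
// for a 512x512 RGBA image.
const std::size_t W = 512, H = 512;
const std::size_t PTR = 4; // sizeof(void*) on a 32-bit system

// The actual pixel payload: W*H pixels, 4 bytes each.
std::size_t payload_bytes() { return W * H * 4; }

// The pointers needed just to find those pixels:
std::size_t pointer_bytes() {
    return W * PTR      // top-level table of 512 row-table pointers
         + W * H * PTR; // one pointer per pixel (512 tables of 512)
}
```

The pointer tables alone already exceed the payload, which is exactly the "greater than 100% overhead" mentioned above.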

One question - Are you new to the C language? I am not trying to be offensive here - a couple of people I met (who just started with C but knew other languages) thought that:

int array[10][20];
was the same as:
int * array[10];
for (int i=0; i<10; i++) array[i] = new int[20];

The confusion generally arises because array[n] yields an address in both versions and thus looks identical, yet the two arrays are stored quite differently. I get the feeling you may not fully understand how C handles stuff like this. If that is the case, I suggest searching for documentation on how C handles pointers, arrays, address arithmetic, etc. If you need help with this, I am sure other people here would assist and/or post links to documentation. It has been so long since I looked for this info that I would appreciate it if someone knew of some good sites offhand.
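A minimal sketch of the difference between the two declarations: the built-in 2D array is one contiguous block whose rows sit back to back, while the pointer version is only a table of row pointers whose rows live wherever the allocator put them.

```cpp
#include <cstddef>

int flat[10][20];   // one contiguous block: rows stored back to back
int *rows[10];      // only a table of 10 pointers; rows allocated elsewhere

// In the contiguous version, row 1 starts exactly 20 ints after row 0.
bool rows_are_adjacent() {
    return &flat[1][0] == &flat[0][0] + 20;
}

std::size_t flat_size()  { return sizeof(flat); }  // the whole 10*20 payload
std::size_t table_size() { return sizeof(rows); }  // just 10 pointers
```

Passing `flat` to glTexImage2D() would work precisely because of this adjacency; passing `rows` hands it a table of pointers, not pixels.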


Although this is not exact drop-in code for you, this is basically what you need:

GLuint background;

int TextureSizeX=512; // I hate hardcoded values
int TextureSizeY=512;

{ // Version 1 - Using a structure for each pixel
struct rgba_t {
GLubyte r,g,b,a;
};
rgba_t * textureimage;
textureimage=new rgba_t[TextureSizeX*TextureSizeY]; // One strip of memory
if (!textureimage) { /* Do something */ }// Insufficient memory?
else {
// Read the bitmap into texture at this point.
// Each pixel is defined as:
// rgba_t * pixel=textureimage+SomeRow*TextureSizeX+SomeCol; // OR
// rgba_t * pixel=&textureimage[SomeRow*TextureSizeX+SomeCol];
// (the row stride is the row width, TextureSizeX)
// You then can say pixel->r=value; pixel->g=value; etc.
glGenTextures(1,&background);
glBindTexture(GL_TEXTURE_2D,background);
glTexImage2D(GL_TEXTURE_2D,0,4,TextureSizeX,TextureSizeY,0,GL_RGBA,GL_UNSIGNED_BYTE,textureimage);
}
}

// Alternately you can ignore the rgba_t structure and simply deal with bytes
{ // Version 2: Using simple GLubytes for each pixel
GLubyte * textureimage;
textureimage=new GLubyte[TextureSizeX*TextureSizeY*4]; // One strip of memory
if (!textureimage) { /* Do something */ }// Insufficient memory?
else {
// Read the bitmap into texture at this point.
// Each pixel is defined as:
// GLubyte * pixel=textureimage+(SomeRow*TextureSizeX+SomeCol)*4; // OR
// GLubyte * pixel=&textureimage[(SomeRow*TextureSizeX+SomeCol)*4];
// Then you can have pixel[0], pixel[1], pixel[2], pixel[3] for r,g,b,a
glGenTextures(1,&background);
glBindTexture(GL_TEXTURE_2D,background);
glTexImage2D(GL_TEXTURE_2D,0,4,TextureSizeX,TextureSizeY,0,GL_RGBA,GL_UNSIGNED_BYTE,textureimage);
}
}

// In either case, continue with your additional setup for the
// texture ie: glTexParameteri() calls etc.

//NOTE:
// You must not use a texture larger than your video card can handle.
// The way to get that value is simply:
GLint MaxTextureSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE,&MaxTextureSize);
// The value is the maximum size for either dimension
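Again not exact drop-in code, but the copy loop from the question, redone against one linear strip, might look like the following sketch. ReadPixelRGB is a hypothetical stand-in for the VCL back->Canvas->Pixels read, so the example stays self-contained; here it just encodes the coordinates so the copy can be checked.

```cpp
#include <cstddef>

typedef unsigned char byte_t; // GLubyte is also just an unsigned char

// Hypothetical pixel source standing in for back->Canvas->Pixels[i][j].
static void ReadPixelRGB(int i, int j, byte_t &r, byte_t &g, byte_t &b) {
    r = (byte_t)i; g = (byte_t)j; b = 0;
}

// Packs a W x H image into one linear RGBA strip, flipping vertically
// (row H-1-j) the same way the original code did with [511-j].
void PackTexture(byte_t *dst, int W, int H) {
    for (int i = 0; i < W; i++)
        for (int j = 0; j < H; j++) {
            byte_t *p = dst + ((H - 1 - j) * W + i) * 4; // row-major offset
            ReadPixelRGB(i, j, p[0], p[1], p[2]);
            p[3] = 255; // fully opaque alpha
        }
}
```

The `dst` buffer would be the single `new GLubyte[W*H*4]` strip from Version 2, and is what gets passed to glTexImage2D().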

fortikur
01-17-2003, 04:12 AM
Thank you! Now everything works fine.
About your question:
- I've been programming in C for years, but according to our textbooks at university, char p[10] was exactly the same as a new-ed "char *p", at least in that it consumed the same amount of memory (which, I now see, is not true).
I think my real problem with this stuff was that the texture had to be stored "linearly" and not as a "2D" or "3D" block. Now that I store the pixels linearly, it works with everything.
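That "linear" storage boils down to one indexing formula. A small sketch of the offset calculation for a width-by-height RGBA buffer (the function name is illustrative, not from the thread):

```cpp
#include <cstddef>

// Byte offset of channel c (0=r, 1=g, 2=b, 3=a) of pixel (row, col)
// in a linear RGBA buffer. Row-major: the row stride is width*4 bytes.
std::size_t rgba_offset(std::size_t row, std::size_t col,
                        std::size_t width, std::size_t c) {
    return (row * width + col) * 4 + c;
}
```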
About the sites:
- I know some good programming sites, but they are only in Hungarian - sorry.

Thank You anyway!