texture mapping segfault :(



jkolb
07-14-2002, 05:25 PM
I have a BMP (it's a strip) that I want to break up into smaller 32x32 chunks, storing each chunk in an array element. For some reason I get a segfault on my glTexImage2D() calls. I'm using X 4.2, SDL, and gcc 3.1. Any ideas? Here's my function:

bool LoadTileSet(unsigned int setid)
{
    // NOTE: setid is unused at the moment, use when multiple sets exist.
    // eventually gonna use a tile set class or something like that.

    SDL_Surface *TextureImage = NULL;
    SDL_Surface *TextureImageTemp = NULL;
    SDL_Rect src, dest;

    TextureImage = SDL_LoadBMP("tiles.bmp");
    if (TextureImage == NULL)
        return false;

    if (background_tiles)
        glDeleteTextures(MAX_TILES, background_tiles);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glGenTextures(MAX_TILES, &background_tiles[0]);

    for (int i = 0; i < MAX_TILES; i++) {
        src.x = i * 32;
        src.y = 0;
        src.w = 32;
        src.h = TextureImage->h;

        dest.x = 0;
        dest.y = 0;
        dest.w = 32;
        dest.h = TextureImage->h;

        SDL_BlitSurface(TextureImage, &src, TextureImageTemp, &dest);

        glBindTexture(GL_TEXTURE_2D, background_tiles[i]);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // ***bombs here: ***
        if (TextureImageTemp->format->BytesPerPixel == 3)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, TextureImageTemp->pixels);
        else
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, TextureImageTemp->pixels);
    }

    glEnable(GL_TEXTURE_2D);

    SDL_FreeSurface(TextureImage);
    SDL_FreeSurface(TextureImageTemp);

    return true;
}

rts
07-14-2002, 07:34 PM
Uh... it bombs right there because TextureImageTemp is NULL.

So, like, make it non-NULL, or something.
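
For instance, something like this before the loop (a rough sketch, untested; it copies the pixel format from the loaded strip so the blit is a straight copy):

// allocate the 32x32 scratch surface the blit needs,
// with the same format as the loaded strip
TextureImageTemp = SDL_CreateRGBSurface(SDL_SWSURFACE, 32, 32,
        TextureImage->format->BitsPerPixel,
        TextureImage->format->Rmask,
        TextureImage->format->Gmask,
        TextureImage->format->Bmask,
        TextureImage->format->Amask);
if (TextureImageTemp == NULL)
    return false;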

Husted
07-14-2002, 10:17 PM
Hi,

You don't really need to blit your image data to a temp buffer. You can just set up the OpenGL pixel transfer pipeline to read the correct sub-image.

-- Niels

jkolb
07-15-2002, 07:41 AM
Thank you. How do I change the pipeline so that I don't need the temp variable?

Jeremy

Husted
07-15-2002, 11:23 PM
Hi,

Sorry for the late reply... Take a look at glPixelStorei, especially GL_UNPACK_ROW_LENGTH, GL_UNPACK_IMAGE_HEIGHT, GL_UNPACK_SKIP_PIXELS, and GL_UNPACK_SKIP_ROWS. Your code should look something like this:

glPixelStorei (GL_UNPACK_SWAP_BYTES, GL_FALSE);
glPixelStorei (GL_UNPACK_LSB_FIRST, GL_FALSE);
glPixelStorei (GL_UNPACK_ALIGNMENT, 1);
glPixelStorei (GL_UNPACK_ROW_LENGTH, TextureImage->w);
glPixelStorei (GL_UNPACK_IMAGE_HEIGHT, TextureImage->h);
glPixelStorei (GL_UNPACK_SKIP_PIXELS, i*32);
glPixelStorei (GL_UNPACK_SKIP_ROWS, 0);

glBindTexture (GL_TEXTURE_2D, background_tiles[i]);

glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage->pixels);

You might want to save and restore the pixel unpack state before and after...
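
For example (a quick sketch; glPushClientAttrib needs OpenGL 1.1 or later):

glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT); // save all unpack state
// ... the glPixelStorei calls and glTexImage2D from above ...
glPopClientAttrib(); // put the old unpack state back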

-- Niels

jkolb
07-16-2002, 05:05 PM
Okay, now I'm getting somewhere. However, it seems that my array is being filled with the first texture only. Any ideas why this is? My loading loop is as follows:

for (int i = 0; i < MAX_TILES; ++i) {
    glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
    glPixelStorei(GL_UNPACK_LSB_FIRST, GL_FALSE);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, TextureImage->w);
    glPixelStorei(GL_UNPACK_IMAGE_HEIGHT, TextureImage->h);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, i * 32);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);

    glBindTexture(GL_TEXTURE_2D, background_tiles[i]);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, TextureImage->pixels);
}

coelurus
07-16-2002, 10:22 PM
I see that you skip i * 32 pixels. If I'm not mistaken, the bitmaps are 32x32, and you should therefore skip i*32*32. You could also increment the bitmap pointer directly:

glTexImage2D(...TextureImage->pixels + i*32*32*3);
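
Note that pixels is a void* in SDL, so the arithmetic needs a cast to compile. A sketch of the same idea:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0,
             GL_RGB, GL_UNSIGNED_BYTE,
             (Uint8 *)TextureImage->pixels + i*32*32*3);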

Husted
07-17-2002, 01:14 AM
Originally posted by coelurus:
I see that you skip i * 32 pixels. If I'm not mistaken, the bitmaps are 32x32, and you should therefore skip i*32*32. You could also increment the bitmap pointer directly:

glTexImage2D(...TextureImage->pixels + i*32*32*3);

I skip i*32 pixels in the x direction (GL_UNPACK_SKIP_PIXELS), and 0 pixels in the y direction (GL_UNPACK_SKIP_ROWS). With GL_UNPACK_ROW_LENGTH set to the width of the strip, unpacking starts (SKIP_ROWS * ROW_LENGTH + SKIP_PIXELS) pixels into the image, so i*32 lands on the left column of tile i.

-- Niels

jkolb
07-17-2002, 03:19 AM
Neither of these works, and I don't understand why. I did find, however, that if I change the unpack skip value to i * 32 * 2, all of the array elements fill up with the second texture. But that doesn't really help me. I don't see what the problem is at all. Maybe i is not being incremented properly or something.

Jeremy

coelurus
07-17-2002, 03:40 AM
Hmm, I don't really know how the skips work...
How about loading the entire texture strip at once, then breaking it down when rendering (since the texture dimensions will still be 2^n)? This also keeps the tiles in one texture, and you skip a lot of binds.
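
Drawing tile i would look something like this (a rough sketch, untested; strip_texture, x, and y are made-up names, and it assumes the strip is one row of MAX_TILES 32x32 tiles):

float u0 = (float)i / MAX_TILES;       // left edge of tile i in texture space
float u1 = (float)(i + 1) / MAX_TILES; // right edge of tile i

glBindTexture(GL_TEXTURE_2D, strip_texture); // one bind covers every tile

glBegin(GL_QUADS);
glTexCoord2f(u0, 0.0f); glVertex2f(x,      y);
glTexCoord2f(u1, 0.0f); glVertex2f(x + 32, y);
glTexCoord2f(u1, 1.0f); glVertex2f(x + 32, y + 32);
glTexCoord2f(u0, 1.0f); glVertex2f(x,      y + 32);
glEnd();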