Texture Map Memory Management

I'm using this code, but it seems to be either leaking memory or overflowing it. It is inside a for loop, so it runs over and over.
Any ideas on what I'm doing wrong here?

switch/cases { card = tkRGBImageLoad("Card052.sgi"); }

// Create Texture
GLuint CardTexture[1];
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &CardTexture[0]);
glBindTexture(GL_TEXTURE_2D, CardTexture[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, 3, card->sizeX, card->sizeY, 0,
             GL_RGB, GL_UNSIGNED_BYTE, card->data);

glBegin(GL_QUADS);
glTexCoord2f(0.421875f, 1.00f);      glVertex3f(CARD_WIDTH*XR, 0.75f, 0.0f);
glTexCoord2f(0.00f, 1.00f);          glVertex3f(0.00f, 0.75f, 0.0f);
glTexCoord2f(0.00f, 0.4140625f);     glVertex3f(0.00f, 0.00f, 0.0f);
glTexCoord2f(0.421875f, 0.4140625f); glVertex3f(CARD_WIDTH*XR, 0.00f, 0.0f);
glEnd();

free(card->data);
free(card);
glDeleteTextures(1, CardTexture);

You may not have noticed, but your code got mashed together. Here is what I was able to determine: you are calling glGenTextures, glBindTexture, and glDeleteTextures in the middle of a for loop. Since you are using the exact same texture each time, why not use texture objects the way they were intended?

That is, call glGenTextures (long before the loop), then load and bind the texture. Now, inside the loop, do your rendering, but call glBindTexture before you render (outside of the loop if the texture never changes). And definitely remove the glDeleteTextures.

If you are, even in the slightest bit, interested in performance, doing this will be a large boost. It will, quite likely, solve your memory leak/overflow problem.
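A rough sketch of that restructuring, using the names from the snippet above (the quad-drawing code is elided, and `card` is the image returned by tkRGBImageLoad):

```c
/* Once, at startup: create and fill the texture object. */
GLuint CardTexture;
glGenTextures(1, &CardTexture);
glBindTexture(GL_TEXTURE_2D, CardTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, 3, card->sizeX, card->sizeY, 0,
             GL_RGB, GL_UNSIGNED_BYTE, card->data);

/* Every frame: just bind and draw. No glGenTextures, no glTexImage2D
   re-upload, and no glDeleteTextures until shutdown. */
glBindTexture(GL_TEXTURE_2D, CardTexture);
/* ... glBegin(GL_QUADS) ... glEnd() ... */
```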

OK, I should have been a tad more clear.
There is actually a switch statement with 52 different cases (one for each card in the deck). I only need one loaded at a time, as I have other textures that are constantly in memory. My main class calls the render method in a "card object", which is where this code is from.

I'm not really 100% sure how the binding and generation work; I took most of it from tutorials.

Here is, hopefully, a better format of the code:

switch/cases { card = tkRGBImageLoad("Card052.sgi"); }

GLuint CardTexture[1];
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &CardTexture[0]);
glBindTexture(GL_TEXTURE_2D, CardTexture[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, 3, card->sizeX, card->sizeY, 0,
             GL_RGB, GL_UNSIGNED_BYTE, card->data);

glBegin(GL_QUADS);
glTexCoord2f(0.421875f, 1.00f);      glVertex3f(CARD_WIDTH*XR, 0.75f, 0.0f);
glTexCoord2f(0.00f, 1.00f);          glVertex3f(0.00f, 0.75f, 0.0f);
glTexCoord2f(0.00f, 0.4140625f);     glVertex3f(0.00f, 0.00f, 0.0f);
glTexCoord2f(0.421875f, 0.4140625f); glVertex3f(CARD_WIDTH*XR, 0.00f, 0.0f);
glEnd();

free(card->data);
free(card);
glDeleteTextures(1, CardTexture);

As far as the for loop goes, that is in my main, which calls render in the card class (a different card each time), and the card has a variable that is used in the switch to determine which card it gets.

Hope this helps you understand what I'm trying to get at.

OK, that's not any better. You're still generating, binding, and deleting the texture every time. Each of these operations has significant overhead (and the drivers might not handle literally hundreds of thousands of glGenTextures calls as well as you'd want).

Instead, generate 52 textures and load them all up before the loop. Then, render them based on the switch statement. Once you no longer need the textures, call glDeleteTextures. Like I said before, if you are even the slightest bit interested in performance, this will be of great help.
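In sketch form (cardFileName and cardIndex are hypothetical stand-ins for however you map your switch cases to filenames and indices):

```c
/* Before the loop: one texture object per card, each uploaded once. */
GLuint deckTextures[52];
glGenTextures(52, deckTextures);
for (int i = 0; i < 52; i++) {
    TK_RGBImageRec *img = tkRGBImageLoad(cardFileName(i)); /* hypothetical helper */
    glBindTexture(GL_TEXTURE_2D, deckTextures[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, img->sizeX, img->sizeY, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, img->data);
    free(img->data);  /* GL keeps its own copy after glTexImage2D */
    free(img);
}

/* Inside the loop: the switch picks an index; rendering just binds it. */
glBindTexture(GL_TEXTURE_2D, deckTextures[cardIndex]);
/* ... draw the quad ... */

/* After the loop, when the textures are no longer needed: */
glDeleteTextures(52, deckTextures);
```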

[This message has been edited by Korval (edited 05-30-2001).]

Originally posted by Korval:
[b]OK, that's not any better. You're still generating, binding, and deleting the texture. Each of these operations has significant overhead (and the drivers might not handle literally hundreds of thousands of glGenTextures calls as well as you'd want).

Instead, generate 52 textures and load them up all before the loop.

So when you generate them, you are not actually storing them in the card; that waits until the render command?

Then, render them based on the switch statement. Once you no longer need the textures, call glDeleteTextures. Like I said before, if you are even the slightest bit interested in performance, this will be of great help.

OK, that's good to know, as performance here is quite important.

Thanks again for the help.

[This message has been edited by Korval (edited 05-30-2001).][/b]

Hmm, you should swap glTexImage and glTexParameter. I am not sure it's necessary, but assigning parameters to an empty texture object feels wrong.

You might want to consider setting your TexEnv to GL_REPLACE; otherwise you might still get some weird results (because it is GL_MODULATE by default, I think).

Chris

OK, I've got it loading up all the textures.
But it always crashes while loading the textures.
It crashes after the 21st load.
It says "Unmapped Memory Exception".
images[0] = tkRGBImageLoad("Spades.sgi");
images[1] = tkRGBImageLoad("Hearts.sgi");
images[2] = tkRGBImageLoad("Diamonds.sgi");
images[3] = tkRGBImageLoad("Clubs.sgi");
images[4] = tkRGBImageLoad("Spades2.sgi");
images[5] = tkRGBImageLoad("Hearts2.sgi");
images[6] = tkRGBImageLoad("Diamonds2.sgi");
images[7] = tkRGBImageLoad("Clubs2.sgi");
images[8] = tkRGBImageLoad("bar.sgi");
// ~~~ Load the deck of cards
images[9] = tkRGBImageLoad("Card001.sgi");
…
images[60] = tkRGBImageLoad("Card052.sgi");

Sorry for the bad formatting; either my browser or I am confused about how it works. If you are wondering, here are the TK code pieces that I'm using:

static TK_RGBImageRec *tkRGBImageLoad(const char *fileName)
{
    rawImageRec *raw;
    TK_RGBImageRec *final;

    raw = RawImageOpen(fileName);
    if (!raw) {
        fprintf(stderr, "File not found\n");
        return NULL;
    }
    final = (TK_RGBImageRec *)malloc(sizeof(TK_RGBImageRec));
    if (final == NULL) {
        fprintf(stderr, "Out of memory!\n");
        return NULL;
    }
    final->sizeX = raw->sizeX;
    final->sizeY = raw->sizeY;
    final->components = raw->sizeZ;
    RawImageGetData(raw, final);
    RawImageClose(raw);
    return final;
}

And

typedef struct _TK_RGBImageRec {
    GLint sizeX, sizeY;
    GLint components;
    unsigned char *data;
} TK_RGBImageRec;

What is "TexEnv"?

http://www.eecs.tulane.edu/www/graphics/doc/OpenGL-Man-Pages/glTexEnv.html

Check this out. Unfortunately, it's only OpenGL 1.1.

In 1.2 you have one more call. It's glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE).

This prevents your polygon color from modulating or blending with your texture colors.

The default setting for TexEnv is GL_MODULATE.

hope that helps
Chris

What functions take the textures from the HD to RAM, and from RAM to the card? I found out how much texture memory this graphics card has. It seems that my crash happens when I've already tkRGBImageLoad("Card001.sgi");'ed enough that that amount of memory on the card is full. How do you make sure that something is removed from the graphics card?

There are no OpenGL functions that load a texture from the hard drive. All OpenGL is concerned with is storing your texture when you call glTexImage2D.

You may want to tell us something about your computer’s setup. For instance, what kind of card do you have? Also, how big are these textures?