Bitmap files data - cannot be evaluated

I am trying to add a texture to a glutSolidDodecahedron. I am currently getting an access violation in the glTexImage2D call. I think I am either not reading the bitmaps in correctly, or perhaps they have to be translated somehow instead of sent directly to the image pointers…
System information:
Windows 2000 Professional
Visual C++ 6.0
This is an addition to existing code that works correctly. In other words I know all the OpenGL is properly configured with the C++ IDE.
Here is the relevant code:

#include <GL/glut.h>
#include <GL/glaux.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

GLUquadricObj *p;
AUX_RGBImageRec *Img[12];
unsigned int ImgText[12];

void display() {
p = gluNewQuadric();
gluQuadricNormals(p, GLU_SMOOTH);
gluQuadricTexture(p, GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glOrtho(-20,20,-20,20,-20,20);

glTranslatef(0,0,-2.75);
glBindTexture(GL_TEXTURE_2D, ImgText[3]);
glutSolidDodecahedron();


glutSwapBuffers();
glFlush();
}

void getImage()
{
char *Names[12];
Names[0]="Img/back.bmp";
Names[1]="Img/blue.bmp";
Names[2]="Img/Borg.bmp";
Names[3]="Img/Carpet.bmp";
Names[4]="Img/goldscales.bmp";
Names[5]="Img/Grass.bmp";
Names[6]="Img/Grass2.bmp";
Names[7]="Img/Rock.bmp";
Names[8]="Img/Skin.bmp";
Names[9]="Img/Sky.bmp";
Names[10]="Img/water.bmp";
Names[11]="Img/White.bmp";
Names[12]="Img/Wood.bmp";

for (int i = 0;i<13;i++) {
Img[i] = auxDIBImageLoad(Names[i]);
glGenTextures(13, &ImgText[2]);
glBindTexture(GL_TEXTURE_2D, ImgText[i]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);

//ERROR OCCURS HERE -->
glTexImage2D(GL_TEXTURE_2D, 0, 3, Img[i]->sizeX, Img[i]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, Img[i]->data);
}
}

void init() {
glClearColor(0.0,0.0,0.0,0.0);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH);
getImage();
}

I have tested to make sure it wasn’t an array index out of bounds.
The debug information looks like this:

  • Img[i] 0x0000000b
    sizeX CXX0030: Error: expression cannot be evaluated
    sizeY CXX0030: Error: expression cannot be evaluated
    data CXX0030: Error: expression cannot be evaluated

It seems that it’s an array index out of bounds.

Change this:

AUX_RGBImageRec *Img[12];
unsigned int ImgText[12];

into

AUX_RGBImageRec *Img[13];
unsigned int ImgText[13];

and in getImage():

char *Names[13];

If you declare an array of N elements, you can only index from array[0] to array[N-1].

So, if you need to go from array[0] to array[12], the array must be declared as
<type> array[13];

Hope this helps…

Sorry, but I noticed this:

for (int i = 0;i<13;i++) {
Img[i] = auxDIBImageLoad(Names[i]);
glGenTextures(13, &ImgText[2]);
glBindTexture(GL_TEXTURE_2D, ImgText[i]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
//ERROR OCCURS HERE -->
glTexImage2D(GL_TEXTURE_2D, 0, 3, Img[i]->sizeX, Img[i]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, Img[i]->data);
}

This line

	glGenTextures(13, &ImgText[2]);

You should delete it and write this out of the loop:

glGenTextures(13, &ImgText[0]);

or write this inside the loop:

glGenTextures(1, &ImgText[i]);

That’s because, in your code, every time the loop runs you try to generate names for 13 textures in ImgText, starting from the third position of an array that has only 12 positions…
Good luck!

Well, I moved some stuff around. I found out it didn’t like the [0] element either, so I made the arrays size 14 and use indices 1-13.
I also noticed the problem with glGenTextures(13, &ImgText[2]); and corrected it.
the new code looks more like this:

AUX_RGBImageRec *getImage(int i)
{
Img = auxDIBImageLoad(Names[i]);
glGenTextures(1, &ImgText[0]);
glBindTexture(GL_TEXTURE_2D, ImgText[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, 3, Img->sizeX, Img->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, Img->data);
return Img;
}

void ClearTexture() //Added this but it doesn’t help
{ //Memory is food for froo and he eats like a fat guy at a buffet!!
if (Img) // If Texture Exists
{
if (Img->data) // If Texture Image Exists
{
free(Img->data); // Free The Texture Image Memory
}

  free(Img);							// Free The Image Structure

}
}

glTranslatef(0,0,-2.75);
glShadeModel(GL_SMOOTH);
getImage(3);
glutSolidDodecahedron();
ClearTexture();

This works but it turns out to be a huge memory hog. When I change the camera position or just about anytime the display is re-rendered it starts sucking up memory.
I will probably turn in what I have for the class, since it does meet the specs, but I would like to know what I could do to plug these memory leaks.

Mike,
in order to use the glaux.h include you have to add the glaux.lib to the project settings in Visual C++.
Project->Settings->Link->Object/library modules: add glaux.lib

If you don’t, you will get a link error when you try to compile.

Originally posted by Sampeorna:
[quote]
glTranslatef(0,0,-2.75);
glShadeModel(GL_SMOOTH);
getImage(3);
glutSolidDodecahedron();
ClearTexture();
[/quote]

Is this stuff inside your drawing routine? If so, it leaks a lot of memory simply because you’re loading the texture from disk every time you draw the scene!

Do all the getImage() calls you need during initialization; then in your drawing routine, before calling glutSolidDodecahedron(), simply bind the texture by its name…

I suggest you make these changes to your code:

//first of all you need an ImgText array declared this way:
GLuint ImgText[14]; //since you want to use positions 1-13

//…

AUX_RGBImageRec *getImage(int i)
{
Img = auxDIBImageLoad(Names[i]);
glGenTextures(1, &ImgText[i]);
glBindTexture(GL_TEXTURE_2D, ImgText[i]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, 3, Img->sizeX, Img->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, Img->data);
return Img;
}

//call getImage once per texture in your init function

//then in your drawing routine
glTranslatef(0,0,-2.75);
glShadeModel(GL_SMOOTH);
glBindTexture(GL_TEXTURE_2D, ImgText[3]); //instead of getImage(3)
glutSolidDodecahedron();

Doing so you load your textures once when your program starts, then you simply bind them when you have to use them.
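One more thought on the leaks: after glTexImage2D, GL keeps its own copy of the pixels, so the AUX image can be freed right after the upload, and the texture names can be deleted once at shutdown with glDeleteTextures. A hedged sketch, assuming the Img/Names/ImgText scheme from this thread (loadTexture and shutdownTextures are names I made up):

```c
/* Load one texture at init time and free the AUX image immediately;
   GL owns its own copy of the pixel data after glTexImage2D. */
void loadTexture(int i)
{
    AUX_RGBImageRec *img = auxDIBImageLoad(Names[i]);
    if (!img) return;                        /* load failed; nothing to free */

    glGenTextures(1, &ImgText[i]);
    glBindTexture(GL_TEXTURE_2D, ImgText[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, img->sizeX, img->sizeY, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, img->data);

    if (img->data) free(img->data);          /* GL has its copy now */
    free(img);
}

/* Call once at program exit: release the GL texture objects. */
void shutdownTextures(void)
{
    glDeleteTextures(13, &ImgText[1]);       /* names stored in slots 1..13 */
}
```

This way nothing is allocated per frame, so the memory use stays flat no matter how often the scene is redrawn.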

*Note: if my English sucks, it’s not my fault.