Problem with function parameters

I have the following source code for loading TGA files (from NeHe). But when I add one more parameter to the function, the first lines that touch the texture (texture->width = …) raise an EAccessViolation exception. It only happens when I add the parameter, so I have no idea what's wrong. Can somebody please help me?

typedef struct // Create a structure
{
GLubyte *imageData; // Image data (Up to 32 bits)
GLuint bpp; // Image color depth in bits per pixel.
GLuint width; // Image width
GLuint height; // Image height
GLuint texID; // Texture ID used to select a texture
} TextureImage;

bool TEngine::MakeTGATexture(const char *Dir)
{
TextureImage *texture;
GLubyte TGAheader[12]={0,0,2,0,0,0,0,0,0,0,0,0}; // Uncompressed TGA header
GLubyte TGAcompare[12]; // Used to compare TGA header
GLubyte header[6]; // First 6 useful bytes from the header
GLuint bytesPerPixel; // Holds number of bytes per pixel used in the TGA file
GLuint imageSize; // Used to store the image size when setting aside ram
GLuint temp; // Temporary variable
GLuint type=GL_RGBA; // Set the default GL mode to RGBA (32 BPP)

FILE *file = fopen(Dir, "rb");	// Open the TGA file

if(	file==NULL ||								// Does file even exist?
	fread(TGAcompare,1,sizeof(TGAcompare),file)!=sizeof(TGAcompare) ||	// Are there 12 bytes to read?
	memcmp(TGAheader,TGAcompare,sizeof(TGAheader))!=0 ||			// Does the header match what we want?
	fread(header,1,sizeof(header),file)!=sizeof(header))			// If so read next 6 header bytes
{
	if (file == NULL)			// Did the file even exist? *Added Jim Strong*
		return FALSE;			// Return false
	else					// Otherwise
	{
		fclose(file);	// If anything failed, close the file
		return FALSE;	// Return false
	}
}

texture->width  = header[1] * 256 + header[0];	// Determine the TGA width	(highbyte*256+lowbyte)
texture->height = header[3] * 256 + header[2];	// Determine the TGA height	(highbyte*256+lowbyte)

if(	texture->width	<=0	||		// Is the width less than or equal to zero
	texture->height	<=0	||		// Is the height less than or equal to zero
	(header[4]!=24 && header[4]!=32))	// Is the TGA 24 or 32 bit?
{
	fclose(file);				// If anything failed, close the file
	return FALSE;				// Return false
}

texture->bpp	= header[4];			// Grab the TGA's bits per pixel (24 or 32)
bytesPerPixel	= texture->bpp/8;		// Divide by 8 to get the bytes per pixel
imageSize		= texture->width*texture->height*bytesPerPixel;	// Calculate the memory required for the TGA data

texture->imageData=(GLubyte *)malloc(imageSize);			// Reserve memory to hold the TGA data

if(	texture->imageData==NULL ||					// Does the storage memory exist?
	fread(texture->imageData, 1, imageSize, file)!=imageSize)	// Does the image size match the memory reserved?
{
	if(texture->imageData!=NULL)				// Was image data loaded
		free(texture->imageData);			// If so, release the image data

	fclose(file);						// Close the file
	return FALSE;						// Return false
}

for(GLuint i=0; i<imageSize; i+=bytesPerPixel)			// Loop through the image data
{								// Swaps the 1st and 3rd bytes ('R'ed and 'B'lue)
	temp=texture->imageData[i];				// Temporarily store the value at image data 'i'
	texture->imageData[i] = texture->imageData[i + 2];	// Set the 1st byte to the value of the 3rd byte
	texture->imageData[i + 2] = temp;			// Set the 3rd byte to the value in 'temp' (1st byte value)
}

fclose (file);					// Close the file

// Build a texture from the data
glGenTextures(1, &TextureBin[5]);		// Generate OpenGL texture IDs

glBindTexture(GL_TEXTURE_2D, TextureBin[5]);				// Bind our texture
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);	// Linear filtered
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);	// Linear filtered

if (texture[0].bpp==24)		// Was the TGA 24 bits
{
	type=GL_RGB;		// If so set the 'type' to GL_RGB
}

glTexImage2D(GL_TEXTURE_2D, 0, type, texture[0].width, texture[0].height, 0, type, GL_UNSIGNED_BYTE, texture[0].imageData);

return true;		// Texture building went ok, return true

}

I think your problem is that you don’t read TGA images correctly.

First of all, the header consists of 9 WORDs (that’s 18 bytes).
For truecolor images the second WORD should be 2:

unsigned short tgaHead[9];		// 9 WORDs = 18 bytes
memset(tgaHead,0,18);
tgaHead[1] = 2;				// image type 2 = uncompressed truecolor
tgaHead[6] = image_width;		// width in pixels
tgaHead[7] = image_height;		// height in pixels
tgaHead[8] = image_bpp;			// bits per pixel (low byte)
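
Reading it back works the same way; something along these lines (only a sketch off the top of my head, assuming the file is already open, a little-endian machine and no image ID field, so double-check it):

unsigned short tgaHead[9];				// 9 WORDs = 18 bytes

if (fread(tgaHead, 1, 18, file) != 18)			// read the whole header at once
	return FALSE;

if ((tgaHead[1] & 0xFF) != 2)				// low byte of WORD 1 is the image type (2 = uncompressed truecolor)
{
	fclose(file);
	return FALSE;
}

GLuint width  = tgaHead[6];				// WORD 6 = width in pixels
GLuint height = tgaHead[7];				// WORD 7 = height in pixels
GLuint bpp    = tgaHead[8] & 0xFF;			// low byte of WORD 8 = bits per pixel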

Second, you should use the GL_BGRA / GL_BGR texture format to let GL swap the Blue and Red components.
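
Something like this (just a sketch; on older headers the tokens may be spelled GL_BGRA_EXT / GL_BGR_EXT, from the EXT_bgra extension):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,			// internal format stays RGBA
	texture->width, texture->height, 0,
	GL_BGRA, GL_UNSIGNED_BYTE,			// tell GL the source data is ordered BGRA
	texture->imageData);				// no manual red/blue swap needed

For 24-bit files you would pass GL_RGB / GL_BGR instead.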

If you need an image loading library, you can find one in the downloads section of my website. That’s http://nervus.go.ro

Have fun.

Second, you should use the GL_BGRA / GL_BGR texture format to let GL swap the Blue and Red components.

I think that’s a bad idea. If you want your images to render with the proper colors no matter what video card, then I’d swap the colors at load time. It’s not any slower swapping if you do it right.

Originally posted by WhatEver:
I think that’s a bad idea. If you want your images to render with the proper colors no matter what video card, then I’d swap the colors at load time. It’s not any slower swapping if you do it right.

What’s wrong with using the BGR extension? Almost every card in existence supports it (I haven’t found one that doesn’t anyway), even the OpenGL 1.1 software renderer that comes with Windows supports it.
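
If you want to be sure, you can check for it at run time before relying on it; something like this should do (off the top of my head, untested, and GL_BGRA may be GL_BGRA_EXT in older headers):

const char *version    = (const char *)glGetString(GL_VERSION);
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);

// Core in GL 1.2 and later, otherwise look for the EXT_bgra extension string.
// (needs <string.h> and <stdlib.h> for strstr/atof)
bool hasBGRA = (version != NULL && atof(version) >= 1.2) ||
	(extensions != NULL && strstr(extensions, "GL_EXT_bgra") != NULL);

GLenum format = hasBGRA ? GL_BGRA : GL_RGBA;		// fall back to swapping at load time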

If the BGR feature was native then I wouldn’t have a problem with it, but it’s not, it’s an extension.

Converting the colors to RGB at load time ensures no problems with any video card in existence.

But it works OK; I only have a problem when I add another parameter to the function.

I don’t know if you’ve fixed your problem yet, but consider using DevIL, which supports a large variety of image formats.

It also has an OpenGL-like syntax!
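
Loading a texture with it is roughly this (from memory, so check the docs for the exact calls; the file name is just an example):

#include <IL/il.h>
#include <IL/ilut.h>

ilInit();						// start up DevIL
ilutInit();
ilutRenderer(ILUT_OPENGL);				// tell ILUT we are rendering with OpenGL

GLuint texID = ilutGLLoadImage("texture.tga");		// load the file and get back a GL texture ID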

Where can I find it, and how do I use it?

Originally posted by WhatEver:
If the BGR feature was native then I wouldn’t have a problem with it, but it’s not, it’s an extension.

Converting the colors to RGB at load time ensures no problems with any video card in existence.

What’s wrong with extensions?

GL_EXT_bgra is part of the OpenGL 1.2.1 spec, BTW.

The BGR extension is an old pet peeve of mine. When I first started coding with OpenGL 3 years ago, I ran my game on a Gateway computer and a laptop, neither of them had the BGR extension. I didn’t even know what an extension was for that matter so I hadn’t a clue why the colors were all screwed up. I had sent that very same game to a game developer and he asked me why the colors were all screwed up and I couldn’t tell him.

Later I found out it was because that BGR extension was not supported.

When OpenGL 2.0 comes out the whole BGR thing won’t bother me any more.

Anyway, it’s not that big a deal. It was just a suggestion and not something I meant to discuss for this long.

BGR is native as of OpenGL 1.2.x and all cards I know about implement it fine.

There might still be some old cards that only support OpenGL 1.1, but those should be few and far between, and probably not all that significant to any new development effort.