drawing/mapping textures

I am trying to draw a texture using OpenGL and C++ and am having some problems. Interestingly, it works using Visual Studio on Windows, but does not work on a Sun Solaris Unix machine (has anyone encountered this?). Both versions compile, but when I run the Unix one, I get an image that looks nothing like what it should. I use ImageMagick to read the image file, and then do:

glGenTextures(1, texture);
// set current texture
glBindTexture(GL_TEXTURE_2D, texture[0]);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// minification filter: used when one screen pixel covers more than one texel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// magnification filter: used when one texel covers more than one screen pixel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// how the texture behaves when texture coordinates fall outside the 0..1 range
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

gluBuild2DMipmaps(
    GL_TEXTURE_2D,
    // (no level argument here; level 0 = base image applies to glTexImage2D,
    // where other values are used only when defining mipmaps manually)
    GL_RGBA,                 // internal format for storing the texture; keep RGBA
    image[0].columns(),      // width  (gluBuild2DMipmaps rescales to a power of 2 as needed;
    image[0].rows(),         //  height; only glTexImage2D requires powers of 2 + 2*border)
    // (no border argument here either; that is also glTexImage2D-only)
    GL_BGRA_EXT,             // format in which ImageMagick stores pixels
    GL_UNSIGNED_BYTE,        // 8 bits per color component per pixel, as used in ImageMagick
    image[0].getPixels(0, 0, image[0].columns(), image[0].rows())  // pointer to pixel data
);

and then in my Draw I do:

glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_BLEND);
glDisable(GL_ALPHA_TEST);
// displaying images using texture mapping
// the image scales and moves with the polygon;
// if the window is resized, the image is resized
// draw a texture mapped rectangle with animated texture
glPushMatrix();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTranslatef(0, 0, 0);
glScalef(3, 3, 1);
glColor3f(0, 0, 1);  // with GL_DECAL the texture is painted over this color anyway
glBegin(GL_POLYGON);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();
glDisable(GL_TEXTURE_2D);
glPopMatrix();

glutSwapBuffers();

I have also tried using glTexImage2D instead of gluBuild2DMipmaps, but both produce the same result.
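The glTexImage2D version was essentially the same call with the level and border arguments filled in (the two commented-out lines above):

glTexImage2D(
    GL_TEXTURE_2D,
    0,                   // resolution level: 0 = base image
    GL_RGBA,             // internal format
    image[0].columns(),  // width: glTexImage2D does require a power of 2 (+ 2*border)
    image[0].rows(),     // height: same requirement
    0,                   // border
    GL_BGRA_EXT,
    GL_UNSIGNED_BYTE,
    image[0].getPixels(0, 0, image[0].columns(), image[0].rows())
);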

As I mentioned, it draws a picture that looks nothing like the image I read with ImageMagick.

Thanks, any help is really appreciated,
-Josh

Is this a Solaris SPARC machine?

Are you familiar with the Big Endian vs Little Endian problem?

The byte order of the image files you are loading is not guaranteed to be the same on different platforms. x86 Windows machines generally use the little-endian layout, which is arguably the less natural way to store a multi-byte primitive: each byte is stored from least significant to most significant. There is a Wikipedia entry on big endian vs. little endian which will probably explain this better than I can, but I will try.

Suppose the bytes of a larger value such as an int are, from most significant to least significant, 1, 2, 3, 4. A big-endian machine stores them in memory as you would expect: 1, 2, 3, 4. A little-endian machine stores them as 4, 3, 2, 1. Because the individual bytes end up in a different order on a different platform, you will probably have to do some byte swapping to make the data come out the same.
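If you want a quick way to check which kind of machine you are on, a little test program like this (just a sketch) will tell you:

#include <stdio.h>

int main(void)
{
    unsigned int value = 0x01020304;  // assumes a 32-bit int
    unsigned char *bytes = (unsigned char *)&value;
    // big-endian (e.g. SPARC):  bytes[0] == 0x01
    // little-endian (e.g. x86): bytes[0] == 0x04
    printf("this machine is %s-endian\n",
           bytes[0] == 0x01 ? "big" : "little");
    return 0;
}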

Generally SPARC, PowerPC, and most other RISC systems use the big-endian layout (Itanium can run either way, depending on the OS), while the humble x86 uses little endian.

I have not had to do this myself or I would give you more guidance. Perhaps someone else can give you more hands-on help.
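That said, if your pixel data does need swapping, reversing each 32-bit pixel in place would look something like this (only a sketch; whether it is the right fix depends on how ImageMagick packs its pixels on each platform):

#include <stdint.h>
#include <stddef.h>

// Reverse the byte order of each tightly packed 4-byte pixel in place,
// e.g. the bytes B,G,R,A of one 32-bit word become A,R,G,B.
void swap_pixels_32(uint32_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        uint32_t p = pixels[i];
        pixels[i] =  (p >> 24)
                  | ((p >>  8) & 0x0000FF00u)
                  | ((p <<  8) & 0x00FF0000u)
                  |  (p << 24);
    }
}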

My OpenGL Superbible has this:

// Do byte swap for big vs little endian
#ifdef __APPLE__
BYTE_SWAP(tgaHeader.colorMapStart);
BYTE_SWAP(tgaHeader.colorMapLength);
BYTE_SWAP(tgaHeader.xstart);
BYTE_SWAP(tgaHeader.ystart);
BYTE_SWAP(tgaHeader.width);
BYTE_SWAP(tgaHeader.height);
#endif

The byte-swap macro it uses is:

///////////////////////////////////////////////////////
// Macros for big/little endian happiness
#define BYTE_SWAP(x) x = ((x) >> 8) + ((x) << 8)
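Note that this macro only works for 16-bit fields like the shorts in the TGA header: on a wider type the left shift is not truncated, so the result comes out wrong. For 32-bit values such as packed pixels you would need a full four-byte swap along the lines of the sketch above.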