View Full Version : Pixel Component Order

09-16-2014, 03:53 PM
I'm using the LWJGL library to learn OpenGL. I'm storing my pixel data in an IntBuffer and made a call to glTexImage2D to map the pixels to a texture. I used the IntBuffer to make a checkerboard pattern of blue and white, using the component values (0, 0, 255, 255) to represent blue. When the texture was rendered, the checkerboard came out yellow instead of blue. I then tested all the components individually and found that the representation is in the order ABGR instead of RGBA as I had thought.

I was following a tutorial that used the order of RGBA. It's a GLUT tutorial.

Here is how I place the pixel data:

if ((i / 128 & 16 ^ i % 128 & 16) > 0) {
    checkerBoard.put(i, 255);
    checkerBoard.put(i, (checkerBoard.get(i) << 8) + 255);
    checkerBoard.put(i, (checkerBoard.get(i) << 8) + 255);
    checkerBoard.put(i, (checkerBoard.get(i) << 8) + 255);
} else {
    checkerBoard.put(i, 0);
    checkerBoard.put(i, (checkerBoard.get(i) << 8) + 0);
    checkerBoard.put(i, (checkerBoard.get(i) << 8) + 255);
    checkerBoard.put(i, (checkerBoard.get(i) << 8) + 255);
}

Here is my call to glTexImage2D:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

The buffer lives in another class; pixels refers to checkerBoard.

Am I not thinking about this right, or is this a system-specific big-/little-endian scenario?

Ed Daenar
09-16-2014, 04:13 PM
Without being even remotely familiar with Java, I can give you some pointers. Integers in Java are 32-bit, but you are telling GL that you are uploading GL_RGBA-formatted data as GL_UNSIGNED_BYTE, i.e. packed 8-bit data.

Long story short: try using a ByteBuffer instead when creating your data.
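Something along these lines (a sketch, untested; the class and method names are made up for illustration) builds the checkerboard one byte per component, so the order in memory is exactly what GL_RGBA with GL_UNSIGNED_BYTE expects and endianness never enters into it:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class CheckerBytes {
    // Build a width x height GL_RGBA / GL_UNSIGNED_BYTE checkerboard:
    // blue squares alternating with white ones, 16 pixels per square.
    static ByteBuffer build(int width, int height) {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                boolean white = ((x & 16) ^ (y & 16)) != 0;
                // Components are written in the exact byte order GL reads them.
                buf.put((byte) (white ? 255 : 0)); // R
                buf.put((byte) (white ? 255 : 0)); // G
                buf.put((byte) 255);               // B
                buf.put((byte) 255);               // A
            }
        }
        buf.flip(); // rewind so GL reads from the start
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer pixels = CheckerBytes.build(128, 128);
        System.out.println(pixels.remaining()); // 65536 bytes = 128 * 128 * 4
    }
}
```

The resulting buffer can then be handed straight to glTexImage2D(..., GL_RGBA, GL_UNSIGNED_BYTE, pixels).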

09-20-2014, 05:59 AM
The library I'm using works fine with GL_UNSIGNED_BYTE. I already tried a ByteBuffer and it worked fine. The IntBuffer also worked, but in reverse order. The issue is the way each component is pulled out of the integer: they are read from the least significant 8 bits to the most significant, which reverses the order from what I expected. In the end, I just wanted to know why this wasn't working. Now I know, and I have come up with a couple of alternative ways to make the changes look more logical.
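One of those alternatives looks like this (a sketch only; packRGBA is a helper name I made up, and it assumes a little-endian machine with a native-order IntBuffer): pack the components in reverse so the bytes land in R, G, B, A order in memory.

```java
public class PixelPack {
    // Hypothetical helper: pack RGBA components into one int so that a
    // native-order IntBuffer on a little-endian machine lays the bytes out
    // as R, G, B, A -- exactly what GL_RGBA + GL_UNSIGNED_BYTE reads.
    static int packRGBA(int r, int g, int b, int a) {
        return (a << 24) | (b << 16) | (g << 8) | r;
    }

    public static void main(String[] args) {
        int blue = packRGBA(0, 0, 255, 255);
        // Alpha and blue end up in the two high bytes of the int, which on a
        // little-endian machine are stored last in memory.
        System.out.printf("blue packed = 0x%08X%n", blue); // 0xFFFF0000
    }
}
```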

09-21-2014, 04:50 AM
It's been a while since I've used Java, but I think Java uses big-endian in the JVM, and all those shifts in that tutorial code probably assume a little-endian representation.
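A quick way to see the default: plain java.nio buffers are big-endian no matter what the CPU is, and byte order only bites once native-order buffers get involved (which is what LWJGL's BufferUtils hands out, if I remember right). A minimal sketch:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        // java.nio buffers default to big-endian regardless of the host CPU.
        ByteBuffer big = ByteBuffer.allocate(4);
        big.putInt(0x000000FF);
        // In big-endian layout the 0xFF byte is stored last:
        System.out.printf("%02x %02x %02x %02x%n",
                big.get(0), big.get(1), big.get(2), big.get(3)); // 00 00 00 ff

        // A native-order buffer follows the CPU instead; on little-endian x86
        // the same int would be stored with the 0xFF byte first.
        System.out.println("native order: " + ByteOrder.nativeOrder());
    }
}
```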