Difference between INT_8_8_8_8 and BYTE in textures

Hello,

I’m trying to use some textures with OpenGL and I’m wondering about the difference between the types GL_UNSIGNED_INT_8_8_8_8 and GL_UNSIGNED_BYTE when my buffer is organized as RGBA bytes.

A priori these two types should give the same result, but with GL_UNSIGNED_INT_8_8_8_8 the textures are almost entirely red, while with GL_UNSIGNED_BYTE they are displayed correctly. I understand that with GL_UNSIGNED_INT_8_8_8_8 OpenGL interprets my buffer as ABGR, but I don’t know why…
I searched for documentation on the two types but didn’t find anything about this difference…
Does someone have an explanation?

Thanks in advance!

As far as I know there IS a big difference between GL_UNSIGNED_INT_8_8_8_8 and GL_UNSIGNED_BYTE. For texture storage, the first means the texel values are actual integers, while in the other case the texel values are floating-point values in the interval [0,1]; only the internal storage is fixed point, and the values get converted to normalized floats when you access the texture.

I might be wrong, but I think this is true.

The swizzling for those packed types, which actually refers to the representation order, usually assumes ABGR component ordering; that’s why you get these results.

AFAIK there is no difference. This is just one of many packed formats, present mainly for completeness.

Both types store 8 bits per channel:
RGBA + UNSIGNED_BYTE reads four bytes per pixel.
RGBA + UNSIGNED_INT_8_8_8_8 reads one 32-bit int per pixel.

The difference is that reading an int from memory depends on the endianness of your platform. You will get different results between, e.g., PPC and Intel machines.
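For example, the two upload calls for the same RGBA byte buffer would look roughly like this (just a sketch; width, height and data are placeholder names):

/* data points to width*height*4 bytes laid out R,G,B,A, R,G,B,A, ... */

/* Reads the buffer byte by byte: the first byte of each pixel is red,
   regardless of endianness. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);

/* Reads the buffer as 32-bit ints: the 8 most significant bits of each int
   are taken as red.  On a little-endian CPU the most significant byte is the
   last byte in memory, so the same buffer is seen as A,B,G,R per pixel. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, data);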

Both types are interpreted according to the format argument:

RGBA treats those 8 bits as unsigned normalized data, resulting in [0…1] during sampling.
RGBA_INTEGER treats those 8 bits as unsigned integer data, resulting in [0…255] during sampling (with a usampler in GLSL.)
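A minimal sketch of the two cases, assuming a GL 3.0+ context where integer texture formats are available (width, height and data are placeholder names):

/* Unsigned normalized: texels are sampled as floats in [0, 1]
   with a regular sampler2D. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);

/* Unsigned integer: texels stay integers in [0, 255] and are sampled
   with a usampler2D in GLSL. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, data);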

Thank you for your responses!

I’m currently using OpenGL on Windows with an Intel machine, so I thought the endianness would be the same for both types.
Also, I don’t have a shader system yet, so I use the fixed-function pipeline; maybe that changes the last explanation from arekkusu.

I still don’t understand why OpenGL reverses the channel order with UNSIGNED_INT_8_8_8_8 when I pass the RGBA format.

Intel is little endian. Loading “RGBA” as an int from memory will result in “ABGR” in a register. Use your debugger to look at the data you’re passing to glTexImage to understand it.

In fixed function, all core texture formats will be treated as unsigned normalized data [0…1].

OpenGL’s special type identifiers (like UNSIGNED_INT_8_8_8_8) describe the bit layout from most significant bit to least significant bit, from left to right. These correspond to the channels in the format, from left to right.

I.e. the combination of type UNSIGNED_INT_8_8_8_8 and format RGBA means that the 8 most significant bits represent the red channel, and the 8 least significant bits represent the alpha channel.

However, on little endian machines (e.g. x86) the least significant byte in an integer is stored at the lowest memory address. Thus for the above example, when treating the data as bytes, the alpha channel comes first and the red channel last.
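A tiny standalone C program demonstrates this (the 0x11/0x22/0x33/0x44 channel values are arbitrary):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* RGBA + UNSIGNED_INT_8_8_8_8: red in the 8 most significant bits,
       alpha in the 8 least significant bits. */
    uint32_t pixel = (0x11u << 24) | (0x22u << 16) | (0x33u << 8) | 0x44u;

    const unsigned char *bytes = (const unsigned char *)&pixel;

    /* On a little-endian machine this prints 44 33 22 11:
       alpha sits at the lowest address, red at the highest. */
    printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}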

Thank you for these last explanations. I understand the mistakes I’ve made.

One last point: in the docs I’ve found that there is an UNSIGNED_INT_8_8_8_8_REV type for the case where I want the buffer stored as ints with RGBA order. Does performance decrease if I use a REV type, or is it the same as the non-REV type?
I can prepare the texture buffer in the reversed order if the performance is better.
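If I understand correctly, with the RGBA format the _REV type puts red in the 8 least significant bits, so on my little-endian machine the bytes end up in R,G,B,A order in memory, the same layout UNSIGNED_BYTE expects. The call I’d use would be something like this sketch (width, height and data are just placeholder names):

/* RGBA + GL_UNSIGNED_INT_8_8_8_8_REV: red in the 8 least significant bits,
   alpha in the 8 most significant bits.  On a little-endian machine this
   matches a byte buffer laid out R,G,B,A per pixel, i.e. the same layout
   GL_UNSIGNED_BYTE reads. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, data);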