GL_UNSIGNED_INT_8_8_8_8 != GL_UNSIGNED_BYTE?

Hi,

Can someone please explain the difference between the following two lines:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
and :
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8, data);

I get different results, while I thought they should be the same.
I guess I’m misunderstanding something in the spec, but what?!

Thanks in advance,

Endianness. One of the most fun sources of trouble of our time.
Try GL_UNSIGNED_INT_8_8_8_8_REV.
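To make the point concrete, here is a small standalone illustration (my own sketch, not from the spec; plain C, assuming a little-endian x86 machine):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* Four bytes in memory, in the order GL_UNSIGNED_BYTE walks them: 11 22 33 44 */
    uint8_t bytes[4] = { 0x11, 0x22, 0x33, 0x44 };
    uint32_t packed;
    memcpy(&packed, bytes, sizeof packed);

    /* GL_UNSIGNED_INT_8_8_8_8 takes the FIRST component from the most
       significant byte of this uint32. On little-endian x86 that is 0x44,
       the LAST byte in memory, so the channels come out reversed compared
       to GL_UNSIGNED_BYTE. GL_UNSIGNED_INT_8_8_8_8_REV flips it back. */
    printf("packed = 0x%08X\n", (unsigned)packed);   /* prints 0x44332211 on x86 */
    return 0;
}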

Doh! So easy, yet so blind I was.
Thanks a lot!

Am I correct in assuming that GL_UNSIGNED_INT_8_8_8_8_REV & GL_UNSIGNED_INT_8_8_8_8 are both undefined on Windows? I actually have a case where I have to deal with image data in ARGB. I’ve tried both (using values from a non-Windows gl.h) and they both behave like GL_UNSIGNED_BYTE.

Also, I guess this is related: is there a way to force a channel bit depth on BGRA_EXT, as with GL_RGB8? There were a few recent ATI drivers that defaulted to 4 bits per channel on textures defined as BGRA_EXT. I’m happy to note that the most recent ATI drivers no longer do this…

Windows has nothing to do with this. If you don’t have the enums, you can pick them out of the BGRA_EXT extension spec and add them manually, or preferably just download a current glext.h. As you’ve already done this and it still doesn’t seem to work, let me reassure you that ATI’s current drivers do support this (and have done so for as long as I can remember).
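For reference, the token values in a current glext.h are (worth double-checking against your own copy):

/* Guarded so they can coexist with headers that already define them */
#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT                  0x80E1
#endif
#ifndef GL_UNSIGNED_INT_8_8_8_8
#define GL_UNSIGNED_INT_8_8_8_8      0x8035
#endif
#ifndef GL_UNSIGNED_INT_8_8_8_8_REV
#define GL_UNSIGNED_INT_8_8_8_8_REV  0x8367
#endif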

Should look something like this:

glTexImage2D(GL_TEXTURE_2D,level,GL_RGBA,width,height,0,
    [b]GL_RGBA[/b],[b]GL_UNSIGNED_INT_8_8_8_8_REV[/b],data);

Sizing the internal format is a separate issue. The per-channel bit depth is controlled by the “internal format” argument to glTexImage2D and friends. The in-app-memory “format” and “type” arguments just specify the layout of the data you’re handing in.

Relevant snippet:

glTexImage2D(GL_TEXTURE_2D,level,[b]GL_RGBA8[/b],width,height,0,
    [b]GL_BGRA_EXT[/b],GL_UNSIGNED_BYTE,data);

The “internal format” of RGBA8 doesn’t imply that the implementation stores texture data in RGBA order. It may in fact be BGRA. For all intents and purposes, you really don’t need to know.
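If you want to verify what the driver actually allocated, you can query the per-channel sizes after uploading the texture (a sketch, assuming the texture is bound and the context is current):

GLint r, g, b, a;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &r);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &g);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &b);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &a);
/* With GL_RGBA8 as the internal format you'd expect 8/8/8/8 here;
   4/4/4/4 would confirm the 16-bit fallback mentioned above. */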

Hmm, well, I have glext.h, and I’ve tried both GL_UNSIGNED_INT_8_8_8_8_REV and GL_UNSIGNED_INT_8_8_8_8 as you described (except that it’s in a glReadPixels call). On a Mac the two types produce reversed and regular channel ordering as expected, but on Windows, on both ATI and NVIDIA, they behave the same… I must be doing something stupid, if you’re sure they in fact work.

I don’t think you followed my other point. If you define a texture as GL_RGBA with the two previous ATI drivers, you will see your 32-bit textures with 16-bit artifacts. (On their other drivers you can fix this by setting “texture quality” to high in the display properties, but on those drivers that doesn’t work.) I was told to use GL_RGBA8 to explicitly set the internal texture representation rather than rely on the default internal representation you get with plain GL_RGBA. The problem is that our data is in GL_BGRA, and with that channel ordering I don’t appear to have any explicit way of setting the internal format?

I obviously don’t want 32-bit textures truncated to 16 bits by the driver.

Originally posted by vmh5:
Hmm, well, I have glext.h, and I’ve tried both GL_UNSIGNED_INT_8_8_8_8_REV and GL_UNSIGNED_INT_8_8_8_8 as you described (except that it’s in a glReadPixels call). On a Mac the two types produce reversed and regular channel ordering as expected, but on Windows, on both ATI and NVIDIA, they behave the same… I must be doing something stupid, if you’re sure they in fact work.

This seems like a driver bug. It might be related to the fact that Mac CPUs are big-endian, while x86 CPUs are little-endian.
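For what it’s worth, here is a quick way to check which layout a driver is really using (a hypothetical test of my own, assuming a valid context and an 8-bit RGBA framebuffer):

GLubyte as_bytes[4];
GLuint  as_packed;

/* Read the same pixel with both type tokens. */
glReadPixels(0, 0, 1, 1, GL_BGRA_EXT, GL_UNSIGNED_BYTE, as_bytes);
glReadPixels(0, 0, 1, 1, GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8, &as_packed);

/* On little-endian x86 a conforming driver should return the packed value
   with its bytes in the opposite memory order from the GL_UNSIGNED_BYTE
   read; identical memory contents would point to the bug described above. */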

[b]I don’t think you followed my other point. If you define a texture as GL_RGBA with the two previous ATI drivers, you will see your 32-bit textures with 16-bit artifacts. (On their other drivers you can fix this by setting “texture quality” to high in the display properties, but on those drivers that doesn’t work.) I was told to use GL_RGBA8 to explicitly set the internal texture representation rather than rely on the default internal representation you get with plain GL_RGBA. The problem is that our data is in GL_BGRA, and with that channel ordering I don’t appear to have any explicit way of setting the internal format?

I obviously don’t want 32-bit textures truncated to 16 bits by the driver.[/b]

There is no internal format GL_BGRA. And there’s no need for it. If your data is ordered BGRA, use GL_BGRA as the format parameter and GL_RGBA8 as the internal format parameter.
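Something along these lines (a sketch; width, height and data are whatever you already have):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, data);

That keeps the source data in BGRA order while explicitly requesting 8 bits per channel for the internal storage.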