
View Full Version : GL_UNSIGNED_INT_8_8_8_8 != GL_UNSIGNED_BYTE ?



vincoof
09-04-2003, 04:17 AM
Hi,

Can someone please explain the difference between the two following lines :
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
and :
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8, data);

I get different results while I thought it should be the same.
I guess I'm understanding something wrong in the spec, but what ?!

Thanks in advance,

zeckensack
09-04-2003, 04:40 AM
Endianness. One of the most fun sources of trouble of our time ;)
Try GL_UNSIGNED_INT_8_8_8_8_REV.

vincoof
09-04-2003, 05:02 AM
Doh ! So easy, yet so blind I was.
Thanks a lot !

vmh5
10-31-2003, 04:55 AM
Am I correct in assuming that GL_UNSIGNED_INT_8_8_8_8_REV and GL_UNSIGNED_INT_8_8_8_8 are both undefined on Windows? I actually have a case where I have to deal with image data in ARGB. I've tried both (using the enum values from a non-Windows gl.h) and they both behave like GL_UNSIGNED_BYTE.

Also, I guess this is related: is there a way to force a channel bit depth on GL_BGRA_EXT, as GL_RGB8 does for GL_RGB? A few recent ATI drivers defaulted to 4 bits per channel on textures defined as GL_BGRA_EXT. I'm happy to note that the most recent ATI drivers no longer do this...

zeckensack
10-31-2003, 05:22 AM
Windows has nothing to do with this. If you don't have the enums, you can pick them out of the BGRA_EXT extension spec and add them manually, or preferably just download a current glext.h (http://oss.sgi.com/projects/ogl-sample/registry/) . As you've already done this, and it still doesn't seem to work, let me reassure you that ATI's current drivers do support this (and have done so for as long as I can remember).

Should look something like

glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, width, height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8_REV, data);

Sizing the internal format is a separate issue. Bits per channel are controlled by the "internal format" argument to glTexImage2D and friends. The in-app-memory "format" and "type" arguments only specify the layout of the data being read from your application.

relevant snippet:

glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, width, height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, data);

The "internal format" of RGBA8 doesn't imply that the implementation stores texture data in RGBA order. It may in fact be BGRA. For all intents and purposes, you really don't need to know.

vmh5
10-31-2003, 07:14 AM
Hmm, well, I have glext.h. I've tried both GL_UNSIGNED_INT_8_8_8_8_REV and GL_UNSIGNED_INT_8_8_8_8 as you described (except that it's in a glReadPixels call). On a Mac the two types produce reversed and normally ordered channels as expected, but on Windows (both ATI and NVidia) they give the same result... I must be doing something stupid if you're sure they in fact work.

I don't think you followed my other point. If you define a texture as GL_RGBA with the two previous ATI drivers, you will see your 32-bit textures with 16-bit artifacts. (On their other drivers you can fix this by setting "texture quality" to high in the display properties, but on those drivers that doesn't work.) I was told to use GL_RGBA8 to explicitly set the internal texture representation rather than rely on the default you get with plain GL_RGBA. The problem is that our data is in BGRA order, and with that channel ordering I don't appear to have any explicit way of setting the internal format?

I obviously don't want 32-bit textures truncated to 16-bit by the driver.

Xmas
10-31-2003, 08:38 PM
Originally posted by vmh5:
Hmm, well, I have glext.h. I've tried both GL_UNSIGNED_INT_8_8_8_8_REV and GL_UNSIGNED_INT_8_8_8_8 as you described (except that it's in a glReadPixels call). On a Mac the two types produce reversed and normally ordered channels as expected, but on Windows (both ATI and NVidia) they give the same result... I must be doing something stupid if you're sure they in fact work.
This seems like a bug. It might be related to the fact that the CPUs in Mac systems (PowerPC) are big-endian, while x86 CPUs are little-endian.


I don't think you followed my other point. If you define a texture as GL_RGBA with the two previous ATI drivers, you will see your 32-bit textures with 16-bit artifacts. (On their other drivers you can fix this by setting "texture quality" to high in the display properties, but on those drivers that doesn't work.) I was told to use GL_RGBA8 to explicitly set the internal texture representation rather than rely on the default you get with plain GL_RGBA. The problem is that our data is in BGRA order, and with that channel ordering I don't appear to have any explicit way of setting the internal format?

I obviously don't want 32-bit textures truncated to 16-bit by the driver.
There is no internal format GL_BGRA, and there's no need for one. If your data is ordered BGRA, use GL_BGRA as the format parameter and GL_RGBA8 as the internal format parameter.