About 16-bit textures

I am working with a 16-bit texture stored in a particular file format. The pixels are stored one by one, uncompressed, with a 5-5-5-1 bit assignment. But when I call glTexImage2D(), I always get an error.
Here is my source:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
An illegal operation error occurs when this function runs.
I am not sure about the format and type parameters; should I set the type to GL_UNSIGNED_SHORT? I have tried that, but the error remains.
My code works with 24-bit and 32-bit textures when the components parameter is set to 3 or 4.
I am using VC6.0 and the OpenGL library that ships with that version of VC.
Can anyone help me?

Well, the third parameter (GL_RGB5_A1) does not describe the way your data is laid out, but the format OpenGL should convert your texture to. I guess you want GL_RGBA here. And the seventh parameter (GL_RGBA) is where you tell OpenGL how your texture is composed, so just try GL_RGB5_A1 there.

-Lev

P.S. If you look at the glTexImage2D specs, you can clearly see that the third parameter is the "internal format" and the seventh is "(your) format".

You will also need to change GL_UNSIGNED_BYTE to GL_UNSIGNED_SHORT.

Lev: the internal format is what is actually used in video memory. It is the 3rd parameter. The format of the texture (7th argument) describes the format of the pixel array you pass by pointer to the function. I.e., the 3rd argument should really be GL_RGB5_A1 and the 7th one GL_RGBA. I don't see anything wrong with the call, Nil; maybe the internal format is not supported by your card? I know this one isn't working on mine.
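One quick way to check that, if the driver supports the standard proxy texture mechanism, is to do a trial specification against GL_PROXY_TEXTURE_2D and read back the width; if it comes back 0, the driver rejected that combination. Just a sketch, not tested on a G200:

GLint proxyWidth = 0;
/* proxy textures take no data, so the pointer can be NULL */
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGB5_A1, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &proxyWidth);
if (proxyWidth == 0)
{
    /* a 256x256 GL_RGB5_A1 texture cannot be created on this implementation */
}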

DFrey: no, it depends on how the pixels are packed in the "data" parameter. If you are using GL_UNSIGNED_BYTE, each pixel is described with 4 bytes (for GL_RGBA). If you are using GL_UNSIGNED_SHORT, each pixel is described with 4 shorts, or 8 bytes. I don't think that is a commonly used feature, and it is probably not what Nil is using. I'd rather check that the pointer to the data is valid.
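Just to illustrate how much data the driver will read from "data" for a 256x256 GL_RGBA image in each case (assuming no unusual pixel-store settings):

int texels = 256 * 256;
int bytesIfUByte  = texels * 4 * 1;   /* GL_UNSIGNED_BYTE : 262144 bytes, 4 bytes per pixel  */
int bytesIfUShort = texels * 4 * 2;   /* GL_UNSIGNED_SHORT: 524288 bytes, 4 shorts per pixel */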

Y.

Nil also posted this in the beginner board, and as I replied to him there, the first thing to check is that the pointer is valid. I'm surprised nobody else mentioned that before Ysaneya. Whenever I see exception errors, I always check whether I'm accessing memory in the wrong way before assuming there is some weird error caused by the parameters. (I would think errors like that would result in an error code, not a crash.)

Oops, I meant that it should have been GL_UNSIGNED_SHORT_5_5_5_1 or GL_UNSIGNED_SHORT_1_5_5_5_REV (depending on how the components are stored in the file); those clearly indicate that all components are packed into a single unsigned short.
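With those, the original call would become something like this (just a sketch, keeping the 256x256 size and the "data" pointer from the first post; the constants are 0x8034 and 0x8366 in the 1.2 headers):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, data);      /* each short: RRRRRGGGGGBBBBBA */

/* or, if the alpha bit sits at the top of each short: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_SHORT_1_5_5_5_REV, data);  /* each short: ABBBBBGGGGGRRRRR */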

[This message has been edited by DFrey (edited 01-17-2001).]

Well, I am using a G200, and I looked up the DirectX properties in my control panel; it seems my display card only supports 16-bit color with a 5-6-5 bit assignment. But I do not care what kind of texture OpenGL uses to render my polygon; can I just use glTexImage2D to read my data and create a texture of whatever kind it likes?
I will check my code again to see if the data pointer is valid, though I have checked it many times before.

I dug into the Red Book again and found that Lev is right: the internalformat describes the pixel format in video memory. Since I do not care what format it uses, I set internalformat to GL_RGBA to let OpenGL decide the internal format. But what should the "format" parameter be? Lev suggested setting format to GL_RGB5_A1, and I tried that, but a GL_INVALID_ENUM error occurs, so GL_RGB5_A1 should not go into "format". Then what should I use?
I set the format to GL_RGB to let OpenGL read the data as plain color components, and I set the type to GL_UNSIGNED_SHORT. That does not crash, but the texture comes out as different depths of red. That at least means my data is valid.
I cannot find the constant GL_UNSIGNED_SHORT_5_5_5_1 that DFrey mentioned in gl.h or in the Red Book. Can you tell me more about it? (I am using OpenGL in VC; I think its version is 1.1.)

I have dug through many programming sites, but I cannot find a single line of source about loading a 16-bit texture. Does anyone know the answer?

[This message has been edited by Nil_z (edited 01-17-2001).]

Ah, yes, GL_UNSIGNED_SHORT_5_5_5_1 is defined in OpenGL 1.2. You can get an updated OpenGL header from SGI. Though in order to use the constant, you'd have to ensure that you were running on OpenGL 1.2; if not, expand your file's 5551 short format to RGBA8 format, then just use GL_UNSIGNED_BYTE with the converted data.
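For example, something along these lines (only a sketch; Upload5551 is a made-up name, and it assumes each pixel in the file is packed as RRRRRGGGGGBBBBBA with the alpha flag in bit 0; flip the shifts if your file uses the _REV layout):

#include <stdlib.h>
#include <windows.h>   /* VC6 needs this before gl.h */
#include <GL/gl.h>

/* Expand 5-5-5-1 shorts to 8-8-8-8 bytes, then upload with GL_UNSIGNED_BYTE. */
void Upload5551(const unsigned short *src, int width, int height)
{
    unsigned char *rgba = (unsigned char *)malloc(width * height * 4);
    int i;

    if (rgba == NULL)
        return;

    for (i = 0; i < width * height; i++)
    {
        unsigned short p = src[i];
        rgba[i * 4 + 0] = (unsigned char)(((p >> 11) & 0x1F) << 3);  /* R: 5 bits -> 8 bits  */
        rgba[i * 4 + 1] = (unsigned char)(((p >>  6) & 0x1F) << 3);  /* G                    */
        rgba[i * 4 + 2] = (unsigned char)(((p >>  1) & 0x1F) << 3);  /* B                    */
        rgba[i * 4 + 3] = (unsigned char)((p & 0x1) ? 255 : 0);      /* A: 1 bit -> 0 or 255 */
    }

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    free(rgba);
}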

[This message has been edited by DFrey (edited 01-17-2001).]

Well, I have done that, and now it works, but it is rather painful to allocate a block of memory, unpack the 16-bit data, create the texture, and free the memory for every 16-bit texture. Must I unpack the data myself?

The driver will do the format conversions for you. format and type are the input format, and internalformat is the texture format. So, for example, you can use GL_UNSIGNED_BYTE/GL_RGBA data for any texture format: GL_RGBA, GL_RGBA8, GL_RGB5_A1, GL_RGBA4, GL_RGB, GL_COMPRESSED_RGB_ARB, even GL_LUMINANCE or GL_INTENSITY. (In these last two cases, the red will be used as the luminance or intensity component.)
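So once the data has been put into GL_RGBA/GL_UNSIGNED_BYTE form (the "rgba" pointer below is assumed to hold it), only the internalformat argument changes between these alternatives:

/* alternative internal formats for the same input data; the driver converts */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,     256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1,   256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);  /* red channel becomes luminance */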

- Matt

I think I wrote the source in my first post here. It is exactly what mcraighead mentioned, but it causes an illegal operation error.