How do I make my texture 16/32 bit in OGL?

Hello.
I was wondering how to express the difference between 32 and 16 bit textures in OpenGL.
I only load TGAs, and I think all of my textures are 24 bit. How could I make them 16 bit for OpenGL at load time?
Does it have to do with the way I read the data, or with the way I use things like GL_RGB (personally I think that only controls my channels)?

What I want to do in the end is to give the user the ability to force textures into 16bit if they want to.

Thanks for any help!

Select a 16 bit internal format in your glTexImage2D call, like GL_RGBA4, GL_RGB5, or GL_RGB5_A1.
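As a rough sketch of what that looks like (assuming your TGA loader hands you 24-bit RGB data in pixels, with width and height; these names are just placeholders, not from the thread):

    #include <GL/gl.h>

    /* Sketch: upload 24-bit RGB source data, but ask the driver to store it
       in a 16-bit internal format. GL_RGB5 is the requested internal storage;
       GL_RGB / GL_UNSIGNED_BYTE describe how the source pixels are laid out. */
    GLuint upload_rgb_as_16bit(const unsigned char *pixels, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }

The internal format only tells the driver how to store the texture; the source format and type arguments still have to match the data you pass in.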

Ah, OK thank you!
Do you know any good resource to read about this? I’ll try the specs now.

Hello.
Could it be that the textures are 16 bit by default? I did not notice any difference when forcing them to 16 bit, but performance did drop when I forced them to 24 bit.

BTW, do you have a TGA file that is truly 24/32 bit?
I find it very hard to see any difference between my textures at 16 vs. 24 bit.
Thanks for the help!

If the internal format of the texture is different from the format of the frame buffer, a conversion is performed to match the format of the frame buffer. When you don't specify any bit depth for the internal format, that is, you use GL_RGB, GL_RGBA and so on, the driver generally uses the same internal format as the frame buffer and no conversion is needed. If your source is 24 bit and the frame buffer is 16 bit, the texture data is generally converted to 16 bit, and vice versa.

Unless you have a really good reason to explicitly specify the internal format, you should let the driver choose the exact format. This means you should stick to the generic formats like GL_RGBA, and not GL_RGBA4, GL_RGB5_A1, or GL_RGBA8.
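If you still want to offer a "force 16 bit textures" option, one way is to switch the internal format on a user setting and otherwise fall back to the generic formats. A sketch (force16bit and has_alpha are hypothetical flags, not names from the thread; the texture is assumed to be bound already):

    #include <GL/gl.h>

    /* Generic formats (GL_RGB/GL_RGBA) let the driver pick the depth,
       usually matching the frame buffer; the sized formats request
       16-bit storage explicitly. */
    void upload_texture(const unsigned char *pixels, int width, int height,
                        int has_alpha, int force16bit)
    {
        GLenum source_format = has_alpha ? GL_RGBA : GL_RGB;
        GLint internal_format;

        if (force16bit)
            internal_format = has_alpha ? GL_RGBA4 : GL_RGB5;
        else
            internal_format = has_alpha ? GL_RGBA : GL_RGB;

        glTexImage2D(GL_TEXTURE_2D, 0, internal_format, width, height, 0,
                     source_format, GL_UNSIGNED_BYTE, pixels);
    }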

Ah, OK, thanks for the hint.
It's just that I think I saw an option in some games that lets you choose the bit depth for textures. Why would they do that?