choosing texture colour depth

Hi,

I noticed with my GeForce 4 that in the display properties you can force OpenGL to use 16-bit textures…

Does this mean that if I load a texture and send it to OpenGL as 32-bit, OpenGL will convert it to 16-bit?

For example,
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA…)
will work?
It does, but I just want to make sure I am not missing something.

Let’s first cover these things without that driver override.

The internal format argument to TexImage can be either a “base format” or a “sized format”. GL_RGBA is a base format, as are GL_RGB, GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_INTENSITY and - in newer revs of the core spec - GL_DEPTH_COMPONENT. If you use one of these for the internal format, the driver will give you some texture format that fits. You can’t predict the actual resolution of the texture.
(though you can query it with GetTexLevelParameter)
Traditionally, you’ll get a texture resolution equal to your display color depth, but this needn’t be true for all implementations and you shouldn’t rely on it.
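To make that query concrete, here’s a minimal sketch (the texture id and pixel pointer are placeholders, and it assumes a current GL context): create a texture with the base format GL_RGBA, then ask the driver what component sizes it actually chose.

GLint redBits, greenBits, blueBits, alphaBits;

glBindTexture(GL_TEXTURE_2D, texId);  /* texId: a texture object you created earlier */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,  /* base format: the driver picks the resolution */
             256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &greenBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE, &blueBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &alphaBits);
/* e.g. 8/8/8/8, 5/6/5/0 or 4/4/4/4, depending on the driver and its settings */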

Base formats are listed in table 3.15 of the spec. In the 1.5 version, that’s on page 128 (though Acrobat Reader will insist it’s page 141).

If you want more control over texture resolution, you need to use sized formats, e.g. GL_RGBA8, GL_RGB5, etc. That’s table 3.16 on the next page. Take note of two things:
1) OpenGL image format conversions are automatic, i.e. you can load an image file as RGB8 (24 bpp) and tell OpenGL to convert it to something else, such as GL_RGB5, on the fly when calling TexImage2D:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);  /* pixels: your RGB8 image data */

2) The internal format parameter is still just a hint. Some formats may not be supported by the hardware (e.g. GL_ALPHA16 is hardly available anywhere), so they are silently converted to something else that is supported. In this case you may get less precision than requested, but you may also get more precision than you requested. Most prominently, GL_RGB5 on most hardware I’ve seen is actually R5G6B5.
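If you want to see point 2 in action, here’s a quick sketch (same caveats as above about the placeholder pixel pointer): request GL_RGB5 and then query the green component size; on hardware that really stores it as R5G6B5 you’ll read back 6 instead of 5.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);

GLint greenBits;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &greenBits);
/* typically 6 here, because the driver promoted GL_RGB5 to R5G6B5 */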

Now, you can request component resolution with a sized internal format, or you can let the driver give you something by using a base internal format.

This driver control you mentioned will affect both of these approaches. This behaviour is not covered by the GL spec and one might argue that, by activating this switch, the GL driver is technically in violation of the spec - because the hardware could support higher component resolutions.

If it works for you, fine. If you intend to give your software to others, you should IMO explicitly request the internal format you want. Not everyone uses the same driver settings. Forcing 16-bit textures via the driver controls is primarily a performance tweak for applications that use only base formats.

Wonderful response!

So, as I understand from your post, OpenGL can convert an image to the internal format you request (or to the closest supported format). I think that sums up what I was trying to do.

The problem I was running into was when using Intel onboard graphics. It only supports 16-bit textures, so I was trying to theorize what would happen if you called glTexImage2D with GL_RGB8 as the internal format and a 24-bit image. I couldn’t try it myself because I don’t have the Intel onboard hardware, so I wondered if the display properties setting above would emulate this kind of behaviour.

So I guess now my question is:

Would it be safe to do as I said, or should I convert the image myself into the proper format and submit it to OpenGL like that?

Is there a way to determine programmatically what texture color resolution is supported by the video card?