Does the graphics card store textures as floats or short ints?

Hi, considering OpenGL needs colours to be fed in as floats (0.0 to 1.0), does that mean the graphics card stores these as floats (4 bytes per channel) or as short ints (1 byte per channel)?

If floats are used, how can we force the card to use ints, thereby reducing texture memory usage?

Apologies if this is a stupid question.

How a texture is stored is an implementation detail you don’t have to know about. If you specify a generic internal format, like GL_RGB or GL_RGBA, when uploading the texture, the driver is free to choose whatever internal format it wants. You can ask the driver for a specific format, like GL_RGBA4 (4 bits per channel, 16 bits in total) or GL_RGBA8 (8 bits per channel, 32 bits in total), but these are just hints and the driver can still choose another format.

So in short, you can’t tell the driver how to store the texture, only give hints.
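Here is a minimal sketch of what that looks like (legacy GL; the 64x64 size and the name "pixels" are just placeholders for your own image data). You request GL_RGBA8 and can then ask the driver which internal format it actually chose for that mip level:

#include <GL/gl.h>
#include <stdio.h>

/* Request GL_RGBA8 for a 64x64 RGBA image, then query what the driver
 * actually allocated for mip level 0. */
GLuint upload_rgba8_texture(const GLubyte *pixels)
{
    GLuint tex;
    GLint chosen;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* GL_RGBA8 is only a request; the driver may pick another layout. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Report the internal format the driver actually used. */
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &chosen);
    printf("internal format chosen by driver: 0x%x\n", (unsigned int)chosen);

    return tex;
}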

And if you want to reduce texture memory usage, consider using texture compression.
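A rough sketch of the simplest way to do that, assuming OpenGL 1.3 or the ARB_texture_compression extension (the enums may live in glext.h on some systems); "pixels", "w" and "h" are placeholders for your own image:

#include <GL/gl.h>

/* Ask the driver to compress the texture itself by passing a generic
 * compressed internal format, then check whether it really did. */
GLuint upload_compressed_texture(const GLubyte *pixels, int w, int h)
{
    GLuint tex;
    GLint compressed = GL_FALSE;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* GL_COMPRESSED_RGBA asks for "some" compressed format; again only a hint. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* GL_TRUE here means the texture really was stored compressed. */
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_COMPRESSED, &compressed);

    return tex;
}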

Hi, considering OpenGL needs colours to be fed in as floats (0.0 to 1.0)…

Not true. There are also versions of glColor for unsigned bytes, and many other data types. The same goes for glTexImage2D: you can specify unsigned bytes, along with other data types.
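For instance, a minimal sketch with unsigned bytes (0..255) for both the current colour and the texel data, so no conversion to 0.0..1.0 is needed on your side ("texels", "w" and "h" are placeholder names):

#include <GL/gl.h>

void use_byte_data(const GLubyte *texels, int w, int h)
{
    /* Current colour from three unsigned bytes (an orange). */
    glColor3ub(255, 128, 0);

    /* Texel data passed as unsigned bytes, one byte per channel. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, texels);
}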

Thanks a lot for the quick answers, and thanks for the tip, D. There are other versions of glColor that take GLbytes, GLubytes, GLushorts, or GLuints as arguments, as you pointed out: I think I’ll use GLuints from now on.

Up to now I was converting my textures from uints to floats at every load…

:smiley:
