Which texture formats are usually natively supported?

Currently I am rewriting my texture manager.

I would like to use RGB, RGBA and greyscale (intensity) images in 16-bit and 32-bit precision. Plus, I want to be able to use S3TC compression.

So for 16-bit I currently use:

GL_RGB5 - for RGB images
GL_RGBA4 - for RGBA images
GL_INTENSITY4 - for greyscale images
GL_COMPRESSED_RGB_S3TC_DXT1_EXT - for compressed RGB images
GL_COMPRESSED_RGBA_S3TC_DXT3_EXT - for compressed RGBA images

And in 32-bit mode I use:
GL_RGB8 - for RGB images
GL_RGBA8 - for RGBA images
GL_INTENSITY8 - for greyscale images
GL_COMPRESSED_RGB_S3TC_DXT1_EXT - for compressed RGB images
GL_COMPRESSED_RGBA_S3TC_DXT5_EXT - for compressed RGBA images

I use compression only for 2D and cubemap textures; for 1D and 3D textures I don't use compression.
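To make it concrete, the mapping I have in mind looks roughly like this (just a sketch; the helper and type names are made up for illustration, not real code from my manager):

/* Sketch of the internal-format selection described above. */
#include <GL/gl.h>
#include <GL/glext.h>

typedef enum { TEX_RGB, TEX_RGBA, TEX_GREYSCALE } TexChannels;

GLenum choose_internal_format(TexChannels channels, int use_32bit, int compressed)
{
    if (compressed && channels != TEX_GREYSCALE) {
        /* DXT1 for RGB; DXT3 in 16-bit mode, DXT5 in 32-bit mode for RGBA */
        if (channels == TEX_RGB)
            return GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
        return use_32bit ? GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
                         : GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
    }
    switch (channels) {
    case TEX_RGB:  return use_32bit ? GL_RGB8       : GL_RGB5;
    case TEX_RGBA: return use_32bit ? GL_RGBA8      : GL_RGBA4;
    default:       return use_32bit ? GL_INTENSITY8 : GL_INTENSITY4;
    }
}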

Can someone tell me whether any of the texture formats I use are not natively supported by most graphics cards?
For example, I am not sure whether GL_RGB5, GL_INTENSITY4 and GL_RGB8 will really be faster, since they don't have a common alignment. And how about the GL_INTENSITY16 format? Is that useful?

Or does a card that cannot support fast texture access for a given format simply fall back to another format?

Thanks in advance,
Jan.

AFAIK, most (all?) drivers will fall back to a lower precision internal format when the selected format isn’t supported.

I use 16bit/channel (GL_RGB16) formats for my HDR textures, but that’s only supported on the more recent cards (ATI R3XX, and nVidia NV3X).

Jan,
you must check for the presence of the ARB_texture_compression extension before using compressed formats.
For unsupported formats the driver will just do a conversion to Something Else™; you needn't worry about this.
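For the extension check, something along these lines would do (just a sketch; note the DXT enums themselves are defined by GL_EXT_texture_compression_s3tc, and a robust version should match whole extension tokens rather than substrings):

#include <string.h>
#include <GL/gl.h>

/* Returns non-zero if the current context advertises the named extension. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

static int s3tc_usable(void)
{
    return has_extension("GL_ARB_texture_compression") &&
           has_extension("GL_EXT_texture_compression_s3tc");
}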

GL_RGB5 yields R5G6B5 on NV/ATI cards, others may do R5G5B5 with an unused bit thrown in. In both cases, alignment is perfect.

[b]Can someone tell me whether any of the texture formats I use are not natively supported by most graphics cards?[/b]
Do you want to hardcode this? I guess figuring it out at run time is pretty hard (if it's even possible, which I think it is not).

[b]I use 16bit/channel (GL_RGB16) formats for my HDR textures, but that’s only supported on the more recent cards (ATI R3XX, and nVidia NV3X).[/b]

Ack! I had hoped 16-bit integer formats clamped to [0,1] were widely supported by now. I think this is bad, since those formats have been around for years.

You could try this:

/* query the per-component bit counts the driver actually allocated for the
   currently bound texture's level 0 image */
GLint r=0, g=0, b=0, a=0, l=0, i=0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,       &r);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE,     &g);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,      &b);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE,     &a);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_LUMINANCE_SIZE, &l);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTENSITY_SIZE, &i);

Assuming the driver doesn’t outright lie to you, this should be enough to dissect the native format.
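On the same note, if ARB_texture_compression is present you can also ask whether an S3TC texture actually stayed compressed (sketch):

GLint internal_format = 0, is_compressed = GL_FALSE;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internal_format);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_ARB,  &is_compressed);
/* internal_format is what the driver actually chose;
   is_compressed is GL_TRUE if it kept a compressed representation */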


The driver will convert to a supported format internally, so all you’ll end up saving is the CPU hit of conversion on upload.

GL_BGRA pixel storage is usually native for UNSIGNED_BYTE data, as is often the _565_REV storage for BGR pixels.
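For example, an upload along these lines usually hits the fast path (just a sketch; GL_BGRA comes from GL 1.2 / EXT_bgra):

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: 8 bits per channel, BGRA byte order, into an RGBA8 texture.
   This is the layout most consumer cards store natively, so the driver
   shouldn't have to swizzle the data on upload. */
static void upload_rgba8_from_bgra(GLsizei width, GLsizei height, const void *pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                 width, height, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, pixels);
}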

DXT3 and DXT5 get the same amount of compression. Except DXT3 almost always looks worse than DXT5 in the alpha channel. All three (DXT1, 3 and 5) use the exact same format for storing the RGB values. Thus, my recommendation is to forget about DXT3 unless you have a special need for it.

INTENSITY8 is likely the only intensity format natively supported, if any (I could see a driver converting this to RGBA8 internally :-).

Last, smaller (internal) formats are likely faster than larger, because they use less texture RAM bandwidth. Thus, DXT1 is the best possible format, if you can live with the artifacts.
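For reference, something like this lets the driver do the DXT1 compression at upload time (sketch only; data compressed offline would go through glCompressedTexImage2DARB instead, with the entry point obtained through the extension mechanism):

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: hand the driver plain RGB data and ask it to compress to DXT1. */
static void upload_rgb_as_dxt1(GLsizei w, GLsizei h, const void *rgb_pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                 w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb_pixels);
}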

I guess the best way to find out whether you get the CPU hit is to upload N textures of a specific format, draw a very small quad with each, swap buffers and repeat; then check with VTune whether you're CPU-bound in a conversion loop or waiting for the card to complete.
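If you don't have VTune handy, a crude timing loop like this can give a first impression (sketch; clock() is coarse, and the GL_BGRA source format is just an example):

#include <stdio.h>
#include <time.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: re-upload the same image n times with a given internal format.
   A format that forces a CPU-side conversion shows up as a clearly higher
   per-upload time; a profiler gives the detailed picture. */
static void time_uploads(GLenum internal_format, GLsizei w, GLsizei h,
                         const void *bgra_pixels, int n)
{
    int i;
    clock_t start = clock();
    for (i = 0; i < n; ++i)
        glTexImage2D(GL_TEXTURE_2D, 0, internal_format, w, h, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, bgra_pixels);
    glFinish();  /* wait until the driver has actually consumed the data */
    printf("%d uploads: %.3f s\n", n,
           (double)(clock() - start) / CLOCKS_PER_SEC);
}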

Originally posted by zeckensack:
[b]You could try this:

GLint r=0, g=0, b=0, a=0, l=0, i=0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,       &r);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE,     &g);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,      &b);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE,     &a);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_LUMINANCE_SIZE, &l);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTENSITY_SIZE, &i);

Assuming the driver doesn’t outright lie to you, this should be enough to dissect the native format.[/b]

Wow!
So this works by binding a texture first?
Does this give meaningful results (i.e. will the driver report what it actually stores, or just echo what was asked for at allocation time)?

Originally posted by Obli:
[b]Wow!
So this works by binding a texture first?
Does this give meaningful results (i.e. will the driver report what it actually stores, or just echo what was asked for at allocation time)?[/b]

Yep, this works as long as you have a texture bound, and the level 0 mipmap has been specified (you can obviously do it for 1D, 3D, rectangle targets, too).

Regarding meaningful results, yes, I think so. If it were a quick hack, an RGB5 internal format wouldn’t report six bits for the green component.

As I said, “if the driver doesn’t outright lie to you” …
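Put together, a complete check might look like this (sketch; the GL_RGB5 request is just an example):

#include <stdio.h>
#include <GL/gl.h>

/* Sketch: request GL_RGB5, then ask what the driver really allocated. */
static void report_rgb5_layout(GLsizei w, GLsizei h, const void *pixels)
{
    GLuint tex;
    GLint r = 0, g = 0, b = 0;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* level 0 has to exist before the query gives meaningful results */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);

    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &r);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &g);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &b);

    /* 5/6/5 here means the driver promoted GL_RGB5 to R5G6B5 */
    printf("GL_RGB5 stored as R%dG%dB%d\n", r, g, b);
}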

I used your tip to retrieve the format the driver actually stores.
The results look really good; nearly all textures get stored as expected. I'm not sure, but I think the GL_RGB8 texture got stored as GL_RGBA8.

Thanks,
Jan.