[NV304.48] Texture internal formats

It seems that not all internal formats are implemented. I’m expecting these two calls to produce the same texture rendering:

glTexImage(..., GL_RGBA4, ..., GL_RGBA, GL_UNSIGNED_BYTE, dataPtr); // texture 1
glTexImage(..., GL_RGBA4, ..., GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, dataPtr); // texture 2
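
For reference, a fuller version of those two calls might look like the sketch below (the target, dimensions, and pixel pointers are assumptions of mine, not taken from the linked repository; the two buffers hold the same image in the two client formats):

// Texture 1: 8-bit-per-channel client data, driver asked to store it as RGBA4.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba8Pixels);

// Texture 2: client data already packed to 4 bits per channel.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, rgba4Pixels);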

Result (texture1 above and texture2 below)
(screenshot attachment comparing the two textures)
I’ve also noticed problems with the RGBA2 and R3_G3_B2 internal formats.
Latest source code, makefiles and vs2010 solution available here: https://github.com/jdupuy/textureInternalFormat/zipball/master
I’ve only tested on Linux; it would be nice if someone could give feedback for Windows as well.

OpenGL implementations are not required to support those internal formats directly, so Nvidia is likely up-converting them to 8-bit RGBA. You can check which internal formats are required, and their minimum bit depths, in section 3.9.3 of the GL4.3 spec (especially table 3.12).

On modern hardware, there is probably no benefit to supporting these old texture formats anyway. If you want to save memory, use one of the supported compressed formats.
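
As a minimal sketch of that alternative, assuming EXT_texture_compression_s3tc is available, you can let the driver compress an 8-bit upload on the fly by passing a compressed internal format (precompressing offline and uploading with glCompressedTexImage2D usually gives better quality):

glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba8Pixels);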

There is no such section in the GL4.3 specs. Assuming you are referring to table 8.12 in section 8.5.2, I have only found that each channel bit resolution is desired to be what I expect. I didn’t see the part where it is stated that supporting these internal formats is not required… Besides, since the alpha channel can have a 1-bit resolution (RGB5_A1), why can’t the others?

[QUOTE=malexander;1242794]On modern hardware, there is probably no benefit to supporting these old texture formats anyway. If you want to save memory, use one of the supported compressed formats.[/QUOTE] My only concern is implementation quality, not performance.

Nvidia supports RGBA4 formats on all their chips (according to their documentation here), so I would have expected it to convert the RGBA8 data to RGBA4 as desired, but maybe the driver chose to skip the conversion and just use the RGBA8 data as is.

The OpenGL specification does not require that the implementation use the desired internalformat exactly; it even allows the implementation to use fewer bits than requested (but not 0 bits if more than 0 bits were requested). You should be able to query the internal format of your textures with glGetTexLevelParameter(…GL_TEXTURE_INTERNAL_FORMAT…).
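
For example, a sketch of those queries, assuming the texture in question is bound to GL_TEXTURE_2D and we are asking about mip level 0 (the component-size queries are supposed to report the resolutions the implementation actually chose):

GLint fmt, rBits, gBits, bBits, aBits;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &fmt);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &rBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &gBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &bBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &aBits);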

Cool document! I have only tried my code on a Fermi card; it would be nice if someone with a GT200 or an older model could try the RGBA4 format and post feedback.

glGetTexLevelParameter returns the format specified in the texImage call, so it does not help. Perhaps the glGetTexLevelParameter behaviour should be changed then? Or at least a message in the debug output?

There is no such section in the GL4.3 specs. Assuming you are referring to table 8.12 in section 8.5.2, I have only found that each channel bit resolution is desired to be what I expect.

Actually, he forgot about the re-numbering of the specification; his section number would be correct for GL 4.2. In 4.3, what he’s talking about is section 8.5.1: Required Texture Formats. Or you could just look at the Wiki.

I didn’t see the part where it is stated that supporting these internal formats is not required…

Did you look in section 8.5, in the paragraph right before 8.5.1?

Note the parenthetical explanation there: the memory allocated per component only has to match the allocations in those tables “as closely as possible”, and the definition of “closely” is left up to the implementation. The very next section defines how this applies to the required formats.

Besides, since the alpha channel can have a 1-bit resolution (RGB5_A1), why can’t the others?

Because hardware is not magic. These small formats are hard-coded into the hardware, and the only reason they’re still there is for legacy applications; there’s no point in expanding on them.

My only concern is implementation quality, not performance.

Stop relying on the implementation to do your work for you. If you want to quantize your colors to 4 bits per channel, then do that before shoving the data to OpenGL. Yes, the spec does say that the GL implementation should do that for you, but that’s not exactly well-tested code. Furthermore, implementers aren’t going to bother fixing “bugs” that come from not properly quantizing inputs for the requested internal format (rather than the actual internal format it substitutes).

You should give OpenGL data that matches the internal format you provide. Reliance upon the conversion stuff is folly.
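
As a rough sketch of that kind of pre-quantization (the function and variable names are mine): pack 8-bit RGBA pixels down to the GL_UNSIGNED_SHORT_4_4_4_4 layout on the CPU, then upload the packed buffer with format GL_RGBA and type GL_UNSIGNED_SHORT_4_4_4_4:

/* Pack RGBA8 pixels into 4 bits per channel, in the bit layout
   GL_UNSIGNED_SHORT_4_4_4_4 expects with format GL_RGBA:
   R in bits 15-12, G in 11-8, B in 7-4, A in 3-0. */
void packRGBA4(const unsigned char *src, unsigned short *dst, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i) {
        unsigned short r = src[4*i + 0] >> 4; /* keep the top nibble */
        unsigned short g = src[4*i + 1] >> 4;
        unsigned short b = src[4*i + 2] >> 4;
        unsigned short a = src[4*i + 3] >> 4;
        dst[i] = (unsigned short)((r << 12) | (g << 8) | (b << 4) | a);
    }
}

Truncating with >> 4 is the simplest choice; rounding instead (e.g. (value * 15 + 127) / 255) gives slightly more accurate results.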

[QUOTE=Alfonse Reinheart;1242800]Reliance upon the conversion stuff is folly.[/QUOTE] In this case, debug output would be helpful.

I guess the only way to find out whether a specific internal format is supported is to do what I did: compare a manual compression/quantization/whatever technique to the naive GL calls.

In this case, debug output would be helpful.

On the list of things that debug output needs to talk about, this is a pretty low priority. Especially considering the following:

I guess the only way to find out whether a specific internal format is supported is to do what I did: compare a manual compression/quantization/whatever technique to the naive GL calls.

That only tests whether the conversion works correctly. If you want to ask if a specific format has the specific component sizes that you asked for, all you need to do is ask. Once it’s supported in your implementation, of course.

Note that GL_INTERNALFORMAT_SUPPORTED is not what you’re looking for. What you’re looking for is GL_INTERNALFORMAT_PREFERRED; this will tell you if the driver likes this format or not. You can also query the color component bit depths and types directly to get exactly what bit depths/types a particular format provides. Even better, you can directly ask what the optimal pixel transfer parameters should be (via GL_TEXTURE_IMAGE_FORMAT and GL_TEXTURE_IMAGE_TYPE).
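
A sketch of what those queries look like, assuming a driver that exposes ARB_internalformat_query2 (the variable names are mine):

GLint preferred, rBits, aBits, imgFormat, imgType;
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_INTERNALFORMAT_PREFERRED,  1, &preferred);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_INTERNALFORMAT_RED_SIZE,   1, &rBits);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_INTERNALFORMAT_ALPHA_SIZE, 1, &aBits);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_TEXTURE_IMAGE_FORMAT, 1, &imgFormat);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_TEXTURE_IMAGE_TYPE,   1, &imgType);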

That’s far superior to some debug message.

[QUOTE=Alfonse Reinheart;1242811]If you want to ask if a specific format has the specific component sizes that you asked for, all you need to do is ask.[/QUOTE] Now that’s a nice feature! I wasn’t aware of its existence. My bad. Thanks for pointing me to this :)

Yeah, that’s definitely misleading. Fell into that trap myself. Feed it GL_RENDERBUFFER and SUPPORTED with a compressed texture format such as DXT1 or LATC1, and it says it’s supported. Hmmm… Don’t think so.

That said, even with GL_RENDERBUFFER and PREFERRED, it seems it’s returning the provided iformat (rather than TRUE/FALSE) for all iformats I’ve tried, even compressed tex formats such as DXT1 and LATC1…

GLint64 buf[1];
glGetInternalformati64v( GL_RENDERBUFFER, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                         GL_INTERNALFORMAT_PREFERRED, 1, buf );

That said, even with GL_RENDERBUFFER and PREFERRED, it seems it’s returning the provided iformat (rather than TRUE/FALSE) for all iformats I’ve tried

Actually, that’s what it’s supposed to do. I got that wrong; I was looking at the manpage documentation rather than the actual spec.
