[NV304.48] Texture internal formats



_blitz
09-24-2012, 05:30 AM
It seems that not all internal formats are implemented. I'm expecting these two calls to produce the same texture rendering:

glTexImage(..,GL_RGBA4, ..., GL_RGBA, GL_UNSIGNED_BYTE, dataPtr); // texture 1
glTexImage(..,GL_RGBA4, ..., GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, dataPtr); // texture 2
Result (texture 1 above and texture 2 below): [screenshot attachment]
I've also noticed problems with the RGBA2 and R3_G3_B2 internal formats.
The latest source code, makefiles and VS2010 solution are available here: https://github.com/jdupuy/textureInternalFormat/zipball/master
I've only tested on Linux; it would be nice if someone could give feedback for Windows as well.
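
Written out in full, the two uploads look roughly like this (sketch only; dimensions and data pointers are placeholders, the real code is in the repository linked above):

// texture 1: RGBA4 storage requested, source data is 8-bit-per-channel RGBA
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba8Data);
// texture 2: RGBA4 storage requested, source data is already packed 4:4:4:4
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, rgba4Data);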

malexander
09-24-2012, 07:22 AM
OpenGL implementations are not required to support those internal formats directly, and so Nvidia is likely up-converting them to 8b RGBA. You can check out which internal formats are required, and their minimum bitdepths, in section 3.9.3 of the GL4.3 spec (especially table 3.12).

On modern hardware, there is probably no benefit to supporting these old texture formats anyway. If you want to save memory, use one of the supported compressed formats.
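
For instance, requesting a compressed internalformat lets the driver compress the data at upload time (rough sketch, assuming EXT_texture_compression_s3tc is available; dimensions and data pointer are placeholders):

// let the driver compress 8-bit RGBA source data to DXT1 on upload
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba8Data);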

_blitz
09-24-2012, 09:10 AM
OpenGL implementations are not required to support those internal formats directly, and so Nvidia is likely up-converting them to 8b RGBA. You can check out which internal formats are required, and their minimum bitdepths, in section 3.9.3 of the GL4.3 spec (especially table 3.12).
There is no such section in the GL 4.3 spec. Assuming you are referring to table 8.12 in section 8.5.2, I have only found that each channel's bit resolution is desired to be what I expect. I didn't see the part where it is stated that supporting these internal formats is not required... Besides, since the alpha channel can have a 1-bit resolution (RGB5_A1), why can't the other channels?


On modern hardware, there is probably no benefit to supporting these old texture formats anyway. If you want to save memory, use one of the supported compressed formats.
My only concern is implementation quality, not performance.

mbentrup
09-24-2012, 09:17 AM
Nvidia supports RGBA4 formats on all their chips (according to their documentation here (http://developer.download.nvidia.com/opengl/texture_formats/nv_ogl_texture_formats.pdf)), so I would have expected it to convert the RGBA8 data to RGBA4 as requested, but maybe the driver chose to skip the conversion and just use the RGBA8 data as-is.

The OpenGL specification does not require that the implementation use the requested internalformat exactly; it even allows the implementation to use fewer bits than requested (but not 0 bits if more than 0 bits were requested). You should be able to query the internalformat of your textures with glGetTexLevelParameter(...GL_TEXTURE_INTERNAL_FORMAT...).
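
Something along these lines (untested sketch, assuming a 2D texture is currently bound; the per-channel size queries are often more informative than the internalformat itself):

GLint ifmt, r, g, b, a;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &ifmt);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &r);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &g);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &b);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &a);
// ifmt echoes the requested internalformat; r/g/b/a hold the allocated bit counts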

_blitz
09-24-2012, 09:27 AM
Nvidia supports RGBA4 formats on all their chips (according to their documentation here (http://developer.download.nvidia.com/opengl/texture_formats/nv_ogl_texture_formats.pdf)), so I would have expected it to convert the RGBA8 data to RGBA4 as requested, but maybe the driver chose to skip the conversion and just use the RGBA8 data as-is.
Cool document! I have only tried my code on a Fermi card; it would be nice if someone with a GT200 or older model could try the RGBA4 format and post feedback.


You should be able to query the internalformat of your textures with glGetTexLevelParameter(...GL_TEXTURE_INTERNAL_FORMAT...).
glGetTexLevelParameter returns the format specified in the texImage call, so it does not help. Perhaps the glGetTexLevelParameter behaviour should be changed then? Or at least a message in the debug output?

Alfonse Reinheart
09-24-2012, 09:32 AM
There is no such section in the GL 4.3 spec. Assuming you are referring to table 8.12 in section 8.5.2, I have only found that each channel's bit resolution is desired to be what I expect.

Actually, he forgot about the re-numbering of the specification; his section number would be correct for GL 4.2. In 4.3, what he's talking about is section 8.5.1: Required Texture Formats. Or you could just look at the Wiki (http://www.opengl.org/wiki/Image_Format#Required_formats).


I didn't see the part where it is stated that supporting these internal formats is not required...

Did you look in section 8.5, in the paragraph right before 8.5.1?



If a sized internal format is specified, the mapping of the R, G, B, A, depth, and stencil values to texture components is equivalent to the mapping of the corresponding base internal format’s components, as specified in table 8.11; the type (unsigned int, float, etc.) is assigned the same type specified by internalformat; and the memory allocation per texture component is assigned by the GL to match the allocations listed in tables 8.12-8.13 as closely as possible. (The definition of closely is left up to the implementation. However, a non-zero number of bits must be allocated for each component whose desired allocation in tables 8.12-8.13 is non-zero, and zero bits must be allocated for all other components).

Note the parenthetical explanation. The very next section defines how this applies to the required formats.


Besides, since the alpha channel can have a 1-bit resolution (RGB5_A1), why can't the other channels?

Because hardware is not magic. These small formats are hard-coded into the hardware. And the only reason they're still there is for legacy applications anyway; there's no point in expanding on them.


My only concern is implementation quality, not performance.

Stop relying on the implementation to do your work for you. If you want to quantize your colors to 4 bits per channel, then do that before shoving the data to OpenGL. Yes, the spec does say that the GL implementation should do that for you, but that's not exactly well-tested code. Furthermore, implementers aren't going to bother fixing "bugs" from not properly quantizing inputs for the given internal format (rather than the actual internal format it substitutes).

You should give OpenGL data that matches the internal format you provide. Reliance upon the conversion stuff is folly.
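
For example, a minimal CPU-side quantization to packed 4:4:4:4 before the upload could look like this (sketch; pixelCount, rgba8 and rgba4 are placeholder names for your own buffers):

// pack each 8-bit RGBA pixel into one GLushort, R in the top 4 bits
for (int i = 0; i < pixelCount; ++i) {
    GLushort r = rgba8[4*i+0] >> 4;
    GLushort g = rgba8[4*i+1] >> 4;
    GLushort b = rgba8[4*i+2] >> 4;
    GLushort a = rgba8[4*i+3] >> 4;
    rgba4[i] = (GLushort)((r << 12) | (g << 8) | (b << 4) | a);
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, rgba4);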

_blitz
09-24-2012, 01:48 PM
Reliance upon the conversion stuff is folly.
In this case, debug output would be helpful.

I guess the only way to find out whether a specific internal format is supported is by doing what I did: compare a manual compression/quantization/whatever technique to the naive GL calls.

Alfonse Reinheart
09-24-2012, 02:22 PM
In this case, debug output would be helpful.

On the list of things that debug output needs to talk about, this is a pretty low priority. Especially considering the following:


I guess the only way to find out whether a specific internal format is supported is by doing what I did: compare a manual compression/quantization/whatever technique to the naive GL calls.

That only tests whether the conversion works correctly. If you want to ask if a specific format has the specific component sizes that you asked for, all you need to do is ask. (http://www.opengl.org/wiki/GLAPI/glGetInternalFormat) Once it's supported in your implementation, of course.

Note that `GL_INTERNALFORMAT_SUPPORTED` is not what you're looking for. What you're looking for is `GL_INTERNALFORMAT_PREFERRED`; this will tell you if the driver likes this format or not. You can also query the color component bitdepths and types directly to get exactly what bitdepths/types a particular format provides. Even better, you can directly ask what the optimal pixel transfer parameters should be (via `GL_TEXTURE_IMAGE_FORMAT` and `GL_TEXTURE_IMAGE_TYPE`).
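
A rough sketch of those queries for RGBA4 (assumes GL 4.3 or ARB_internalformat_query2; variable names are made up and error handling is omitted):

GLint preferred, rbits, gbits, bbits, abits, imgFormat, imgType;
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_INTERNALFORMAT_PREFERRED, 1, &preferred);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_INTERNALFORMAT_RED_SIZE,   1, &rbits);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_INTERNALFORMAT_GREEN_SIZE, 1, &gbits);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_INTERNALFORMAT_BLUE_SIZE,  1, &bbits);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_INTERNALFORMAT_ALPHA_SIZE, 1, &abits);
// preferred pixel transfer format/type for TexImage uploads to this internalformat
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_TEXTURE_IMAGE_FORMAT, 1, &imgFormat);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4, GL_TEXTURE_IMAGE_TYPE,   1, &imgType);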

That's far superior to some debug message.

_blitz
09-24-2012, 04:59 PM
If you want to ask if a specific format has the specific component sizes that you asked for, all you need to do is ask. (http://www.opengl.org/wiki/GLAPI/glGetInternalFormat)
Now that's a nice feature! I wasn't aware of its existence. My bad. Thanks for pointing me to this :)

Dark Photon
09-25-2012, 06:01 AM
Note that `GL_INTERNALFORMAT_SUPPORTED` is not what you're looking for. What you're looking for is `GL_INTERNALFORMAT_PREFERRED...
Yeah, that's definitely misleading. Fell into that trap myself. Feed it GL_RENDERBUFFER and SUPPORTED with a compressed texture format such as DXT1 or LATC1, and it says it's supported. Hmmm... Don't think so.

That said, even with GL_RENDERBUFFER and PREFERRED, it seems to be returning the provided iformat (rather than TRUE/FALSE) for all iformats I've tried, even compressed tex formats such as DXT1 and LATC1...


glGetInternalformati64v( GL_RENDERBUFFER, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                         GL_INTERNALFORMAT_PREFERRED, sizeof( buf ), buf );

Alfonse Reinheart
09-25-2012, 07:28 AM
That said, even with GL_RENDERBUFFER and PREFERRED, it seems to be returning the provided iformat (rather than TRUE/FALSE) for all iformats I've tried

Actually, that's what it's supposed to do. I got that wrong; I was looking at the manpage documentation rather than the actual spec.