Thread: [NV304.48] Texture internal formats

  1. #1
    Intern Contributor
    Join Date
    Apr 2010
    Posts
    68

    [NV304.48] Texture internal formats

    It seems that not all internal formats are implemented. I'm expecting these two calls to produce the same texture rendering:
    Code :
    glTexImage2D(..., GL_RGBA4, ..., GL_RGBA, GL_UNSIGNED_BYTE, dataPtr);          // texture 1
    glTexImage2D(..., GL_RGBA4, ..., GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, dataPtr); // texture 2
    Result (texture1 above and texture2 below)
    [Attached image: rgba4.jpg]
    I've also noticed problems with the RGBA2 and R3_G3_B2 internal formats.
    Latest source code, makefiles and VS2010 solution available here: https://github.com/jdupuy/textureInt...zipball/master
    I've only tested on Linux; it would be nice if someone could give feedback for Windows as well.

  2. #2
    Member Regular Contributor malexander's Avatar
    Join Date
    Aug 2009
    Location
    Ontario
    Posts
    325
    OpenGL implementations are not required to support those internal formats directly, and so Nvidia is likely up-converting them to 8-bit RGBA. You can check out which internal formats are required, and their minimum bit depths, in section 3.9.3 of the GL4.3 spec (especially table 3.12).

    On modern hardware, there is probably no benefit to supporting these old texture formats anyway. If you want to save memory, use one of the supported compressed formats.
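    For illustration, a minimal sketch of that approach (the GL_TEXTURE_2D target and the `width`, `height` and `pixels` names are placeholders for the application's own image data):
    Code :
    /* Let the driver compress the image rather than relying on tiny
       uncompressed formats such as GL_RGBA4. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, /* generic: the driver picks the scheme */
                 width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Check whether the driver actually stored the level compressed. */
    GLint isCompressed = GL_FALSE;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &isCompressed);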

  3. #3
    Intern Contributor
    Join Date
    Apr 2010
    Posts
    68
    Quote Originally Posted by malexander View Post
    OpenGL implementations are not required to support those internal formats directly, and so Nvidia is likely up-converting them to 8-bit RGBA. You can check out which internal formats are required, and their minimum bit depths, in section 3.9.3 of the GL4.3 spec (especially table 3.12).
    There is no such section in the GL4.3 specs. Assuming you are referring to table 8.12 in section 8.5.2, I have only found that the desired bit resolution for each channel is what I expect. I didn't see the part where it is stated that supporting these internal formats is not required... Besides, since the alpha channel can have a 1-bit resolution (RGB5_A1), why can't the others?

    Quote Originally Posted by malexander View Post
    On modern hardware, there is probably no benefit to supporting these old texture formats anyway. If you want to save memory, use one of the supported compressed formats.
    My only concern is implementation quality, not performance.

  4. #4
    Junior Member Regular Contributor
    Join Date
    Dec 2009
    Posts
    212
    Nvidia supports RGBA4 formats on all their chips (according to their documentation here), so I would have expected the driver to convert the RGBA8 data to RGBA4 as requested, but maybe it chose to skip the conversion and just use the RGBA8 data as-is.

    The OpenGL specification does not require that an implementation use the requested internalformat exactly; it even allows the implementation to use fewer bits than requested (but not zero bits if more than zero were requested). You should be able to query the internalformat of your textures with glGetTexLevelParameter(...GL_TEXTURE_INTERNAL_FORMAT...).
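    For what it's worth, a minimal sketch of those queries (assuming the texture in question is currently bound to GL_TEXTURE_2D); the per-component size queries are the ones that, per the spec, report the resolution actually used:
    Code :
    /* Query what was actually allocated for level 0 of the bound texture.
       Assumes the GL headers and <stdio.h> are already included. */
    GLint ifmt = 0, rBits = 0, gBits = 0, bBits = 0, aBits = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &ifmt);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &rBits);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &gBits);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &bBits);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &aBits);
    printf("internalformat 0x%04X, RGBA bits %d/%d/%d/%d\n",
           ifmt, rBits, gBits, bBits, aBits);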

  5. #5
    Intern Contributor
    Join Date
    Apr 2010
    Posts
    68
    Quote Originally Posted by mbentrup View Post
    Nvidia supports RGBA4 formats on all their chips (according to their documentation here), so I would have expected the driver to convert the RGBA8 data to RGBA4 as requested, but maybe it chose to skip the conversion and just use the RGBA8 data as-is.
    Cool document! I have only tried my code on a Fermi card; it would be nice if someone with a GT200 or older model could try the RGBA4 format and post feedback.

    Quote Originally Posted by mbentrup View Post
    You should be able to query the internalformat of your textures with glGetTexLevelParameter(...GL_TEXTURE_INTERNAL_FORMAT...).
    glGetTexLevelParameter returns the format specified in the texImage call, so it does not help. Perhaps the glGetTexLevelParameter behaviour should be changed then? Or at least a message added to the debug output?

  6. #6
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    There is no such section in the GL4.3 specs. Assuming you are referring to table 8.12 in section 8.5.2, I have only found that the desired bit resolution for each channel is what I expect.
    Actually, he forgot about the re-numbering of the specification; his section number would be correct for GL 4.2. In 4.3, what he's talking about is section 8.5.1: Required Texture Formats. Or you could just look at the Wiki.

    I didn't see the part where it is stated that supporting these internal formats is not required...
    Did you look in section 8.5, in the paragraph right before 8.5.1?

    Quote Originally Posted by The Spec
    If a sized internal format is specified, the mapping of the R, G, B, A, depth, and stencil values to texture components is equivalent to the mapping of the corresponding base internal format’s components, as specified in table 8.11; the type (unsigned int, float, etc.) is assigned the same type specified by internalformat; and the memory allocation per texture component is assigned by the GL to match the allocations listed in tables 8.12–8.13 as closely as possible. (The definition of closely is left up to the implementation. However, a non-zero number of bits must be allocated for each component whose desired allocation in tables 8.12–8.13 is non-zero, and zero bits must be allocated for all other components).
    Note the parenthetical explanation. The very next section defines how this applies to the required formats.

    Besides, since the alpha channel can have a 1-bit resolution (RGB5_A1), why can't the others?
    Because hardware is not magic. These small formats are hard-coded into the hardware. And the only reason they're still there is for legacy applications anyway; there's no point in expanding on them.

    My only concern is implementation quality, not performance.
    Stop relying on the implementation to do your work for you. If you want to quantize your colors to 4 bits per channel, then do that before shoving the data to OpenGL. Yes, the spec does say that the GL implementation should do that for you, but that's not exactly well-tested code. Furthermore, implementers aren't going to bother fixing "bugs" caused by the driver not properly quantizing inputs for the requested internal format (rather than for the actual internal format it substitutes).

    You should give OpenGL data that matches the internal format you provide. Reliance upon the conversion stuff is folly.
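    For illustration, a minimal sketch of that quantization (the `src`, `dst`, `count`, `width` and `height` names are placeholders for the application's own data), packing 8-bit RGBA down to 4:4:4:4 so the client data already matches GL_RGBA4:
    Code :
    /* Pack 8-bit RGBA into 4:4:4:4 on the CPU so the data handed to GL
       already matches the GL_RGBA4 internal format. Requires <stddef.h>. */
    void pack_rgba8_to_rgba4(const unsigned char *src, unsigned short *dst, size_t count)
    {
        for (size_t i = 0; i < count; ++i) {
            unsigned short r = src[4*i + 0] >> 4; /* keep the top 4 bits of each channel */
            unsigned short g = src[4*i + 1] >> 4;
            unsigned short b = src[4*i + 2] >> 4;
            unsigned short a = src[4*i + 3] >> 4;
            dst[i] = (unsigned short)((r << 12) | (g << 8) | (b << 4) | a);
        }
    }

    /* ...then upload with a matching client format/type: */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, dst);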

  7. #7
    Intern Contributor
    Join Date
    Apr 2010
    Posts
    68
    Quote Originally Posted by Alfonse Reinheart View Post
    Reliance upon the conversion stuff is folly.
    In this case, debug output would be helpful.

    I guess the only way to find out if a specific internal format is supported is by doing what I did: compare a manual compression/quantization/whatever technique against the naive GL calls.

  8. #8
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    In this case, debug output would be helpful.
    On the list of things that debug output needs to talk about, this is a pretty low priority. Especially considering the following:

    I guess the only way to find out if a specific internal format is supported is by doing what I did: compare a manual compression/quantization/whatever technique against the naive GL calls.
    That only tests whether the conversion works correctly. If you want to ask if a specific format has the specific component sizes that you asked for, all you need to do is ask. Once it's supported in your implementation, of course.

    Note that `GL_INTERNALFORMAT_SUPPORTED` is not what you're looking for. What you're looking for is `GL_INTERNALFORMAT_PREFERRED`; this will tell you whether the driver likes this format or not. You can also query the color component bit depths and types directly to get exactly what bit depths/types a particular format provides. Even better, you can directly ask what the optimal pixel transfer parameters should be (via `GL_TEXTURE_IMAGE_FORMAT` and `GL_TEXTURE_IMAGE_TYPE`).

    That's far superior to some debug message.
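    For illustration, a minimal sketch of those queries (requires GL 4.3 or ARB_internalformat_query2; GL_RGBA4 is just the format under discussion here):
    Code :
    /* Ask the driver how it really treats GL_RGBA4. */
    GLint preferred = 0, redBits = 0, imgFormat = 0, imgType = 0;
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4,
                          GL_INTERNALFORMAT_PREFERRED, 1, &preferred);
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4,
                          GL_INTERNALFORMAT_RED_SIZE, 1, &redBits);
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4,
                          GL_TEXTURE_IMAGE_FORMAT, 1, &imgFormat);
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA4,
                          GL_TEXTURE_IMAGE_TYPE, 1, &imgType);
    /* `preferred` is the driver's preferred internal format; the last two
       give the pixel transfer format/type the driver would rather receive. */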

  9. #9
    Intern Contributor
    Join Date
    Apr 2010
    Posts
    68
    Quote Originally Posted by Alfonse Reinheart View Post
    If you want to ask if a specific format has the specific component sizes that you asked for, all you need to do is ask.
    Now that's a nice feature! I wasn't aware of its existence. My bad. Thanks for pointing me to this.

  10. #10
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    3,218
    Quote Originally Posted by Alfonse Reinheart View Post
    Note that `GL_INTERNALFORMAT_SUPPORTED` is not what you're looking for. What you're looking for is `GL_INTERNALFORMAT_PREFERRED`...
    Yeah, that's definitely misleading. Fell into that trap myself. Feed it GL_RENDERBUFFER and SUPPORTED with a compressed texture format such as DXT1 or LATC1, and it says it's supported. Hmmm... Don't think so.

    That said, even with GL_RENDERBUFFER and PREFERRED, it seems it's returning the provided iformat (rather than TRUE/FALSE) for all iformats I've tried, even compressed texture formats such as DXT1 and LATC1...

    Code :
     GLint64 buf[1];
     glGetInternalformati64v( GL_RENDERBUFFER, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                              GL_INTERNALFORMAT_PREFERRED,
                              sizeof( buf ) / sizeof( buf[0] ), buf );  /* bufSize is a count of values, not bytes */
    Last edited by Dark Photon; 09-25-2012 at 06:17 AM.
