
Thread: mutable texture formats

  1. #1
    Member Regular Contributor | Join Date: Apr 2004 | Posts: 251

    mutable texture formats

    In the newer Direct3D versions (10 and up) there is a mechanism whereby one can effectively change a texture's format - it is called "views" there.
    This is hardly one of the more useful features. Actually, I only know of one practical use: changing from sRGB to non-sRGB and back (which was just a sampler state in previous Direct3D versions).

    Anyway, since the various vendors already support the newer Direct3D versions, and with them this particular feature (mutable texture formats), why can't we access this hardware capability from OpenGL?
    So I suggest that we make texture formats "mutable": once a texture is created, its format can be changed with glTexParameteri.
    Sampler objects would also possess a format state, which would override the texture's, just like the other sampler parameters.
    Of course, the various formats would have to be divided into groups of mutual compatibility; a texture's format could then be changed only to another format from the original format's compatibility group.
    If a sampler's format is not compatible with the bound texture's, we act as in other similar cases - e.g. generate INVALID_OPERATION on subsequent draw commands.

    The format is actually a property of the individual texture levels and not of the texture object itself, but this is so only for historical reasons. (Different mip formats within the same texture are possible, but that only makes the texture incomplete and is a burden for the drivers to deal with.)
    Changing the format with glTexParameteri would apply to all mips at once.
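
    A rough sketch of what the proposed usage could look like. Note that GL_TEXTURE_INTERNAL_FORMAT and GL_SAMPLER_INTERNAL_FORMAT below are made-up names for illustration only - no such pnames exist in any current spec:

        // Create an ordinary RGBA8 texture.
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        // Hypothetical: reinterpret the same storage as sRGB.
        // This would apply to all mips at once.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_INTERNAL_FORMAT, GL_SRGB8_ALPHA8);

        // Hypothetical: override the format per-sampler instead,
        // just like the other sampler parameters.
        GLuint smp;
        glGenSamplers(1, &smp);
        glSamplerParameteri(smp, GL_SAMPLER_INTERNAL_FORMAT, GL_RGBA8_SNORM);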

    This would give the full functionality of Direct3D, but with a much simpler and more intuitive API (the Direct3D way is far too complex, bloated with the numerous pointless objects you have to create and manage).

    What do you think?

  2. #2
    Senior Member OpenGL Pro | Join Date: Jan 2007 | Posts: 1,217
    The biggest flaw with the D3D10/11 approach is that creating a view is mandatory - there is no default view that you can just grab and use. It's a one-time-only operation at creation time, to be sure, and anyone sensible will end up wrapping it in something more appropriate to their program's usage requirements, but it's mildly annoying all the same.

    On the other hand, it does allow for unification of a whole bunch of previously different resource types, and pretty much everything is now some variation on a buffer or a texture (even the depth buffer is just a regular Texture2D). It's quite an elegant setup, really - a resource is the raw object and a view defines how the pipeline interprets that object, so there's a nice, clear separation going on. The initial (and current) implementations are a mite clunky, but like everything in D3D land they can be expected to get better over time. So for standard texturing you have an ID3D11Texture2D with an ID3D11ShaderResourceView created on it, but you can also create an ID3D11ShaderResourceView on an ID3D11Buffer object, allowing for easy render-to-vertex-buffer or texture-from-vertex-buffer if that's what you want to do.
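
    For instance, a minimal sketch of the calls involved (error handling omitted; "device" is assumed to be an already-created ID3D11Device):

        // Storage is declared TYPELESS; each view then fixes the interpretation.
        D3D11_TEXTURE2D_DESC td = {};
        td.Width = 256;
        td.Height = 256;
        td.MipLevels = 1;
        td.ArraySize = 1;
        td.Format = DXGI_FORMAT_R8G8B8A8_TYPELESS;
        td.SampleDesc.Count = 1;
        td.Usage = D3D11_USAGE_DEFAULT;
        td.BindFlags = D3D11_BIND_SHADER_RESOURCE;

        ID3D11Texture2D *tex = NULL;
        device->CreateTexture2D(&td, NULL, &tex);

        // A view over the same storage, interpreted as sRGB this time.
        D3D11_SHADER_RESOURCE_VIEW_DESC sd = {};
        sd.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
        sd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        sd.Texture2D.MipLevels = 1;

        ID3D11ShaderResourceView *srv = NULL;
        device->CreateShaderResourceView(tex, &sd, &srv);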

    In other words, it behaves a lot like OpenGL's buffer object binding points, with a little glTexParameter thrown in. So there's likely no requirement for OpenGL to specify any kind of full implementation of D3D views; the specific use case you identified (mutable texture formats) is the only obvious one, and it is correctly expressed as a glTexParameter or glSamplerParameter. So long as the mandatory "you must create a view" requirement doesn't exist, it sounds fine and reasonable.
    Last edited by mhagain; 07-18-2012 at 04:28 PM.

  3. #3
    Member Regular Contributor | Join Date: Apr 2004 | Posts: 251
    As it is now, the texture format is not so well-defined because it is specified separately for each mip. The texture is considered complete if all mips have the same format, but even after that the user can still change some mip's format, and the texture becomes incomplete again.
    It may then be difficult to define which is the "original" format with which any new format set by glTexParameteri must be compatible.
    For this reason we may want to allow "mutable formats" only for the "immutable textures" created through the new extension GL_ARB_texture_storage.
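
    With ARB_texture_storage the format is fixed for the whole mip chain up front, so the "original" format is unambiguous. A minimal sketch:

        // glTexStorage2D allocates every mip level with a single format;
        // neither the format nor the mip structure can change afterwards.
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexStorage2D(GL_TEXTURE_2D, 9, GL_RGBA8, 256, 256); // 9 mips: 256..1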

  4. #4
    Senior Member OpenGL Guru | Join Date: May 2009 | Posts: 4,948
    Exactly how would this work? What would it mean to change the format of a texture from GL_RGBA8 to GL_RGBA16? I don't know much about how D3D10+ works in these cases.

    My concern is that there won't be very many valid re-interpretations of data, and even fewer useful ones. For example, you could turn GL_RGBA8 into GL_RGB10_A2 or GL_R32F or something. But what exactly does that gain you in terms of real usefulness? What problem does this solve?

    Also, there's the issue of specifying behavior, essentially forcing implementations to do things the D3D way. I don't care much for that idea.

    Most importantly, you said, "I only know of one practical use: changing from sRGB to non-sRGB and back (which was just a sampler state in previous Direct3D versions)." We already have that, though only as an extension (EXT_texture_sRGB_decode) for the time being. This seems like a far less intrusive way of getting the useful functionality of this concept.
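
    For reference, the extension handles the sRGB toggle with a single parameter ("smp" below is assumed to be an existing sampler object; the tokens are real ones from EXT_texture_sRGB_decode):

        // Per-texture: skip the sRGB-to-linear conversion when sampling.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SRGB_DECODE_EXT, GL_SKIP_DECODE_EXT);

        // Or per-sampler, overriding the texture's own state.
        glSamplerParameteri(smp, GL_TEXTURE_SRGB_DECODE_EXT, GL_DECODE_EXT);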

    OpenGL doesn't have to expose every possible thing that hardware could do. It just needs to expose all of the useful things it can do.

  5. #5
    Senior Member OpenGL Pro | Join Date: Jan 2007 | Posts: 1,217
    Reviewing the DXSDK, 10/10/10/2 formats aren't in the same type family as 8/8/8/8, so that's either one of those nice arbitrary restrictions that D3D likes to hit you with every now and then, or there was a practical reason behind it. The only thing such a conversion would have given you is a psychedelic screen effect without any post-processing, which is probably not in very high demand.

    In reality the purpose of views in D3D is something entirely different, and mutability of texture formats is a side effect rather than the main objective (which was separation of resource definition from how the resource is to be used, allowing for more generalization of resource types - think of it as being kind of like mallocing a void * buffer then casting it to a struct type). It's also limited mutability rather than fully general, so you can't convert the GL_RGBA8 of the example to GL_RGBA16 - the formats must be from the same type family, which tends to mean the same number of components and the same component size.
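
    In code terms the analogy is roughly this (RGBA8Texel is a made-up struct standing in for a format, not a real API type):

        #include <cstdlib>

        const int width = 256, height = 256;

        // Allocate raw, untyped storage: this is the "resource".
        void *storage = malloc(width * height * 4);

        // Interpret it with a particular layout: this is the "view".
        struct RGBA8Texel { unsigned char r, g, b, a; };
        RGBA8Texel *view = static_cast<RGBA8Texel *>(storage);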

    If GL were to get this, a more general mutability would have one practical use I can think of: a texture that you sometimes want to access as RGBA8 and sometimes as R32F. Say you want to interpret one portion of it as depth info and another portion as colour info, and - because the texture is quite large - resource constraints prohibit you from creating two of them. True, it's a mite far-fetched, and true, you could do some fragment shader packing/unpacking (at the cost of a few extra instructions), but it is sneaking into the realm of things that could happen.

  6. #6
    Senior Member OpenGL Guru | Join Date: May 2009 | Posts: 4,948
    Quote Originally Posted by mhagain
    If GL were to get this, a more general mutability would have one practical use I can think of: a texture that you sometimes want to access as RGBA8 and sometimes as R32F. Say you want to interpret one portion of it as depth info and another portion as colour info, and - because the texture is quite large - resource constraints prohibit you from creating two of them. True, it's a mite far-fetched, and true, you could do some fragment shader packing/unpacking (at the cost of a few extra instructions), but it is sneaking into the realm of things that could happen.
    It seems to me that the way to handle this would be with a more flexible type system, something that would allow you to build an image format from raw components. You could create an image format that would be the equivalent of GL_R16F_GB8. I don't know how you would handle shadow accesses from such a texture, as those explicitly return a single float value.

    Even so, it's not of that much utility. If you need that level of flexibility, nobody's stopping you from using special shader logic to turn one texture's format into another. You could unpack the RG components of an RGBA8 texture into a float to emulate GL_R16F_GB8.
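
    As a sketch of that shader-side unpacking (this assumes the data was uploaded as GL_RGBA8UI in the first place, since without format mutability there is no way to rebind existing RGBA8 storage as an integer texture; unpackHalf2x16 is real GLSL from ARB_shading_language_packing / GLSL 4.20):

        // Fragment shader that rebuilds a half float from two 8-bit
        // channels of an integer texture, kept here as a C string.
        const char *unpack_fs =
            "#version 420\n"
            "uniform usampler2D tex;  // bound as GL_RGBA8UI\n"
            "in vec2 uv;\n"
            "out vec4 color;\n"
            "void main() {\n"
            "    uvec4 t = texture(tex, uv);\n"
            "    uint bits = (t.g << 8u) | t.r;    // reassemble 16 bits\n"
            "    float v = unpackHalf2x16(bits).x; // low half -> float\n"
            "    color = vec4(v, 0.0, 0.0, 1.0);\n"
            "}\n";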

  7. #7
    Member Regular Contributor | Join Date: Apr 2004 | Posts: 251
    I don't know what purpose the D3D designers had in mind for the views (I'm not even sure they had anything in mind at all), but the mutability of texture formats is the only real technical effect of all this complex API.

    If OpenGL were to gain this feature, then I guess GL_RGBA8 and GL_RGBA16 would not be in the same compatibility group (they are not in D3D),
    but GL_RGBA8, GL_RGBA8I, GL_RGBA8UI, GL_RGBA8_SNORM and GL_SRGB8_ALPHA8 would all be in the same group.
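
    That matches how DXGI groups them - in D3D10+ all of the following are legal view formats over the same typeless storage (the GL pairings are my own approximate mapping):

        // DXGI_FORMAT_R8G8B8A8_TYPELESS    (the underlying storage)
        // DXGI_FORMAT_R8G8B8A8_UNORM       ~ GL_RGBA8
        // DXGI_FORMAT_R8G8B8A8_UNORM_SRGB  ~ GL_SRGB8_ALPHA8
        // DXGI_FORMAT_R8G8B8A8_UINT        ~ GL_RGBA8UI
        // DXGI_FORMAT_R8G8B8A8_SINT        ~ GL_RGBA8I
        // DXGI_FORMAT_R8G8B8A8_SNORM       ~ GL_RGBA8_SNORM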

    As for EXT_texture_sRGB_decode, it would be great if ATI supported it.
    Mutable formats would be more general than EXT_texture_sRGB_decode.
    I don't know how much more useful they would be, but if they were available, people would probably think of new useful tricks to do with them.
    My particular interest in them is related to porting from D3D to OpenGL.
    Last edited by l_belev; 07-19-2012 at 02:50 AM.

  8. #8
    Senior Member OpenGL Pro | Join Date: Jan 2007 | Posts: 1,217
    The purpose of views in D3D is to enable generic backing storage to be interpreted in different ways, nothing more and nothing less. Like I said, it's a cast. Take this chunk of storage and use it as a texture, take that chunk and use it as a depth buffer, take the other chunk and use it as a render target, take a chunk that was previously used as a depth buffer and now use it as a texture (or vice versa), but the specification and allocation of the backing storage remain reasonably agnostic to how it's going to be interpreted. Whether this was the correct way (or even a good way) to implement the capability is neither here nor there for the purposes of this discussion.

    Limited format mutability is purely a side effect of this. I seriously doubt that the designers had it in mind as an explicit goal; it seems more of a "hey, you can do this too" kind of thing.

    The specific example I gave above would be part of a more general capability, which would include being able to use different texture formats in the same texture atlas. So one subrect of the atlas could be RGBA8, another subrect could be RGB10A2, a third subrect could be R32F, and so on. That on its own would extend the capabilities of texture atlases in a useful and interesting manner, and would mean that each subrect could be interpreted correctly without needing to change shaders. The principle could in theory be extended to texture arrays as well, which would have the effect of lifting one restriction associated with arrays, again without the need to change shaders.

    Should OpenGL have any business defining how anything as specific as a texture atlas may be used? That's open for discussion. What is certain is that one person's resistance to the idea (and bearing in mind that this is the same one person who was resistant to the idea of separate shaders) does not make it a bad idea. Nor does it make it a good one, but it does remain one that is worth some further discussion.

  9. #9
    Senior Member OpenGL Pro | Join Date: Apr 2010 | Location: Germany | Posts: 1,129
    def OffTopic:
    Quote Originally Posted by mhagain
    this is the same one person who was resistant to the idea of separate shaders
    Who's that?

  10. #10
    Member Regular Contributor | Join Date: Apr 2004 | Posts: 251
    What you say does not appear to be the case. The create functions are CreateBuffer, CreateTexture2D, CreateTexture3D, etc. There is no CreateGenericStorage or anything like that.
    Next, I don't see what the "casting" views are needed for. I mean, what's the difference between binding a texture view to a sampler and just binding the texture itself to the sampler?
    The ONLY thing the view adds is its ability to override the texture format. There is absolutely nothing else. But (as per my suggestion) the format change could be done without the views.

    If you mean you can bind the same object (texture) for different things - once as a texture, then as a color buffer, then as something else - this can already be done in D3D9 and in OpenGL,
    even though there are no views there. (In the newer OpenGL there is yet another set of binding points - image units.) So we don't need views for this either.
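
    Incidentally, the image units already allow a limited form of format reinterpretation in GL today: glBindImageTexture takes its own format parameter, which may differ from the texture's internal format as long as the texel sizes match (this is real ARB_shader_image_load_store behaviour; "tex" is assumed to be an existing GL_RGBA8 texture):

        // Bind level 0 of a GL_RGBA8 texture for image access, but
        // reinterpreted as GL_R32UI - legal because both formats are
        // 32 bits per texel.
        glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32UI);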

    The views don't appear to be a meaningful addition to the API - pure bloat.
    I think they were added for political rather than technical reasons. Something like "keep adding more COM objects, guys - the more there are, the less portable our API, the better for us; every bit counts".
