Part of the Khronos Group
OpenGL.org


Page 1 of 2 (results 1 to 10 of 12)

Thread: Allow for DEPTH_STENCIL_TEXTURE_MODE to be sampler state

  1. #1
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    590

    Allow for DEPTH_STENCIL_TEXTURE_MODE to be sampler state

    I'm worried that I read the specification wrong, but I believe it says that DEPTH_STENCIL_TEXTURE_MODE is texture state only, i.e. not sampler state (comparing table 23.15 against table 23.18 is what leads me to think this).

    At any rate, as a convenience, make DEPTH_STENCIL_TEXTURE_MODE sampler state as well. As a side note: yes, one can do the TextureView thing, but the sampler approach seems much nicer.

    Along those lines, why aren't TEXTURE_COMPARE_MODE, TEXTURE_COMPARE_FUNC, and for that matter all the SWIZZLEs in sampler state too?
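
    To make the request concrete, here is a sketch in C. The glTexParameteri call is what GL 4.3 actually specifies; the glSamplerParameteri call with that pname is the hypothetical convenience being asked for and is NOT valid GL today:

    ```c
    #include <GL/glcorearb.h>

    void set_stencil_sampling(GLuint tex, GLuint sampler)
    {
        /* Current GL 4.3: DEPTH_STENCIL_TEXTURE_MODE is texture state. */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE,
                        GL_STENCIL_INDEX);

        /* HYPOTHETICAL (this thread's request, not in any spec):
         * accept the same pname as sampler state, so two sampler
         * objects could read depth and stencil from one texture. */
        glSamplerParameteri(sampler, GL_DEPTH_STENCIL_TEXTURE_MODE,
                            GL_STENCIL_INDEX);
    }
    ```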

  2. #2
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    I'm guessing that the ARB wants to keep Sampler Objects restricted to what their D3D equivalents do. This could be for hardware reasons (sampler objects representing some fundamental piece of hardware, with swizzle state and others being a different set of parameters) or just D3D compatibility ones.

    Oh, and the comparison mode is already part of sampler state. I'm not sure how you missed that, since they're the second and third entries in table 23.18.
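
    For reference, those two entries correspond to plain sampler-object calls (real GL 3.3+ API, not a proposal), e.g. for shadow-map comparison:

    ```c
    #include <GL/glcorearb.h>

    void set_shadow_compare(GLuint sampler)
    {
        /* Depth comparison is configured entirely on the sampler
         * object; no texture state is touched. */
        glSamplerParameteri(sampler, GL_TEXTURE_COMPARE_MODE,
                            GL_COMPARE_REF_TO_TEXTURE);
        glSamplerParameteri(sampler, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    }
    ```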

  3. #3
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    590
    Quote Originally Posted by Alfonse
    Oh, and the comparison mode is already part of sampler state. I'm not sure how you missed that, since they're the second and third entries in 23.18
    Truly, amazingly, embarrassing. Apparently I cannot read a table :P Oh my, there are two tables I cannot read correctly: looking at table 8.21, DEPTH24_STENCIL8 is NOT in the 32-bit column. So my initial comment about TextureView is wrong too.

    What the?! This is insane: one can only read a DEPTH24_STENCIL8 as either stencil or depth but not both, even if one were to use two texture units?! This looks like an oversight to me, not a hardware issue... Weird.

    If it truly is not a hardware issue, that doubles my case for it.

  4. #4
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    From the paragraph right before that table:

    Quote Originally Posted by The Spec
    The two textures’ internal formats must be compatible according to table 8.21 if the internal format exists in that table. The internal formats must be identical if not in that table.

  5. #5
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    590
    Oops. Indeed. Note to self: writing too late at night == writing junk.

    So my initial comment on TextureView was correct after all. Since one can get the same functionality via TextureView, I have a hard time believing it is a hardware issue.

    Seems silly to me to make a new texture view just to change how data is sampled... I'd argue that anything affecting sampling that does not touch the texture data should be in the sampler object API too.

    I'd bet there were flame wars internally at the ARB about it too, but I cannot really figure out a side for the current situation.

  6. #6
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    I looked over some of AMD's documentation, and it seems that their hardware (at least up to HD 6xxx's) has a clear dichotomy of state. There is texture state and sampler state. Sampler state is exactly what OpenGL and D3D's Sampler Objects cover: the sampling state parameters passed directly to the "fetch from texture" sampling opcode.

    The parameters for texture swizzle (and it's likely that stencil texturing is nothing more than a special swizzle operation) are part of the texture or buffer object resource itself.

    It's not that they couldn't change it. It's that to do so would make the implementation less practical. As it stands, a sampler object can be represented as a few bytes that get uploaded to a certain place in memory. What you want means that there has to be cross-talk when the texture object and sampler object get used together.

    Admittedly, it's very inelegant, as it's exposing a hardware limitation to the user in a way that's meaningless for the user. One would expect fetching parameters like swizzles to be part of the sampler object. But due to hardware restrictions, they don't do that.

  7. #7
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985
    I'm a little late to the party, but I have some comments on the topic.

    DEPTH_STENCIL_TEXTURE_MODE is really something similar to the texture swizzle parameters. They are part of the texture object because it is the texture that has the format to be swizzled. Having it as sampler state wouldn't make much sense from either a software or a hardware point of view.

    Don't think of sampler objects as something that exists purely in the API. Core OpenGL is meant to expose only what the hardware is capable of.
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  8. #8
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    590
    I don't quite buy that argument.

    Indeed, one can easily (ab)use the TextureView interface to do what I am asking for. One point worth noting is that texture swizzle does not necessarily make sense for all texture formats. That is dead-on true, but the specification already violates this since, as Alfonse pointed out, TEXTURE_COMPARE_MODE and TEXTURE_COMPARE_FUNC are part of sampler state and they only apply to depth textures. We can argue that those values describe how to sample, but one can also argue swizzling does too.

    The argument on the hardware side seems paper-thin to me because one can already emulate this behavior by doing the TextureView thing. Admittedly, again as Alfonse pointed out, it potentially complicates a GL implementation, since on a given piece of hardware (ATI in this case, but I imagine NVIDIA and Intel likely follow D3D conventions too) the hardware texture state is the data source plus how to interpret the data, while the sampler state is filtering plus some of how to interpret the data. My knee-jerk thought is that GL does it this way mostly because D3D does it this way... which is not a sufficient reason to limit it to what D3D does.

    We are not talking about a big deal really; a developer can always emulate this by wrapping the GL jazz in their own crud, but that seems foolish to demand. It is not as if doing this is going to make anything slower, or make a GL implementation that much harder to write... we are just talking about checking whether the current hardware texture state matches that of the sampler state and then sending those numbers through...

    The argument would make a lot more sense if the sampler object were selectable from the shader, or if a shader could determine in GLSL how a sampler samples... but none of that is the case. So, returning to my point: we can emulate it now via TextureView at no harm, and the uses for accessing both the depth and stencil do exist, but it seems silly for the API to force it to be done this way.

  9. #9
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    Here's the really silly part: the AMD documentation clearly states that sRGB conversion... is part of the sampler state. OpenGL (without EXT_texture_sRGB_decode) makes this not merely part of the texture's state, but part of the texture's format, a change to which requires destroying the old storage and creating new storage (or remapping with a view).

    So they already have to check the texture object and modify what they upload based on that. Though admittedly, the sampler object could just store versions for sRGB and not sRGB, and upload one or the other based on the texture.
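
    (With EXT_texture_sRGB_decode the decode is in fact exposed per sampler; a sketch, assuming the extension is present:)

    ```c
    #include <GL/glcorearb.h>
    #include <GL/glext.h>

    void skip_srgb_decode(GLuint sampler)
    {
        /* EXT_texture_sRGB_decode: suppress sRGB-to-linear conversion
         * for this sampler only; the texture's format is untouched. */
        glSamplerParameteri(sampler, GL_TEXTURE_SRGB_DECODE_EXT,
                            GL_SKIP_DECODE_EXT);
    }
    ```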

    We are not talking about a big deal really; a developer can always emulate this by wrapping the GL jazz in their own crud, but that seems foolish to demand.
    Or they could not do that. Generally speaking, there's not a whole lot of need to use one texture object with lots of different sampling parameters. It is important to be able to do that, but the uses aren't exactly legion. And even fewer are the times when you need to change swizzle masks multiple times for the same texture.

    So there's no reason to "emulate" this in any way. Just create a couple of views of the texture. One for depth, one for stencil. Or 2-3 for whatever different swizzle masks you need. You treat them as "different" textures. Outside of the depth/stencil thing, there just isn't that much need to modify these parameters on the fly.
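
    A sketch of that approach (assuming ds_tex was allocated immutably with glTexStorage2D as GL_DEPTH24_STENCIL8; both views must keep that same internal format, since depth/stencil formats have no view compatibility class):

    ```c
    #include <GL/glcorearb.h>

    void make_ds_views(GLuint ds_tex, GLuint *depth_view, GLuint *stencil_view)
    {
        /* View 1: sample the depth component. */
        glGenTextures(1, depth_view);
        glTextureView(*depth_view, GL_TEXTURE_2D, ds_tex,
                      GL_DEPTH24_STENCIL8, 0, 1, 0, 1);
        glBindTexture(GL_TEXTURE_2D, *depth_view);
        glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE,
                        GL_DEPTH_COMPONENT);

        /* View 2: same storage, but sample the stencil index. */
        glGenTextures(1, stencil_view);
        glTextureView(*stencil_view, GL_TEXTURE_2D, ds_tex,
                      GL_DEPTH24_STENCIL8, 0, 1, 0, 1);
        glBindTexture(GL_TEXTURE_2D, *stencil_view);
        glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE,
                        GL_STENCIL_INDEX);
    }
    ```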

    we are just talking about checking whether the current hardware texture state matches that of the sampler state and then sending those numbers through...
    Well, consider how the driver makers want to implement it, for performance reasons. In D3D, these state objects are immutable, so there, they can just bake some data into memory and shove it at the card as needed. If you bind a texture or sampler, they know you intend to use it exactly as it is written, so they can send it.

    In OpenGL, the objects are always mutable. So they can't bake the data until you actually render with the object. But that's fine; if you render with it and don't ever change their state, they can get equivalent performance to D3D's immutable state objects by baking the data at render time.

    The problem is that, if texture and sampler objects don't conform to the hardware, then they'll have to modify the "baked" data every time you use them. Each pair of texture+sampler will need to reconfigure the register data. Every time. Dynamically, every frame. So you can never get equivalent performance to D3D's immutable state objects.

  10. #10
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985
    Quote Originally Posted by kRogue View Post
    Indeed, one can easily (ab)use the TextureView interface to do what I am asking for. One point worth noting is that texture swizzle does not necessarily make sense for all texture formats.
    Texture views allow you to reinterpret the internal format (e.g. "cast" a floating point texture to an integer format). Swizzle is for defining how you want your components to be interpreted (e.g. simulate luminance or intensity textures). The two things are completely orthogonal to each other. Not to mention that you cannot use texture views to reinterpret depth/stencil textures.
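
    (For example, simulating a legacy LUMINANCE texture over a GL_RED texture with swizzle state, which is texture state, not sampler state:)

    ```c
    #include <GL/glcorearb.h>

    void swizzle_as_luminance(void)
    {
        /* Broadcast the red channel to RGB and force alpha to one,
         * reproducing old-style GL_LUMINANCE sampling. */
        GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
        glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);
    }
    ```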

    Quote Originally Posted by kRogue View Post
    That is dead-on true, but the specification already violates this since, as Alfonse pointed out, TEXTURE_COMPARE_MODE and TEXTURE_COMPARE_FUNC are part of sampler state and they only apply to depth textures.
    That's a different story. Texture compare mode/func describe how you sample the texture, i.e. that you want the sampling to go through a conversion function, namely the compare function.
    The only really badly designed thing in OpenGL in this regard is that we have sRGB textures, while in practice this once again should be something that belongs to sampler state, as it tells how to sample from the texture (again, a conversion function is applied).

    Quote Originally Posted by kRogue View Post
    We can argue that those values describe how to sample, but one can also argue swizzling does too.
    Swizzling, and thus depth/stencil texture mode are a different kind of animal.

    Quote Originally Posted by kRogue View Post
    The argument on the hardware side seems paper-thin to me because one can already emulate this behavior by doing the TextureView thing.
    No, that's not true. A depth/stencil format texture cannot be "viewed" as a depth-only or stencil-only format. If you check the spec, there's no compatibility class for depth/stencil textures and they thus cannot be cast to anything else.

    Quote Originally Posted by kRogue View Post
    We are not talking about a big deal really; a developer can always emulate this by wrapping the GL jazz in their own crud, but that seems foolish to demand. It is not as if doing this is going to make anything slower, or make a GL implementation that much harder to write... we are just talking about checking whether the current hardware texture state matches that of the sampler state and then sending those numbers through...
    In practice, this is something that would make things slower, no matter whether a wrapper or the driver does it, as you have to cross-validate texture object and sampler object state.

    Quote Originally Posted by kRogue View Post
    The argument would make a lot more sense if the sampler object were selectable from the shader, or if a shader could determine in GLSL how a sampler samples... but none of that is the case.
    In theory, you could do that. D3D does have separate sampler and texture state visible to the shaders too. It's just that OpenGL historically has only a single object to represent both in GLSL. That's all.
