Allow for DEPTH_STENCIL_TEXTURE_MODE to be sampler state

Worried that I read the specification wrong, but I think it says that DEPTH_STENCIL_TEXTURE_MODE is texture state only (i.e. not sampler state; comparing table 23.15 against 23.18 is what leads me to think this).

At any rate, as a convenience, make DEPTH_STENCIL_TEXTURE_MODE sampler state as well. As a side note: yes, one can do the TextureView thing, but the sampler approach seems much nicer.
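To make the request concrete, a rough sketch (the first call is today's GL 4.3 / ARB_stencil_texturing API; the second is the hypothetical sampler version I am asking for; `depth_stencil_tex` and `sampler` are placeholder names):

```c
/* Today: DEPTH_STENCIL_TEXTURE_MODE is texture object state. */
glBindTexture(GL_TEXTURE_2D, depth_stencil_tex);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);

/* The request (hypothetical, NOT in the spec): accept the same pname on a
 * sampler object, so the mode could vary per texture unit. */
glSamplerParameteri(sampler, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
```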

Along those lines, why aren't TEXTURE_COMPARE_MODE, TEXTURE_COMPARE_FUNC, and for that matter all the SWIZZLEs, in sampler state too?

I’m guessing that the ARB wants to keep Sampler Objects restricted to what their D3D equivalents do. This could be for hardware reasons (sampler objects representing some fundamental piece of hardware, with swizzle state and others being a different set of parameters) or just D3D compatibility ones.

Oh, and the comparison mode is already part of sampler state. I’m not sure how you missed that, since they’re the second and third entries in 23.18 :wink:
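i.e. (a minimal sketch; `sampler` is any sampler object name):

```c
/* Shadow-map comparison has been per-sampler state since GL 3.3: */
glSamplerParameteri(sampler, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glSamplerParameteri(sampler, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
```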

Truly, amazingly embarrassing. Apparently I cannot read a table :stuck_out_tongue: Oh my, there are two tables I cannot read correctly: looking at table 8.21, DEPTH24_STENCIL8 is NOT in the 32-bit column. So my initial comment about TextureView is wrong too.

What the?! This is insane: one can only read a DEPTH24_STENCIL8 as either stencil or depth but not both, even if one were to use 2 texture units?! This looks like an oversight to me, not a hardware issue… Weird.

If it truly is not a hardware issue, that doubles my case for it.

From the paragraph right before that table:

Oops. Indeed. Note to self: writing too late at night == writing junk.

So my initial comment on TextureView was correct after all… and since one can get the same functionality via TextureView, I have a hard time believing it is a hardware issue.

Seems silly to me to make a new texture view just to change how the data is sampled… I'd argue that anything that affects sampling without touching the texture data should be in the sampler object API too.

I’d bet there were flame wars internally at the ARB about it too, but I cannot really figure out the argument for the current situation.

I looked over some of AMD’s documentation, and it seems that their hardware (at least up to HD 6xxx’s) has a clear dichotomy of state. There is texture state and sampler state. Sampler state is exactly what OpenGL and D3D’s Sampler Objects cover: the sampling state parameters passed directly to the “fetch from texture” sampling opcode.

The parameters for texture swizzle (and it’s likely that stencil texturing is nothing more than a special swizzle operation) are part of the texture or buffer object resource itself.
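As a concrete illustration of where that split lands in the API (a sketch; `tex` stands for any single-channel texture):

```c
/* Swizzle is set on the texture object, not the sampler: e.g. emulating a
 * legacy luminance texture from a GL_R8 texture. */
GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);
```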

It’s not that they couldn’t change it. It’s that to do so would make the implementation less practical. As it stands, a sampler object can be represented as a few bytes that get uploaded to a certain place in memory. What you want means that there has to be cross-talk when the texture object and sampler object get used together.

Admittedly, it’s very inelegant, as it’s exposing a hardware limitation to the user in a way that’s meaningless for the user. One would expect fetching parameters like swizzles to be part of the sampler object. But due to hardware restrictions, they don’t do that.

I’m a little late to the party, but have some comment on the topic.

DEPTH_STENCIL_TEXTURE_MODE is really something similar to the texture swizzle parameters. They are part of the texture object because it is the texture that has the format that could be swizzled. Having it as sampler state wouldn’t make much sense from either a software or a hardware point of view.

Don’t think of sampler objects as something that exists purely in the API. Core OpenGL is meant to expose only what the hardware is capable of.

I don’t quite buy that argument.

Indeed, one can easily (ab)use the TextureView interface to do what I am asking for. One point that is worth noting is that texture swizzle does not necessarily make sense for all texture formats. That is dead-on true, but the specification already violates this since, as Alfhonse pointed out, TEXTURE_COMPARE_MODE and TEXTURE_COMPARE_FUNC are part of sampler state and they only apply to depth textures. … we can argue that those values specify how to sample, but one can also argue that swizzling does too.

The argument on the hardware side seems paper-thin to me because one can already emulate this behavior by doing the TextureView thing. Admittedly, again as Alfhonse pointed out, it potentially complicates a GL implementation, since for a given piece of hardware (ATI in this case, but I imagine NVIDIA and Intel likely follow D3D conventions too) texture state is the data source plus how to interpret the data, while sampler state is filtering plus some of how to interpret the data. My knee-jerk thought is that GL does it this way mostly because D3D does it this way… which is not a sufficient reason to limit GL to what D3D does.

We are not talking about a big deal, really; a developer can always emulate this by wrapping the GL jazz in their own crud, but that seems foolish to demand. It is not like doing this is going to make anything slower, and it is not like doing this is going to make a GL implementation that much harder to write… we are just talking about checking whether the current hardware texture state matches that of the sampler state and then sending those numbers through…
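Something along these lines (a hypothetical wrapper sketch; all names are made up, and it only shows the state-latching idea, not a complete binding layer):

```c
typedef struct {
    GLuint name;    /* the actual GL sampler object */
    GLenum ds_mode; /* GL_DEPTH_COMPONENT or GL_STENCIL_INDEX */
} MySampler;

/* Latch the depth/stencil mode onto the texture at bind time, and only
 * touch GL when it actually differs -- the "extra if check" in question. */
static void my_bind(GLuint unit, GLuint tex, const MySampler *s, GLenum *cached_mode)
{
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, tex);
    if (*cached_mode != s->ds_mode) {
        glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, s->ds_mode);
        *cached_mode = s->ds_mode;
    }
    glBindSampler(unit, s->name);
}
```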

The argument would make a lot more sense if the sampler object were selectable from the shader, or if a shader could determine how a sampler samples in GLSL… but none of that is the case… so… returning to my point: we can emulate it now via TextureView, at no harm… and uses for accessing both the depth and stencil do exist, but it seems silly for the API to force it to be done this way.

Here’s the really silly part: the AMD documentation clearly states that sRGB conversion… is part of the sampler state. OpenGL (without EXT_texture_sRGB_decode) makes this not merely part of the texture’s state, but part of the texture’s format, a change to which requires destroying the old storage and creating new storage (or aliasing it with a view).

So they already have to check the texture object and modify what they upload based on it. Though admittedly, the sampler object could just store two versions, sRGB and non-sRGB, and upload one or the other based on the texture.
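(For what it's worth, the extension mentioned above exposes exactly that; a sketch, assuming EXT_texture_sRGB_decode is advertised:)

```c
/* With EXT_texture_sRGB_decode, the decode step really is sampler state: */
glSamplerParameteri(sampler, GL_TEXTURE_SRGB_DECODE_EXT, GL_SKIP_DECODE_EXT);
```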

> We are not talking about a big deal, really; a developer can always emulate this by wrapping the GL jazz in their own crud, but that seems foolish to demand.

Or they could not do that. Generally speaking, there’s not a whole lot of need to use one texture object with lots of different sampling parameters. It is important to be able to do that, but the uses aren’t exactly legion. And even fewer are the times when you need to change swizzle masks multiple times for the same texture.

So there’s no reason to “emulate” this in any way. Just create a couple of views of the texture. One for depth, one for stencil. Or 2-3 for whatever different swizzle masks you need. You treat them as “different” textures. Outside of the depth/stencil thing, there just isn’t that much need to modify these parameters on the fly.
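If memory serves, a same-format view is legal even for depth/stencil (formats not listed in the compatibility table must match the original exactly), so the whole thing is a few lines (a sketch; `ds_tex` is an immutable-storage GL_DEPTH24_STENCIL8 texture):

```c
/* GL 4.3 / ARB_texture_view: two views that keep the original format, each
 * with its own DEPTH_STENCIL_TEXTURE_MODE, so depth and stencil can be
 * bound to different units at the same time. */
GLuint views[2];
glGenTextures(2, views); /* view names must be generated but never bound */
glTextureView(views[0], GL_TEXTURE_2D, ds_tex, GL_DEPTH24_STENCIL8, 0, 1, 0, 1);
glTextureView(views[1], GL_TEXTURE_2D, ds_tex, GL_DEPTH24_STENCIL8, 0, 1, 0, 1);

glBindTexture(GL_TEXTURE_2D, views[0]);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_DEPTH_COMPONENT);
glBindTexture(GL_TEXTURE_2D, views[1]);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
```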

> we are just talking about checking whether the current hardware texture state matches that of the sampler state and then sending those numbers through…

Well, consider how the driver makers want to implement it, for performance reasons. In D3D, these state objects are immutable, so there, they can just bake some data into memory and shove it at the card as needed. If you bind a texture or sampler, they know you intend to use it exactly as it is written, so they can send it.

In OpenGL, the objects are always mutable. So they can’t bake the data until you actually render with the object. But that’s fine; if you render with it and don’t ever change their state, they can get equivalent performance to D3D’s immutable state objects by baking the data at render time.

The problem is that, if texture and sampler objects don’t conform to the hardware, then they’ll have to modify the “baked” data every time you use them. Each pair of texture+sampler will need to reconfigure the register data. Every time. Dynamically, every frame. So you can never get equivalent performance to D3D’s immutable state objects.
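To make the contrast concrete, a hypothetical driver-side sketch (every name here is made up; real register packing is hardware-specific):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint32_t filter, wrap, lod, compare; } gl_sampler_state;
typedef struct { uint32_t words[4]; bool baked; } hw_sampler;

/* When sampler state maps 1:1 onto hardware registers, it can be baked
 * once after a change and reused on every subsequent draw. */
static const uint32_t *registers_for_draw(hw_sampler *hw, const gl_sampler_state *gl)
{
    if (!hw->baked) {
        hw->words[0] = gl->filter;
        hw->words[1] = gl->wrap;
        hw->words[2] = gl->lod;
        hw->words[3] = gl->compare;
        hw->baked = true;
    }
    /* If swizzle or depth/stencil mode lived here too, this function would
     * also need the bound texture as input, and "baked" could never survive
     * a texture change -- re-packing every draw, which is the objection. */
    return hw->words;
}
```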

Texture views allow you to reinterpret the internal format (e.g. “cast” a floating point texture to an integer format). Swizzle is for defining how you want your components to be interpreted (e.g. simulate luminance or intensity textures). The two things are completely orthogonal to each other. Not to mention that you cannot use texture views to reinterpret depth/stencil textures.
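For example (a sketch; `float_tex` must be an immutable-storage GL_R32F texture, and GL_R32F/GL_R32UI share the 32-bit view class):

```c
/* Reinterpret the raw bit pattern of a float texture as unsigned integers. */
GLuint uint_view;
glGenTextures(1, &uint_view);
glTextureView(uint_view, GL_TEXTURE_2D, float_tex, GL_R32UI, 0, 1, 0, 1);
```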

That’s a different story. Texture compare mode/func tell how you sample the texture, i.e. that you want the sampling to go through a conversion function, namely the compare function.
The only really badly designed thing in OpenGL in this regard is that we have sRGB textures, while in practice this once again should be something that belongs to sampler state, as it tells how to sample from the texture (again, a conversion function is applied).

Swizzling, and thus depth/stencil texture mode are a different kind of animal.

No, that’s not true. A depth/stencil format texture cannot be “viewed” as a depth-only or a stencil-only format. If you check the spec, there’s no compatibility class for depth/stencil textures, and thus they cannot be cast to anything else.

In practice, this is something that would make things slower no matter whether a wrapper or the driver does it, as you have to cross-validate texture object and sampler object state.

In theory, you could do that. D3D does have separate sampler and texture state visible to the shaders too. It’s just that OpenGL historically has only a single object to represent both in GLSL. That’s all.

I can appreciate the point of view that swizzling is not sampler state, but I utterly disagree. Call me stubborn, or something else :p. I generally view sampling as how to interpret and filter the data. The obvious motivation is that how to filter the data specifies how to interpolate the texel values, which in one sense is specifying how to approximate a function. A nice paper, which I would bet many here have read, really hammers that point home: A Pixel Is Not a Little Square (PDF warning): http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

But it might just be me… I view a “texture” as just the bytes of the texels, and the interpretation of those bytes as sampling… that hardware treats “texture state” as essentially format plus data source, and sampling as function expansion, is not awful, but, well, fishy to me. I highly doubt that my request would have any serious impact on performance; it is not like one is binding zillions of samplers and textures such that the extra if-checks are going to hammer anything. Moreover, since the usual pattern is that samplers don’t change their state, a GL implementation will likely get its wins from latching state anyway.

It is just a request, because from a user point of view I can do it more or less anyway, but the extra (one-time per-texture) cost seems silly and a nuisance.

On the other hand, if GL had an interface in GLSL to choose the sampler in the shader, then the hardware argument would win big time… if there were an extension (or if it were in core) for just that, then I’d happily take the current inconvenience in exchange for that functionality… but right now, it seems silly.

It probably wouldn’t have a “serious” impact on performance, but it would have one. It’s not really about the extra if-checks, but the fact that if you change either a texture binding or a sampler binding, you have to cross-validate both, as one depends on the other. This is the key problem here.

Also, the main point of OpenGL is to abstract the underlying hardware. It should expose everything that’s in the hardware and nothing that isn’t. Exposing something that is software-only should be the responsibility of a software abstraction layer.