Omission of GL_DEPTH_STENCIL_TEXTURE_MODE from sampler object state?

When GL_DEPTH_STENCIL_TEXTURE_MODE was introduced, I assumed it would let you sample both components of a single depth+stencil texture in one fragment shader. But because this state is tied exclusively to the texture object rather than to a sampler object, that appears to be impossible.

Consider the following fragment shader:

#version 440

layout (binding=0) uniform sampler2D  depth_tex;   // These are the same texture ...
layout (binding=1) uniform usampler2D stencil_tex; //   just bound to different Texture Image Units

in  vec2  st;

void main (void) {
  float depth   = texture (depth_tex,   st).r; // texture() returns a vec4; depth is in .r
  uint  stencil = texture (stencil_tex, st).r; // usampler2D returns a uvec4; stencil is in .r

  // ...
}

This is how the API would need to work to do what I want:

// Sampler objects do not track this state :(
glSamplerParameteri (depth_sampler,   GL_DEPTH_STENCIL_TEXTURE_MODE, GL_DEPTH_COMPONENT);
glSamplerParameteri (stencil_sampler, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);

// Texture Image Unit 0 will treat it as a depth texture
glBindSampler   (0, depth_sampler);
glActiveTexture (GL_TEXTURE0);
glBindTexture   (GL_TEXTURE_2D, depth_stencil_tex);

// Texture Image Unit 1 will treat it as a stencil texture
glBindSampler   (1, stencil_sampler);
glActiveTexture (GL_TEXTURE1);
glBindTexture   (GL_TEXTURE_2D, depth_stencil_tex);

Alas, since the texture object is the only thing that tracks this state, the fragment shader above cannot work without creating two texture objects. Texture views should make it possible to avoid duplicating the underlying image data, but that seems needlessly complicated when we already have the ability to store state per-sampler.

Is this an oversight in the API, or is there a fundamental reason why this is not a sampler state? Texture comparison (GL_TEXTURE_COMPARE_MODE) for sampler2DShadow is stored per-sampler, so it seems logical to me that depth/stencil mode would be too.
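For comparison, this is what the per-sampler comparison state already looks like with the existing API (shadow_sampler and plain_sampler are just placeholder sampler object names): two sampler objects can read the same depth texture in two different ways without duplicating anything.

// Comparison mode genuinely lives in the sampler object, so the same depth
// texture can be sampled with and without comparison at the same time.
glSamplerParameteri (shadow_sampler, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glSamplerParameteri (shadow_sampler, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

glSamplerParameteri (plain_sampler,  GL_TEXTURE_COMPARE_MODE, GL_NONE);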

Is this an oversight in the API, or is there a fundamental reason why this is not a sampler state?

I’d say that it is because it changes the fundamental nature of the texture (pulling from different components of the texture, which not even comparison mode changes). Or because D3D did it that way. Or because NVIDIA or AMD did it that way in their hardware.

Regardless of the reason, it doesn't really matter: if you have access to stencil texturing, you almost certainly also have access to texture views. So just create a view of the texture with different parameters.
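Here is a minimal sketch, assuming depth_stencil_tex was allocated as an immutable GL_DEPTH24_STENCIL8 texture with glTexStorage2D (texture views require immutable storage); the name stencil_view is illustrative:

// Create a view that shares depth_stencil_tex's storage
GLuint stencil_view;
glGenTextures (1, &stencil_view);
glTextureView (stencil_view, GL_TEXTURE_2D, depth_stencil_tex,
               GL_DEPTH24_STENCIL8, 0, 1, 0, 1);

// The view carries its own texture parameter state, so only it returns stencil
glBindTexture   (GL_TEXTURE_2D, stencil_view);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // stencil sampling requires NEAREST
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);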

I’d say that it is because it changes the fundamental nature of the texture (pulling from different components of the texture, which not even comparison mode changes)

That makes sense; view textures still seem like an unnecessary hack for this, though.

This is not a case where I am trying to reinterpret the image data (e.g. GL_R16UI as GL_R16I); I am using identical (albeit specially packed) internal formats with a single piece of state changed. That is exactly the sort of thing sampler objects were created for; you would not use texture views to sample the same texture with different filter states, for example.

But you are reinterpreting the data. You want to get at some integer data that happens to be stored in a texture that, normally, stores floating-point data. This is not a mere “sampling” case like changing filtering parameters or depth comparison modes; it’s a conceptual change in what the texture’s data really is. So even if you think of a view texture as merely a “reinterpretation” of a texture, that’s what you’re trying to do.

Think of it like this. If you had separate depth and stencil textures (which you could, but OpenGL doesn’t require that implementations allow you to use separate depth and stencil images in FBOs), you would have two texture objects. Each object would be manipulated separately, with their own texture parameters and such. By using view textures, you have the exact same effect. The only difference is that you’ve optimized the storage of your data by putting both textures in the same memory.
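Concretely, with a view like the one sketched above, your original fragment shader works as written (names carried over from the earlier snippets):

// Unit 0: the base texture, left in its default GL_DEPTH_COMPONENT mode
glActiveTexture (GL_TEXTURE0);
glBindTexture   (GL_TEXTURE_2D, depth_stencil_tex);

// Unit 1: the view, configured to return GL_STENCIL_INDEX
glActiveTexture (GL_TEXTURE1);
glBindTexture   (GL_TEXTURE_2D, stencil_view);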

Also, you should not limit yourself by how you think something ought to be used. View textures are not for “reinterpreting the image data.” They are a feature that allows you to use the same storage in different ways. That might be to take a slice of a 2D array texture and use it as a 2D texture. That might be to reinterpret an image’s pixel data in different ways. Or it might be to allow you to use the same storage with two different sets of texture parameters. Which is what you want to do.
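For instance, exposing one layer of a 2D array texture as an ordinary 2D texture is the same single call (array_tex, layer, and GL_RGBA8 are illustrative):

GLuint slice_view;
glGenTextures (1, &slice_view);
glTextureView (slice_view, GL_TEXTURE_2D, array_tex, GL_RGBA8,
               0, 1,        // first mip level, one level
               layer, 1);   // the layer to expose, one layer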

Tools are what you make of them.