Help with semantics of Texture Mapping Units and sampler2D

A few years ago I asked a question about the semantics of texture units and shader2D uniform variables. The answers I got were rather rigid in their outlook.

I gave up then, but I’m back.

I’m looking for a ‘clean’ way to abstract my code away from explicitly assigning texture units (GL_TEXTURE3 etc.) to sampler2D variables. I am using an Nvidia GTX960 and OpenGL reports it has just 4 texture units. The 0th texture unit is reserved for the fixed pipeline so there are really only three available. If I wish to use various sampler2D GLSL uniform variables I, rather obviously, don’t care which texture unit they employ. What methods are commonly used to abstract away the necessary relationship?

I’d like to have some OpenGL code to find a ‘free’ texture unit anytime I want to address a GLSL shader2D variable. When one writes in C there is no need to know how many registers the CPU has or the assignment of variables to them. The compiler does all of that. I can’t change OpenGL or GLSL but I can attempt to hide the need to make manual ‘register’ assignments.

There is a level of indirection/assignment that is getting in the way of my writing a ‘clean’ approach.

I gave up then, but I’m back.

Yet you still haven’t managed to internalize what you were told in that thread over two years ago. For example:

I am using an Nvidia GTX960 and OpenGL reports it has just 4 texture units.

No, it doesn’t. According to the OpenGL database, the GTX 960 has 192. That is the value returned for GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.

4 is the value returned by GL_MAX_TEXTURE_UNITS, but as was clarified in the old thread, “texture unit” is a term that only applies to legacy OpenGL functionality. In a core context, you can’t even query that.
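
If you want to know how many units you actually have to work with, query the modern limits instead. A minimal sketch, assuming a current core-profile context and a loaded GL function-pointer set:

GLint combinedUnits = 0, fragmentUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combinedUnits); /* all shader stages combined */
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragmentUnits);          /* fragment stage only */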

You are merely confusing yourself by continuing to look at legacy functionality that was removed from OpenGL almost a decade ago.

The 0th texture unit is reserved for the fixed pipeline so there are really only three available.

No, it isn’t.

If I wish to use various sampler2D GLSL uniform variables I, rather obviously, don’t care which texture unit they employ. What methods are commonly used to abstract away the necessary relationship?

The common method to abstract such things away is to do exactly what I said in the last thread: develop a simple convention.

You can describe the purpose of most textures based on what you do with them: albedo textures, emissive textures, normal maps, shadow maps, etc. Generally speaking, most shaders won’t need to apply two emissive textures to the same object.

So you assign each conceptual texture an arbitrary number. Your albedo textures are unit 0; normal maps go in unit 1, shadow maps in unit 10, etc.
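
A sketch of what such a convention can look like in code; the enum names, the numbers, and albedoTex are made up here purely for illustration:

/* Pick the numbers once, use them everywhere. */
enum TextureBindings {
    TEXUNIT_ALBEDO = 0,
    TEXUNIT_NORMAL = 1,
    TEXUNIT_SHADOW = 10
};

/* Binding an albedo texture is then the same call no matter which shader is bound.
   albedoTex is assumed to be an existing texture object. */
glActiveTexture(GL_TEXTURE0 + TEXUNIT_ALBEDO);
glBindTexture(GL_TEXTURE_2D, albedoTex);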

It’s fundamentally no different from binding a texture to the “albedo_texture” string. In both cases, you require the GLSL shader to have something specified in it. In your case, you require that it use a particular string name. In my case, I require that it declare a particular sampler with a binding value of 0.

I’d like to have some OpenGL code to find a ‘free’ texture unit anytime I want to address a GLSL shader2D variable. When one writes in C there is no need to know how many registers the CPU has or the assignment of variables to them. The compiler does all of that. I can’t change OpenGL or GLSL but I can attempt to hide the need to make manual ‘register’ assignments.

There is a level of indirection/assignment that is getting in the way of my writing a ‘clean’ approach.

Nothing has changed since the last thread. What you want is highly unfriendly to performance. That’s precisely why it isn’t abstracted away; OpenGL is a low(er)-level API, and rebinding everything or changing a bunch of uniform state is not an acceptable way to maintain performance.

And let’s be honest here: textures aren’t the only resources that work this way. This kind of mapping is used for all OpenGL resources. Image load/store, UBOs, and SSBOs all have the same form of resource mapping. They may not use enumerators for their binding points, but they still have binding points set on shader constructs, and those points represent locations you have to bind the resource to.
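
The buffer bindings work the same way. For example, a sketch with arbitrary binding indices and made-up buffer names:

/* The GLSL side would declare blocks with matching layout(binding = N) qualifiers. */
glBindBufferBase(GL_UNIFORM_BUFFER, 0, cameraUbo);            /* UBO at binding point 0  */
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, particleSsbo);  /* SSBO at binding point 1 */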

To do what you want would require:

  1. For each program, building and keeping a list of every uniform variable which is of a sampler type (and probably of any other opaque type too). For each such variable, you would need to know its string name. (A sketch of this step follows the list.)

  2. Having a thread-local object which represents all of the textures currently bound. This would have to store, for each bound texture, the texture object bound, the current binding point for it, and the name in the shader for that binding point.

  3. Every time you change shaders, you must either change the uniform values for the samplers to point to any textures that use those names, or you must re-bind the textures to the binding points specified in the new shader.
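
To make the cost concrete, here is roughly what step 1 alone looks like. This is a minimal sketch assuming a linked program object named program, and it only checks sampler2D; every other sampler and image type you use would need to be added.

GLint uniformCount = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &uniformCount);
for (GLint i = 0; i < uniformCount; ++i)
{
    char name[256];     /* uniform names longer than this would be truncated */
    GLint arraySize = 0;
    GLenum type = 0;
    glGetActiveUniform(program, (GLuint)i, sizeof(name), NULL, &arraySize, &type, name);
    if (type == GL_SAMPLER_2D)
    {
        /* record program, name, and glGetUniformLocation(program, name)
           in whatever lookup structure step 2 maintains */
    }
}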

That is not a worthwhile use of your resources.

To put this in perspective, since our previous discussion, whole new APIs have been written for high-performance graphics development. Vulkan, Direct3D 12, Metal. Know something that all of these APIs didn’t do?

They didn’t get rid of binding points for resources. They did change the abstraction quite a bit, but there are still binding points rather than names of shader constructs.

Maybe there’s a reason for that…

Ok. I think you mean sampler2D.

I’m looking for a ‘clean’ way to abstract my code away from explicitly assigning texture units (GL_TEXTURE3 etc.) to sampler2D variables.

Look at Bindless texture.
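
To give a flavor of it: with ARB_bindless_texture the application hands the shader a 64-bit handle instead of choosing a texture unit at all. This is only a minimal sketch; it assumes the extension is supported, that tex is a complete texture object, and that the program declares a uniform named albedo_texture. Check for the extension and read the spec’s residency rules before relying on it.

/* Assumes GL_ARB_bindless_texture is available and 'tex' is a complete texture object. */
GLuint64 handle = glGetTextureHandleARB(tex);
glMakeTextureHandleResidentARB(handle);   /* handles must be made resident before use */
glProgramUniformHandleui64ARB(program,
                              glGetUniformLocation(program, "albedo_texture"),
                              handle);

/* Matching GLSL declaration:
   #extension GL_ARB_bindless_texture : require
   layout(bindless_sampler) uniform sampler2D albedo_texture;                          */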

Indeed, I meant sampler2D. I shall look at the Bindless texture link you sent. Your response is much less snarky than Alfonse’s. Thank you.

Alfonse is correct that querying GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS gives me 192 for the GTX 960. It means I can, for my shaders, make fixed texture image unit assignments for all my sampler2D uniform variables.

I’m puzzled why a query of GL_MAX_TEXTURE_UNITS gives an answer of 4. (4 of what?) Legacy things of some kind, I guess.

As Alfonse writes, “The common method to abstract such things away is to do exactly what I said in the last thread: develop a simple convention.” This is exactly the kind of thing a compiler can do.

Alfonse is correct that querying GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS gives me 192 for the GTX 960. It means I can, for my shaders, make fixed texture image unit assignments for all my sampler2D uniform variables.

FYI: that’s the total number of texture image units that the hardware provides. Each shader stage has its own limits, so the total accessible from (for example) the fragment shader stage is less than that.

For that hardware, GL_MAX_TEXTURE_IMAGE_UNITS (the fragment-shader limit) is 32.

As Alfonse writes, “The common method to abstract such things away is to do exactly what I said in the last thread: develop a simple convention.” This is exactly the kind of thing a compiler can do.

No, it can’t. Or rather, it can, but it wouldn’t be helpful.

The compiler can certainly make up an arbitrary mapping from binding points to in-shader names. But there’s no way that it can invent that mapping in such a way that you can a priori take a name and convert it to a number. So what you have to do is query what the binding point is from the in-shader variable name.

That’s different from inventing a convention. Why? Because if the shader invents a mapping that you have to query, then different shaders can invent different mappings. In one shader, “albedo_texture” might go to TIU 3; in another, it might go into TIU 5. So you cannot just bind your albedo texture into TIU 3; for each shader you use, you have to ask it which TIU to bind to.

The compiler cannot establish a convention that exists across different programs. Only you can do that.
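
For contrast, the query-it-per-shader approach that falls out of a compiler-invented mapping would look something like this sketch (the uniform name and albedoTex are assumptions for illustration):

/* Ask this particular program which unit its albedo sampler ended up on, then bind
   there. This has to be repeated for every program and every sampler it uses. */
GLint loc = glGetUniformLocation(program, "albedo_texture");
GLint unit = 0;
glGetUniformiv(program, loc, &unit);      /* read back the unit assigned to the sampler */
glActiveTexture(GL_TEXTURE0 + unit);
glBindTexture(GL_TEXTURE_2D, albedoTex);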

And it’s not like you can’t just set these things directly in the shader:


layout(binding = 0) uniform sampler2D albedo_texture;
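
With that qualifier in place, the application never needs a glUniform1i call for the sampler; it just binds the texture to the agreed unit. A sketch, with albedoTex standing in for your texture object:

glActiveTexture(GL_TEXTURE0);             /* unit 0, matching binding = 0 above */
glBindTexture(GL_TEXTURE_2D, albedoTex);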

I’m puzzled why a query of GL_MAX_TEXTURE_UNITS gives an answer of 4. (4 of what?)

It’s the maximum number of texture units that can be used with fixed-function texturing (glTexEnv(), gl[Multi]TexCoord(), etc.).

If you’re using shaders, that value isn’t relevant.

See Nvidia’s FAQ.
