Semantics of Texture Mapping Units

I have a strong dislike for the naming of Texture Mapping Units ([Texture mapping unit - Wikipedia](https://en.wikipedia.org/wiki/Texture_mapping_unit)) in OpenGL and GLSL.

Calling them GL_TEXTURE0, GL_TEXTURE1, GL_TEXTURE2, GL_TEXTURE3, etc. in OpenGL is quite confusing, since they are NOT textures but TMUs. Further, within GLSL they are variables called Sampler2D etc., which is a further confusion.

Can OpenGL & GLSL have some new constants that map to these weird old ones?

```c
#define GL_TMU   Sampler2D
#define GL_TMU_0 GL_TEXTURE0
#define GL_TMU_1 GL_TEXTURE1
#define GL_TMU_2 GL_TEXTURE2
#define GL_TMU_3 GL_TEXTURE3
```
It would make things clearer. I can do this in my own code but I think it would be beneficial to others too. Have I got this wrong?

  1. They’re “texture units”, not “texture mapping units”; they don’t have any involvement in mapping. Realistically, they’re not even “units” any more, just binding points.

  2. Whether you like or dislike the names, they’re here for good. Using made-up names even in your own code isn’t exactly a good idea; you’re still going to see the standard names in every other piece of OpenGL code you encounter, and you’d be well-advised to fix the names if you ever post code with the intent of having anyone else read it.

  3. sampler2D (note: lower-case “s”) is a type, not a variable name. Along with 39 others as of GLSL 4.50. I wouldn’t be surprised if that list increases in the future. And each GLSL type has a corresponding enumerant in the client API for e.g. glGetActiveUniform.

It could probably be argued that these should not have been specified as GLenums in the first place, but GLuint instead, using values of 0 upwards. Done is done however so far as the spec is concerned, and unless texturing is completely respecified it hardly seems a worthwhile use of the ARB’s time to change them.

[QUOTE]I have a strong dislike for the naming of Texture Mapping Units in OpenGL and GLSL.[/QUOTE]

If you really want to get technical, the OpenGL specification calls them “texture image units” (the term “texture unit” refers to the texture image unit plus the old glTexEnv stuff).

In any case, just use GL_TEXTURE0 + i, where i is the index of the texture image unit. That way, you can deal in texture image unit indices, which is the important thing to begin with, not what they are “named” in code.

Thank you all for your replies.

I remain rather confused by the nomenclature. If they are not Texture Mapping Units, in the Wikipedia sense, what the heck are they? Simply referring back to the specification is somewhat circular. On my GTX 960 I know I have four of them, so they are something, not merely figments. What do they do, other than act as some kind of go-between for a texture ID and a variable in GLSL? Apart from there being only four, why should I need to know about them at all? Why doesn’t OpenGL simply manage them on some kind of LRU basis? They violate the zero-one-or-infinity rule ([Zero one infinity rule - Wikipedia](https://en.wikipedia.org/wiki/Zero_one_infinity_rule)), which is bad. It’s the kind of thing compilers are supposed to abstract away from the programmer.

The names for these things are a mess: GL_TEXTURE0 is not a texture. sampler2D is not a sampler. The names of things are important.

I like my code to be readable, with nice long 32 character identifiers and types and constants that are themselves meaningful. I’m sure you agree.

So what does a Texture Image Unit do?

I’m not sure that the Wikipedia article is totally correct; my recollection is that they were originally called texture memory units, and the name refers to a quirk of old 3dfx Voodoo 2 hardware (and possibly none other), which was IIRC the first consumer multitexture hardware available (so it’s the one that got to make all the mistakes), where each currently bound texture had its own dedicated memory. So there might be 2 MB for textures that would be bound to TMU0, and another 2 MB for textures that would be bound to TMU1.

Their purpose is to allow for multitexturing: rendering with more than one texture simultaneously bound. If you have 4 of them (and you actually have more, but NVIDIA will only expose 4 to the old fixed pipeline), that means you can draw with up to 4 simultaneously bound textures. TMU0 might contain a diffuse map, TMU1 a light map, TMU2 a detail texture, and you would configure the hardware (via glTexEnv calls or a shader) to blend between the textures.

You probably already know this, but it’s worth highlighting. The names GL_TEXTURE0, etc. therefore actually are a more accurate reflection of their usage than “TMU0”, etc., because they are used for simultaneously bound textures in your program code.

As for compilers abstracting this: it’s actually nothing to do with compilers. It’s a specification that abstracts the way multitexturing hardware circa 1998 worked.

As I said before, nowadays they’re effectively binding points, something to which you can attach a texture name (an integer returned from glGenTextures) so that you can subsequently reference it in a shader (in the absence of the bindless texture extension, the only way to identify a specific texture is to store the index of a texture unit in a sampler uniform).

Any implementation of OpenGL 3 or later provides at least 48 texture units, i.e. glActiveTexture(GL_TEXTURE0+i) is valid for i between 0 and 47 inclusive. How that corresponds to hardware (if at all) isn’t dealt with by the specification.

Just that. Well, slightly more than that; they act as a go-between for a texture ID and a variety of OpenGL features. E.g. in the absence of direct state access, operations on textures affect the texture bound to the active texture unit. Additionally, sampler objects are bound to texture units rather than textures.

Partly for historical reasons, partly for performance reasons, partly for a combination of those (i.e. when texture units were physical entities, requiring the client to manage the hardware directly would have been more efficient than requiring the implementation to dynamically map high-level semantics onto the available hardware).

This is computing. A reference to a thing is never the thing itself. The names are what they are. As such, you cannot improve upon them, because however “meaningless” the current names may be, any alternative name you invent will be even less meaningful to everyone except (and possibly even including) yourself. You’ll still have to deal with the standard names whenever you read the specification or someone else’s code, or if you ever write code for someone else.

It acts as a container for texture-related state (see tables 23.12 and 23.13 in the OpenGL 4.5 specification).

I see it as a GLSL compiler issue: since the Texture Image Units [choose your name of choice] are identical, the programmer does not care which one is employed; he/she merely cares that a reference made in OpenGL code can be identified in the GLSL code. A compiler’s job is to figure out things such as which registers to use, and not bother the programmer. GLSL is not assembly language.

The naming of types IS important and can be amended. C once had just int, unsigned char, etc., but now we can be more explicit with int32_t and uint8_t, so even a language from ~1978 can stay flexible and be improved.

I don’t know what the Wikipedia concept is; it seems to be talking about specific pieces of hardware. I can only explain what OpenGL says.

OpenGL does not define any such thing as a “texture mapping unit”.

The definition of every word is ultimately circular.

What makes you think you have 4 of them?

So what if it violates some arbitrary rule someone invented. Hardware has limitations, and OpenGL is a hardware abstraction. Therefore, it exposes those limitations to the user, so that the user can work around them.

OK, here’s how things stand.

You can either learn it as is, or you can whine and complain on a forum. But you are not changing it. OpenGL has been around for almost twenty-five years now, and for most of those years, the first texture unit has been accessed by the enumerator named “GL_TEXTURE0”. Some guy on a forum is not suddenly going to cause everyone to go, “Hey, he’s right, let’s all rewrite millions of lines of code and change this.”

If the particular names of things truly bother you that much, it’s just a number; you can call it whatever you want. You can even #define sampler2D as something else in GLSL.

But don’t expect the rest of the world to indulge you or change 20+ years of things on your say-so.

“Texture image units” are numbered locations in the OpenGL context to which textures can be bound. In any rendering command, whatever textures were bound to texture image units can be accessed by any shader stage in that rendering operation. Any textures not bound at the time of the rendering command cannot be accessed by the shader.

A texture image unit doesn’t “do” anything. It’s just an element in an array of bound textures. But those array elements are referenced in shaders. The shader says “fetch me this sample from the 2D texture currently in texture image unit 2”. And the 2D texture bound to texture image unit 2 will have a sample fetched from it.

The GLSL sampler type is simply a placeholder. [It’s an opaque type](https://www.opengl.org/wiki/Opaque_Type) that represents a resource that exists outside of the shader. It does not have a “value” in the traditional sense. However, it is a uniform and its “value” must be set, either in the shader (via the layout binding syntax) or in OpenGL code via glUniform1i.

The “value” of a sampler is the texture image unit that it represents. So in the above case, you would use glUniform1i and set the sampler uniform’s value to 2 (or use layout(binding = 2)).

[QUOTE]I see it as a GLSL compiler issue: since the Texture Image Units [choose your name of choice] are identical, the programmer does not care which one is employed; he/she merely cares that a reference made in OpenGL code can be identified in the GLSL code. A compiler’s job is to figure out things such as which registers to use, and not bother the programmer. GLSL is not assembly language.[/QUOTE]

It’s not that simple, because GLSL does not exist in a vacuum.

If GLSL automatically assigned texture image unit indices to samplers… how would the OpenGL code that issues the rendering command know which TIU had been assigned to a particular sampler?

Oh sure, you could query the value from GLSL. But that’s stupid, because it means that every time you change shaders, you must rebind all your textures. If you’re rendering with a shadow map, almost every object in the scene will use the same shadow map. So why not have the shader for every object in the scene get its shadow map from the same TIU index?

But if GLSL arbitrarily did TIU assignment, you wouldn’t be able to ensure that the shadow map was a particular index. The way it is now, you can ensure that a particular index is used. You can even establish TIU conventions: unit 0 is the diffuse texture, unit 1 is the normal map, unit 2 is whatever, unit 10 is the shadow map, and so forth.

With your way, you couldn’t do that, since every shader would have its own TIU assignments.

So no, the programmer very much does care which TIUs are assigned to which samplers.

So… how exactly would that improve anything? All that would change is that you would have two ways to say the same thing. And since we have a bunch of code, books, and tutorials written using the current name, and precisely nothing written the new way… what reason would anyone have to use your new terms?

Not to mention, your analogy is flawed. uint8_t does not replace unsigned char, and it is not intended to do so. Same goes for int32_t. These are additions, which fundamentally mean something different. What you’re asking for is just a new name for the same thing. Like having “single_t” mean “float”, so that it would match better with “double”.

[QUOTE=Alfonse Reinheart;1278861]But if GLSL arbitrarily did TIU assignment, you wouldn’t be able to ensure that the shadow map was a particular index. The way it is now, you can ensure that a particular index is used. You can even establish TIU conventions: unit 0 is the diffuse texture, unit 1 is the normal map, unit 2 is whatever, unit 10 is the shadow map, and so forth.

With your way, you couldn’t do that, since every shader would have its own TIU assignments.

So no, the programmer very much does care which TIUs are assigned to which samplers.[/QUOTE]

The abstraction could be done on the OpenGL side of things. GL_TEXTURE0 and sampler2D are just legacy crud. But they don’t have to go away, and old code would still be fine. In the same way, the register storage-class specifier still exists in C, but modern compilers make it largely obsolete.

In fact I put together my own LRU TIU assigner, just for fun, in my application code. :smiley:

[QUOTE=Clive;1278863]The abstraction could be done on the OpenGL side of things. GL_TEXTURE0 and sampler2D are just legacy crud. But they don’t have to go away, and old code would still be fine. In the same way, the register storage-class specifier still exists in C, but modern compilers make it largely obsolete.

In fact I put together my own LRU TIU assigner, just for fun, in my application code. :D[/QUOTE]

… So how exactly did that get rid of the “legacy crud” of either GL_TEXTURE0 or sampler2D? You still have to bind the texture to the context, and you still have to have a variable in GLSL that represents a particular texture.

If your mechanism involves a lot of glGetUniformLocation calls or similar such done at draw time, then you’re not helping anything.