It is neither. glActiveTexture selects the current texture image unit, which all subsequent texture-manipulation commands operate on. Setting the active texture image unit back to zero is not necessary for the VAO binding and rendering commands that follow. But that doesn’t mean it is a mistake.
Presumably, the point is to return the active texture unit to a known, well-understood state. After all, the active texture unit is context state, so it is possible that other code may expect it to be on texture unit 0. And such code would not do the right thing if it were not reset to zero.
Thus, after issuing a Draw command, the caller knows that, while new textures may have been bound to any number of texture image units, the active texture unit will always be reset to 0.
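As a rough sketch of that convention (the GL calls here are stubbed out so the snippet stands alone; real code would use the actual entry points and `GL_TEXTURE0 + i`):

```cpp
#include <vector>

// Stand-in for the one piece of context state under discussion; a real
// program would call the actual GL entry point instead.
unsigned g_activeUnit = 0;
void glActiveTexture(unsigned unit) { g_activeUnit = unit; }

// Hypothetical Draw(): bind one texture per unit, then restore the
// active unit to 0 so later code finds the context in a known state.
void drawMesh(const std::vector<unsigned>& textures)
{
    for (unsigned i = 0; i < textures.size(); ++i)
    {
        glActiveTexture(i);  // really GL_TEXTURE0 + i
        // glBindTexture(GL_TEXTURE_2D, textures[i]); ...
    }
    // ... issue the draw call ...
    glActiveTexture(0);      // really GL_TEXTURE0: back to a known state
}
```

Whatever textures were bound inside the loop, the function always exits with unit 0 active.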
To be honest, I’d be far more concerned with the generally horrible quality of code used by the tutorial. Creating a std::stringstream for every texture in every rendering call? Has this person not heard of std::to_string? Needlessly copying the texture’s name every time you bind a texture? And why not pre-process all of this stuff? After all, you’d be generating the exact same strings (and uniform locations) every single time through the rendering call.
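For illustration, the per-texture name strings could be built once at load time with std::to_string and reused every frame; the "texture_diffuse"/"texture_specular" names below are assumptions, standing in for whatever convention the shader actually uses:

```cpp
#include <string>
#include <vector>

// Build every sampler-uniform name string once, at mesh load time,
// instead of constructing a std::stringstream per texture per frame.
std::vector<std::string> buildSamplerNames(const std::vector<std::string>& types)
{
    std::vector<std::string> names;
    unsigned diffuseN = 0, specularN = 0;
    for (const auto& t : types)
    {
        unsigned n = (t == "texture_diffuse") ? ++diffuseN : ++specularN;
        names.push_back(t + std::to_string(n));  // no stringstream needed
    }
    return names;
}
```

The same pre-processing applies to the uniform locations themselves: query each one with glGetUniformLocation once and store it alongside the name, rather than re-querying in the rendering loop.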
Also, "material." + name + number strongly implies that the final name for the sampler uniform will be “material.some_name1”. Which is illegal; samplers are opaque types, and opaque types cannot be members of uniform blocks. And even if they could, members of uniform blocks don’t have locations.
And the name of an identifier (like a non-block uniform value) cannot include the . character.
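Assuming the position taken here, that the struct-member form is at best non-portable, the workaround is flat sampler uniforms whose names merely imitate the dotted form (the names below are hypothetical):

```glsl
// Instead of declaring a sampler inside a struct and looking up
// "material.texture_diffuse1", declare flat uniforms. A declared
// identifier cannot itself contain the '.' character, so the dot
// is replaced with an ordinary separator.
uniform sampler2D material_diffuse1;
uniform sampler2D material_specular1;
```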
Apparently, nVidia’s implementation allows samplers within structures. Which suggests that the author goes by “works on my system” rather than reading the standard.
What extension specification allows that? Because I checked both NV_bindless_texture and ARB_bindless_texture, and neither one permits it. They permit 64-bit integers in structs, but you can’t declare them as actual samplers in structs.
Not surprising, considering the horrible quality of the source code. Though I would have hoped that someone who cared enough to write a tutorial would also have more than a nodding acquaintance with the spec.
Who said anything about a specification? nVidia’s implementations have always been somewhat fault-tolerant (i.e. accepting things which should be an error).
No, there’s no “if” in that statement. I just took a look at it and encountered this picture. It’s wrong: the Geometry Shader comes after tessellation, not before. That’s not a minor slip; it’s the kind of mistake that shows little familiarity with the specification.
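For reference, the programmable stages of the OpenGL pipeline run in this order:

```
Vertex Shader
  → Tessellation Control Shader
  → Tessellation Evaluation Shader
  → Geometry Shader
  → (rasterization)
  → Fragment Shader
```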
Not to mention, the website itself is one of those garbage websites so heavily dependent on JavaScript that you can’t even middle-click a link to open it in a new tab.
Well, there’s mine, accessible through my signature. But there are also others, as can be found through the OpenGL Wiki.