Multitexturing in OpenGL 3.1

I’m having trouble implementing multitexturing in OpenGL 3.1. I’m sure this is at least partly because I’ve never worked with samplers before (I learned how to use textures from this tutorial, which explained mipmapping well but skipped over sampler objects).

My current setup is this: I have two samplers, l_sampler1 and l_sampler2, global to the main program. Not the best, but I’m just trying to figure out how they work before I package them up in classes. In the initialization code, I have:

glGenSamplers(1, &l_sampler1);
glGenSamplers(1, &l_sampler2);
glBindSampler(GLuint(0), l_sampler1); // Bind sampler 1 to texture unit 0?
glBindSampler(GLuint(1), l_sampler2); // Bind sampler 2 to texture unit 1?
glSamplerParameteri(l_sampler1, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glSamplerParameteri(l_sampler1, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(l_sampler2, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glSamplerParameteri(l_sampler2, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
...
// later on in the initialization:
MirrorShader.updateUniform("renderedTexture", GLuint(0)); // Upload texture unit 0 to the fragment shader?
MirrorShader.updateUniform("mirrorTexture", GLuint(1)); // Upload texture unit 1 to the fragment shader?

And in the render loop I have:

glActiveTexture(GL_TEXTURE0);
// Generate, bind, and glTexImage texture1
glActiveTexture(GL_TEXTURE1);
// Generate, bind, and glTexImage texture2
glActiveTexture(GL_TEXTURE0); // I dunno why, for luck maybe? :)
// bind texture1, and pray that it somehow makes the whole thing work
// render stuff

What happens is that only the last bound texture gets used, no matter which sampler I access in the fragment shader. I’m not sure if the same texture is being uploaded to both units or both samplers are linked to the first unit or what.

I think I’m getting the arguments right, though. Initially I thought it was glBindSampler(GL_TEXTURE0, samplerId) rather than glBindSampler(0, samplerId), but changing it didn’t solve the problem.

Where am I going wrong?

As a side note, that Shader.updateUniform function is an overload I made that accepts, among other things, either a float or an unsigned int as an argument. When I tried the above code with MirrorShader.updateUniform("renderedTexture", GL_TEXTURE0), the compiler (MSVC2010) complained about an ambiguous call to the overloaded function, claiming it couldn’t determine whether I wanted to pass a float or a uint. Which is GL_TEXTURE0? I thought it was a uint.

I’m having trouble implementing multitexturing in OpenGL 3.1.

That’s probably because you’re using sampler objects, which are an OpenGL 3.3 feature. I’m surprised your code is even executing on an implementation that doesn’t support 3.3.

Generate, bind, and glTexImage texture1

Why are you generating a texture every frame?

Which is GL_TEXTURE0? I thought it was a uint.

Usually, GL_TEXTURE0 is a macro, typically expanding to the integer literal 0x84C0. Integer literals start out as signed ints, but neither of your overloads takes a signed int, so the compiler must choose between two implicit conversions: to unsigned int or to float. C++ ranks those two conversions equally, so without an explicit cast on your part the call is ambiguous and the compiler errors out.

Hmm. According to the OGL extensions viewer, my particular graphics card (Intel HD 3000) supports some OGL 3.3 features, including samplers. Is there a way I could get them to work in a 3.1 context, as long as the graphics card supports it? Or if not, how would I go about multitexturing in OpenGL 3.1?

As for generating a texture every frame: I’m rendering to a framebuffer for reflections. I probably don’t need to generate the texture every frame, just bind it and fill it, but as long as it’s not breaking the code I’ll try to get multitexturing working before I shift things around, unless not generating it every frame is actually part of multitexturing in an OpenGL 3.1 context.

Hmm. According to the OGL extensions viewer, my particular graphics card (Intel HD 3000) supports some OGL 3.3 features, including samplers.

Then you’re accessing it via ARB_sampler_objects.

It’s also hard to know whether your program works when you hide the important details behind comments. Whatever’s back there is probably causing your problem. That’s why it’s important to stop generating the texture every frame: simplify the code as much as possible.

Hmm. According to the OGL extensions viewer, my particular graphics card (Intel HD 3000) supports some OGL 3.3 features, including samplers. Is there a way I could get them to work in a 3.1 context, as long as the graphics card supports it? Or if not, how would I go about multitexturing in OpenGL 3.1?
Why are you using samplers in the first place? Are you confusing sampler objects (glGenSamplers() etc) with the GLSL sampler types (sampler2D etc)? They aren’t the same thing.

A sampler object allows texture state (filters, wrap mode, etc) to be encapsulated in a separate object, so that you can modify the parameters without having to call glTexParameter on each texture individually. When a sampler object is bound to a texture unit, the sampler’s state (set via glSamplerParameter) overrides the texture’s state (set via glTexParameter) for any texture access performed via that texture unit.

GLSL sampler types are “handles” used to identify texture units. The texture unit doesn’t need to have a sampler object associated with it. Whether or not a sampler object is attached to the texture unit determines whether texture parameters are taken from the sampler object or the texture, but the code will work either way.

The only situation in which samplers are strictly required is if you want to bind the same texture to multiple texture units simultaneously, with different units using different texture parameters.
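As a rough sketch of that one required case (the sampler and texture names here are placeholders, and this assumes an implementation where sampler objects are available):

```cpp
// One texture bound to two units, each unit reading with different filtering.
// Without sampler objects this isn't possible, because the filter mode would
// live in the texture object itself.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex);   // same texture, second unit

glBindSampler(0, nearestSampler);    // unit 0 samples with GL_NEAREST
glBindSampler(1, linearSampler);     // unit 1 samples with GL_LINEAR
```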

The only sources I could find regarding multitexturing with a 3.x+ context were this OGL 3.3 tutorial and this SO thread. Both use samplers, which I’d never used before, so I jumped to the incorrect assumption that they were needed for multitexturing. If they’re not actually needed, all the better for me with my OGL 3.1 context. :slight_smile:

But that still leaves my original question - how does one accomplish multitexturing in an OpenGL 3.1 context, without using deprecated functions? I’m probably missing something, but this is my understanding of it at the moment:

  1. Call glActiveTexture, then bind a texture and upload it with glTexImage.
  2. Call glActiveTexture with a different texture unit (GL_TEXTURE1, for example), and bind a different texture and upload it with glTexImage.
  3. Call glUniform with the location of the sampler2D in the fragment shader and the GLuint texture unit it’s associated with.
  4. Render. I’m a bit confused here more than with the other bits. What texture should be bound when you render? Which texture unit should be active? Does it matter, if I’ve already passed both textures to OpenGL with glTexImage (assuming I’ve not passed any new textures since)?

Hopefully that better explains the exact question I have.

upload it with glTexImage

If you remove that code and go with a static texture, your code will probably make a lot more sense and be less buggy. Create the texture and its storage and data first. Then leave it that way.

Call glUniform with the location of the sampler2D in the fragment shader and the GLuint texture unit it’s associated with.

Why did you bind two textures if you only have a single sampler uniform? Are you doing multitexture (accessing multiple textures in a single shader) or do you simply have multiple separate texture objects?

What texture should be bound when you render?

The texture(s) you want to render with.

Which texture unit should be active?

If you’re talking about glActiveTexture, it’s irrelevant. That’s just a switch that tells OpenGL which texture unit subsequent glTex* and glBindTexture calls are talking about. It’s best to think of glActiveTexture as a global parameter that all of those functions take.

I will look into using a static texture. I have two sampler2D uniforms in the fragment shader; sorry, I should have used “a” rather than “the”. I’m multitexturing, that is, using data from two different textures in the same shader at the same time. So if I understand correctly, when I render I should have this:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureHandle0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureHandle1);

Then render. And I could have those glActiveTexture, glBindTexture couples in any order. Correct?

EDIT - At last, got it to work. It turned out I had to upload the texture units to the shader as ints, not unsigned ints. Thanks everyone for your help, especially that tip about not needing samplers. :slight_smile:

But that still leaves my original question - how does one accomplish multitexturing in an OpenGL 3.1 context, without using deprecated functions? I’m probably missing something, but this is my understanding of it at the moment:

  1. Call glActiveTexture, then bind a texture and upload it with glTexImage.
  2. Call glActiveTexture with a different texture unit (GL_TEXTURE1, for example), and bind a different texture and upload it with glTexImage.
  3. Call glUniform with the location of the sampler2D in the fragment shader and the GLuint texture unit it’s associated with.
  4. Render. I’m a bit confused here more than with the other bits. What texture should be bound when you render? Which texture unit should be active? Does it matter, if I’ve already passed both textures to OpenGL with glTexImage (assuming I’ve not passed any new textures since)?

Prior to the addition of glActiveTexture(), only one texture could be bound at any given time. Any texture state changes or queries applied to that texture. Rendering could only read from that texture. And so on.

When support for multiple texture units was added, rather than changing every texture-related function to accept an extra argument to indicate which texture it applied to, the notion of an “active” texture unit was introduced.

With a couple of exceptions, any OpenGL calls which relate to textures affect the active texture unit as set by the most recent glActiveTexture call. This includes glBindTexture() to bind a specific texture “name” to the active texture unit, as well as glTexImage, glTexParameter, etc. It also includes glEnable/glDisable/glIsEnabled calls for texture-related flags, glGet calls for texture-related state, matrix operations when the matrix mode is GL_TEXTURE, and so on. Any state which relates to textures is replicated for each texture unit, and any queries or changes apply to the active texture unit.

The exceptions are that glTexCoordPointer and gl{Enable,Disable}ClientState(GL_TEXTURE_COORD_ARRAY) affect the texture unit set by glClientActiveTexture rather than glActiveTexture, and glTexCoord always affects texture unit 0 (glMultiTexCoord may be used to provide coordinates for any texture unit, including unit 0).

It doesn’t matter which texture unit is active at the time that a drawing command is issued. When using shaders, what matters is which texture unit IDs are stored in the sampler uniforms used by the shader. With the fixed-function pipeline, any enabled texture units are combined using their glTexEnv() modes.
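Putting that together, a minimal draw-time sequence for two textures might look like this (assuming the textures and program were created at initialization; the handles and the uniform names from your shader are placeholders):

```cpp
glUseProgram(program);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureHandle0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureHandle1);

// Sampler uniforms hold texture-unit indices and are signed ints,
// so they are set with glUniform1i, not glUniform1ui.
glUniform1i(glGetUniformLocation(program, "renderedTexture"), 0);
glUniform1i(glGetUniformLocation(program, "mirrorTexture"), 1);

glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```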

Thanks for the comprehensive explanation. :slight_smile: I wonder, though, why texture unit IDs are of type GLint and not GLuint.

Because it wasn’t until OpenGL 3.0, five years after GLSL first appeared with ARB_shader_objects, that OpenGL got glUniform1ui. Until then glUniform1i was the only way to set an integer uniform, so sampler uniforms have always been signed ints.