I’m wondering why the following code generates an OpenGL error (unsupported texture unit). Querying GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS returns 192, yet the error occurs at glActiveTexture(GL_TEXTURE0+32). The OpenGL man page states for glActiveTexture:
“Specifies which texture unit to make active. The number of texture units is implementation dependent, but must be at least 80. texture must be one of GL_TEXTUREi, where i ranges from 0 - (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1). The initial value is GL_TEXTURE0.”
I’m using ARB_debug_output and gDEBugger; they both catch the error (invalid texture unit).
Never mind that I’m using glEnable/glDisable with GL_TEXTURE_*; I was just rereading that this is deprecated, so that’s a great reminder! Anyhow, glActiveTexture is indeed what’s causing the error.
From the 4.0 spec…
“ActiveTexture generates the error INVALID_ENUM if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0+i (i is in the range 0 to k - 1, where k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS).” pg. 173.
“All active shaders combined cannot use more than the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS texture image units. If more than one pipeline stage accesses the same texture image unit, each such access counts separately against the MAX_COMBINED_TEXTURE_IMAGE_UNITS limit.” pg. 89.
On page 371, MAX_COMBINED_TEXTURE_IMAGE_UNITS is guaranteed to be at least 80. My own query returned 192. Yet it fails at 32, well below either figure. I understand that we don’t need named enums beyond a point (GL_TEXTURE1045, etc.), so glActiveTexture might be using an old check…
I guess I should clarify what I’m trying to do. It’s basically just cleaning up after arbitrarily executing a block of OpenGL calls. I’m not actually trying to use as many texture units as I can, but it seems that the following code should execute without error according to the spec:
int maxTextureUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxTextureUnits);

for (int i = 0; i < maxTextureUnits; i++)
{
    glActiveTexture(GL_TEXTURE0 + i);
}
Now, I’m thinking that the normal way of using the GL is to just call glActiveTexture(…) and glBindTexture(…), and to set the sampler in the shader program to the texture unit you used; there’s no cleanup afterwards. If that’s the case, though, it looks like I should be able to use texture unit 79 if I wanted, set my sampler to 79, and it should just work. Unless there’s something here that I’m not considering…
But if you’re using the compatibility context, there might be merit to making sure that all the texture units are disabled and unbound, to ensure that the fixed-function code is not affected by use of the new pipeline.