You must bind a dummy texture for the texture environment to function; a combiner stage is simply skipped unless a complete texture is bound and enabled on that unit. Note that you can use the same dummy texture for multiple stages if you want to (see the sketch after the code below).
I usually make it a luminance8 texture because that wastes as little memory as possible (one byte per texel). And even though every even remotely relevant GL driver supports arbitrary power-of-two texture sizes, implementations are only required to support textures up to 64x64 (GL_MAX_TEXTURE_SIZE must be at least 64), so 64x64 is the one size that's guaranteed everywhere, and that's what I use.
//generate the dummy texture required by ARB_texture_env_combine semantics
//we'll use that for testing sometimes, so we'll do something spiffy
GLubyte woot[64*64];
for (unsigned x=0;x<64;++x) for (unsigned y=0;y<64;++y)
{
	GLubyte texel;
	//coarse checker pattern from bit 1 of each coordinate...
	if (x&2) texel=64; else texel=0;
	if (y&2) texel+=63;
	//...and a bright border so the texture is easy to spot while testing
	if ((x<3)||(x>60)||(y<3)||(y>60)) texel=255;
	woot[x+64*y]=texel;
}
glGenTextures(1,&bstate.dummy_texture);
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D,bstate.dummy_texture);
//luminance8 to match the note above; one byte per texel
glTexImage2D(GL_TEXTURE_2D,0,GL_LUMINANCE8,64,64,0,GL_LUMINANCE,GL_UNSIGNED_BYTE,woot);
//GL_LINEAR min filter, so the texture is complete without mipmaps
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
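As noted above, one dummy can serve every stage. A minimal sketch of binding it to each unit (the two-stage count here is just an assumption, adjust it to whatever your combiner setup uses):

//bind the same dummy texture on every unit the combiner setup touches;
//assuming a hypothetical two-stage setup here
for (unsigned stage=0;stage<2;++stage)
{
	glActiveTextureARB(GL_TEXTURE0_ARB+stage);
	glBindTexture(GL_TEXTURE_2D,bstate.dummy_texture);
	glEnable(GL_TEXTURE_2D); //the env stage is skipped unless texturing is enabled
}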
AFAICT, dummy textures don't degrade performance. The only real downside is the memory footprint, and 4 KB (64x64 at one byte per texel) shouldn't be much of an issue.
You don't need to send texture coords either: you never actually sample the texture, so why should this be bad?
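To make that concrete, here's a sketch of a combine setup that modulates primary color with the constant color and never sources GL_TEXTURE, so the bound dummy is never sampled (the tint value is just a made-up example):

//combine setup that never sources GL_TEXTURE: fragment = primary * constant
const GLfloat tint[4]={1.0f,0.5f,0.5f,1.0f}; //made-up example color
glTexEnvfv(GL_TEXTURE_ENV,GL_TEXTURE_ENV_COLOR,tint);
glTexEnvi(GL_TEXTURE_ENV,GL_TEXTURE_ENV_MODE,GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV,GL_COMBINE_RGB_ARB,GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV,GL_SOURCE0_RGB_ARB,GL_PRIMARY_COLOR_ARB);
glTexEnvi(GL_TEXTURE_ENV,GL_OPERAND0_RGB_ARB,GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV,GL_SOURCE1_RGB_ARB,GL_CONSTANT_ARB);
glTexEnvi(GL_TEXTURE_ENV,GL_OPERAND1_RGB_ARB,GL_SRC_COLOR);
//the alpha combiner defaults to sourcing GL_TEXTURE, so redirect it too
glTexEnvi(GL_TEXTURE_ENV,GL_COMBINE_ALPHA_ARB,GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV,GL_SOURCE0_ALPHA_ARB,GL_PRIMARY_COLOR_ARB);
glTexEnvi(GL_TEXTURE_ENV,GL_OPERAND0_ALPHA_ARB,GL_SRC_ALPHA);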