Multitexture without textures

Okay, it sounds silly to use multitexturing without textures, but when using GL_COMBINE with sources like GL_PRIMARY_COLOR and GL_PREVIOUS I have no need for a texture. The problem is that I don’t know how to keep the texture unit active without binding a texture to it… which seems like a waste.
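For example, a combine stage like this never references GL_TEXTURE at all, so whatever texture is bound would never actually be sampled (just a sketch using ARB_texture_env_combine enums, with the default operands assumed):

// a combine stage built only from the interpolated color and the previous stage;
// GL_TEXTURE is never used as a source, so no texture data is needed
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_ADD);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PRIMARY_COLOR_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);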

Is there any way to use a texture unit without binding a texture to it, or perhaps keeping it as efficient as possible?

Why not use 2-pass rendering with blending, etc.?

2-pass rendering defeats the whole purpose of multitexturing.

If you’d like some context for my question: I plan to use this as part of my generic 4-unit bump mapping routine (probably beyond the beginner forum level). A sketch of the corresponding texture environment setup follows the list.

Unit 0
  • Texture: normal map
  • Mode: DOT3_RGB(TEXTURE, TEXTURE1)

Unit 1
  • Texture: normalization cube map with light vector coordinates
  • Mode: MODULATE(CONSTANT(light diffuse color), PREVIOUS)

Unit 2
  • Texture: ???
  • Mode: ADD(PRIMARY_COLOR, PREVIOUS)

Unit 3
  • Texture: surface diffuse color map
  • Mode: MODULATE
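
Roughly, the glTexEnv setup I have in mind looks like this (only a sketch with ARB_texture_env_combine enums; normal_map, normalization_cube_map, diffuse_map and light_diffuse are placeholders, and the TEXTURE1 source on unit 0 is the part that needs crossbar/combine4):

// unit 0: dot the normal map against the light vector sampled on unit 1
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, normal_map);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGB_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE1_ARB); // crossbar/combine4 source

// unit 1: modulate the dot product by the light's diffuse color (constant)
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, normalization_cube_map);
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, light_diffuse);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_CONSTANT_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);

// unit 2: add the interpolated primary color -- the unit with no texture of its own
glActiveTextureARB(GL_TEXTURE2_ARB);
glEnable(GL_TEXTURE_2D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_ADD);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PRIMARY_COLOR_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);

// unit 3: modulate by the surface diffuse color map
glActiveTextureARB(GL_TEXTURE3_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, diffuse_map);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);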

You must bind a dummy texture for the texture environment to function. Note that you can use the same dummy texture for multiple stages, if you want to.

I usually use a luminance8 texture because I think that wastes as little memory as possible. Even though all even remotely relevant GL drivers support arbitrary power of two texture sizes, implementations aren’t required to support textures smaller than 64x64, so I use that.

//generate the dummy texture required by ARB_texture_env_combine semantics
//we’ll use it for testing sometimes, so we’ll do something spiffy
GLubyte woot[64*64];
for (GLuint x=0; x<64; ++x) for (GLuint y=0; y<64; ++y)
{
    GLubyte texel;
    if (x&2) texel=64; else texel=0;   //2x2 blocks in x ...
    if (y&2) texel+=63;                //... and in y, giving a checker pattern

    if ((x<3)||(x>60)||(y<3)||(y>60)) texel=255;   //bright border

    woot[x+64*y]=texel;
}

glGenTextures(1,&bstate.dummy_texture);

glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D,bstate.dummy_texture);

glTexImage2D(GL_TEXTURE_2D,0,GL_INTENSITY,64,64,0,GL_RED,GL_UNSIGNED_BYTE,woot);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
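
Then on whatever stage has no real texture of its own, you just bind the same dummy again (a sketch; unit 2 here is only an example):

// reuse the dummy on any unit whose combine setup never reads GL_TEXTURE
glActiveTextureARB(GL_TEXTURE2_ARB);
glEnable(GL_TEXTURE_2D);   //the unit has to be enabled for its environment to apply
glBindTexture(GL_TEXTURE_2D, bstate.dummy_texture);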

AFAICT, dummy textures don’t degrade performance. The only real downside is the memory footprint, and 4 KB for a 64x64 single-channel texture shouldn’t be much of an issue.

You don’t need to send texture coords, and since you never actually sample the texture, why should this be bad?

Semi-OT

Originally posted by hh10k:
Unit 0
  • Texture: normal map
  • Mode: DOT3_RGB(TEXTURE, TEXTURE1)

Unit 1
  • Texture: normalization cube map with light vector coordinates
  • Mode: MODULATE(CONSTANT(light diffuse color), PREVIOUS)

Unit 2
  • Texture: ???
  • Mode: ADD(PRIMARY_COLOR, PREVIOUS)

Unit 3
  • Texture: surface diffuse color map
  • Mode: MODULATE

On first generation Radeon cards, you can fold that to three environments (R100 supports MAD).

I’ve already got that and register combiners ready so that NV and Radeon cards are happy, but I need a generic multitexture method for all those ‘others’ out there… even though this needs GL_ARB_texture_env_crossbar or GL_NV_texture_env_combine4 for GL_TEXTURE1. I’ll see how it performs against a multi-pass method later.

Originally posted by zeckensack:
Even though all even remotely relevant GL drivers support arbitrary power of two texture sizes, implementations aren’t required to support textures smaller than 64x64, so I use that.

The spec doesn’t say anything about support for textures smaller than 64 being optional. 64 is just the minimum value that glGetIntegerv is required to return for GL_MAX_TEXTURE_SIZE, so maybe you’ve misinterpreted the purpose of that value. Any power-of-two size up to that limit must be supported.
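
For reference, the query in question is just this (trivial sketch):

GLint max_size;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);   //required to be at least 64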