All textures enabled

Would there be any problems with leaving all texturing targets enabled? Example:

glEnable(GL_TEXTURE_1D);
glEnable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_3D);
glEnable(GL_TEXTURE_CUBE_MAP);

// And render your textured objects

glBindTexture(GL_TEXTURE_2D, tex1);
drawobject1();

glBindTexture(GL_TEXTURE_3D, tex2);
drawobject2();

etc…

I still don't really see the point of enabling individual texture targets.

Having something like
glEnable(GL_TEXTURING);
glDisable(GL_TEXTURING);

seems sufficient, doesn't it?

[This message has been edited by V-man (edited 01-05-2003).]

That might work, but you would still have to unbind the texture object for that target.
It's just easier to enable/disable the targets than to leave them all enabled.
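
In practice, "unbinding" just means rebinding the default texture object (object zero) for that target; a minimal sketch, assuming nothing has ever been loaded into the default object:

glBindTexture(GL_TEXTURE_3D, 0);   // texture object zero; it has no image, so the 3D target contributes nothing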

I assume there’s a reason it is the way it is, but I don’t know what it could be.

As it is now, you can't just enable all texture targets and bind whatever texture you want. For example, 3D textures have higher priority than 2D textures, so if both targets are enabled, the 3D one is used. If you enable both but only bind a 2D texture, you don't have a valid 3D texture bound, and the result is either undefined or a disabled texture unit (I don't know which it really is, but I think it's a disabled texture unit).

i don't think there's a problem, but if cube maps are enabled the rest (tex 1d, 2d, 3d) are ignored, etc.

I assumed they were mutually exclusive.

Yes, it looks like there is a priority rule.
I think it’s cubemap over 3D over 2D over 1D. (it was somewhere in the spec)

I find this behavior weird.

I could code it so that 2D textures are always enabled, and if 3D or cube map is needed, I just enable it, draw, then disable…
Right now I enable, draw, and disable per texture unit, for every object I draw.
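
A minimal sketch of that enable/bind/draw/disable pattern, reusing tex1/tex2 and the drawobject names from the first post:

// Enable only the target an object actually uses, then disable it again,
// so the target priority rule never comes into play.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex1);
drawobject1();
glDisable(GL_TEXTURE_2D);

glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, tex2);
drawobject2();
glDisable(GL_TEXTURE_3D);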

What did everyone else do in their 3D engine?

/edit/ And if 3D textures were properly supported everywhere, I would promote 1D and 2D textures to 3D textures.
4D textures should become available on NVIDIA and ATI; I'm not sure how that works.

[This message has been edited by V-man (edited 01-06-2003).]

there is a priority rule, it's defined in the spec, i've read it at some point…

i know dx does not care what sort of texture is bound… at least, i only call device->SetTexture(stage, textureobjectptr); if i remember correctly… and dx does not have real 1d textures anymore (they are just wx1, or 1xw?, 2d textures…)

i don't remember the reason for these individual enables either… and don't want to go read it… but i think, if it was important, it was then, and isn't very important anymore… seeing dx does not have it, and no one really bothers about it in gl either… with pixel shading/fragment programming coming up, the actual textures don't really matter at all anymore. just bind and sample from wherever you want. do you need to enable stages then? i don't think you do… don't actually remember…

blah

anyone remembers the reason?

>>> with pixel shading/fragment programming coming up, the actual textures don't really matter at all anymore. <<<

If you want to do everything with fragment programs, that is!
In my case, I can't just switch to vertex and fragment programs, since they're not widespread.
Not as long as there's Intel integrated, SiS integrated, and other crap that keeps shipping early-1990s technology with AGP slapped on it.

I will post the reason behind the texture thing when I find it.

Just a little word about 4D texturing: it's not here now and won't be for a while.
Why?
1 - Too much memory cost; we already have a lot of problems with 3D texturing.
2 - There is a problem with the definition of the fourth texture coordinate, which is currently used as the homogeneous coordinate (q).

3 - The applications of 4D textures are quite few.

I think the question has already been answered pretty well, but I just wanted to underscore: do not do this. It may work on a driver here or there, but it is incorrect under the spec, and it will not work on compliant implementations.

-Evan

I’m actually not sure how well the question has been answered.

(1) For each texture unit, there is a priority of “texture target” enables: CUBE, 3D, 2D, 1D. The highest priority target that is enabled wins.

(2) The texture object that is bound to that target is used. Note that there is ALWAYS a texture object bound to each target – if you haven't bound a named texture object, the default "texture object zero" for that target is used.

(3) If the bound texture object for the selected target is empty (e.g., it has never had an image loaded, or has been given a 0x0 image), texturing is effectively disabled for that texture unit. Similarly, if you don't have a full set of mipmaps while a mipmapping filter is selected, texturing is effectively disabled. OpenGL never "falls back" to the next-highest-priority target. If any driver does this, it is broken!
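
A minimal sketch of that failure mode, where tex2d stands for a hypothetical, complete 2D texture and nothing has ever been loaded into the default 3D texture object:

glEnable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_3D);               // 3D outranks 2D in the priority order
glBindTexture(GL_TEXTURE_2D, tex2d);   // effectively ignored: the 3D target wins
drawobject1();                         // default 3D object is incomplete, so this draws
                                       // untextured; GL does NOT fall back to the 2D texture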

Evan’s answer is right on – don’t do this.

Fragment programs have effectively eliminated the need for texture enables – you reference them explicitly in the program. This behavior is better for drivers, since a driver might want to compile a fragment program differently depending on the type of texture unit. And it wouldn’t want to check for the need to recompile (and recompile if needed) each time you change your texture enables! Of course, if you don’t always use fragment programs, that doesn’t help much…
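
For illustration only, a minimal sketch of loading an ARB_fragment_program whose program text names the texture target itself (fpSource and prog are made-up names; error checking omitted):

// The 2D target is named right in the TEX instruction, so the
// fixed-function glEnable(GL_TEXTURE_*) state is irrelevant here.
static const char fpSource[] =
    "!!ARBfp1.0\n"
    "TEMP col;\n"
    "TEX col, fragment.texcoord[0], texture[0], 2D;\n"
    "MOV result.color, col;\n"
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)(sizeof(fpSource) - 1), fpSource);
glEnable(GL_FRAGMENT_PROGRAM_ARB);     // enables the program, not a texture target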

Pat

Ok, I won’t be doing that little hack.

Instead, I have another question that is sort of related. About texture combiners.

If I set up state for the combiners, will that state be preserved from then on?

Example:

//Draw object 1
glActiveTexture(GL_TEXTURE0);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);

… stuff …

//Draw Object 2
glActiveTexture(GL_TEXTURE0);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

… stuff …

//Draw Object 3 WHERE THE COMBINER IS THE SAME AS FOR Object 1
glActiveTexture(GL_TEXTURE0);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
// No need to set up the sources/operands again, since that state is preserved
…stuff…

The above works I think.

Of course it does, since OpenGL is a state machine.
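
If you want to double-check on a given driver, you can read the combiner state back; a small sketch, assuming the Object 1 setup above and a current GL context:

// Both queries should still return the values set before Object 1 was drawn
// (GL_REPLACE and GL_TEXTURE), even after other objects changed the env mode.
GLint combineRGB, source0RGB;
glActiveTexture(GL_TEXTURE0);
glGetTexEnviv(GL_TEXTURE_ENV, GL_COMBINE_RGB, &combineRGB);
glGetTexEnviv(GL_TEXTURE_ENV, GL_SOURCE0_RGB, &source0RGB);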

I'm glad to finally see a thread all about showing how evil texture targets are.

pbrown:
Of course, if you don’t always use fragment programs, that doesn’t help much…

Actually, it does help only if you use fragment programs exclusively.
In a real-world application (which usually must target multiple platforms) you have to deal with this:

  1. new ARB FP interface + new texture target scheme (ARB FP, NV FP)

  2. new ARB FP interface + old texture target scheme (ATI_text_fragment_shader)

  3. old state based interface + new texture target scheme (NV RC with NV_texture_shader)

  4. old state based interface + old texture target scheme (NV RC without NV_texture_shader, ARB_tex_env_combine)

Yes, this is ridiculous. New extensions, instead of removing the old problem, have actually amplified the pain.

Another case that begs for OpenGL 2.

It's not ridiculous if you stick to ARB extensions, which are the recommended ones.

If you work with ATI and NV (and other vendor) extensions, you'll get features a bit sooner, but it's up to you to deal with them. As far as I know, nobody ever said vendor-specific extensions were a panacea.

vincoof, your recommendation to stick to ARB extensions is not very helpful, as I wouldn't like to degrade nv10, nv20, R100 and R200 fragment processing capabilities to TNT level.

It seems you have misinterpreted my intention. I really don't mind vendor-specific extensions and HW-specific code paths at all. All I want is a bit more sanity when designing them: not introducing differences where they can be avoided.

To clarify things, I’ll summarize my previous post:

  • old texture target scheme is bad today (in GL 1.0 times it might have seemed ok)
  • new texture target scheme is slightly better (although GL2 style texture usage is the right way IMO)
  • the problem is that when you have to support both the new and the old scheme in your code, you not only get zero benefit from the progress, it also makes things more complicated than before.

This is what IMO would be the right way:

  1. new ARB FP interface + new texture target scheme (ARB FP, NV FP)

  2. new ARB FP interface + new texture target scheme (remake of ATI_text_fragment_shader)

  3. new ARB FP interface + new texture target scheme (textual version of NV RC & TS)

For ARB texturing, there are the ARB_texture_env_combine, ARB_texture_env_crossbar, ARB_texture_env_dot3, etc. extensions, which let you go beyond TNT-level fragment texture application.
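
For example, a minimal sketch of a per-pixel dot3 setup with those combiner extensions (an assumption for illustration: a normal map is bound on unit 0 and the light vector is packed into the vertex color, both in the usual 0..1 encoding):

glActiveTexture(GL_TEXTURE0);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGB_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);            // normal map on this unit
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PRIMARY_COLOR_ARB);  // light vector in the vertex color
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);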

For the high end (and text based shaders) there’s ARB_fragment_program.

In the list you give, I would only worry about cases “1. ARB_fragment_program” and “4. ARB extensions dealing with old texture target model.”

In the list you give, I would only worry about cases “1. ARB_fragment_program” and “4. ARB extensions dealing with old texture target model.”

The problem with that is that, thanks to nVidia's refusal to implement ARB_texture_env_crossbar (or the ARB's unwillingness to make an extension nVidia could implement), no GeForce card supports the crossbar. And thanks to that, you have cut off a large portion of the population of lower-end cards.

Not only that, only the Radeon 9500/9700 supports ARB_fragment_program at the moment. The ARB extension path doesn't allow for any dependent texture accesses; it is much too limited in this respect. As such, GeForce3/4 and Radeon 8500/9000 hardware is not being used to the level that it could be. Indeed, these cards look no better than an equivalent GeForce2 or Radeon 7500.

The problem with that is that, thanks to nVidia's refusal to implement ARB_texture_env_crossbar (or the ARB's unwillingness to make an extension nVidia could implement), no GeForce card supports the crossbar.

NVIDIA cards do not support ARB_texture_env_crossbar because NVIDIA had already implemented its own crossbar in NV_texture_env_combine4 (supported by almost all NVIDIA cards). So yes, GeForce cards support a crossbar, but because the spec is a little bit different they do not expose the ARB version of it.
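
To make the terminology concrete, here is a minimal sketch of what a crossbar lets you do, written with the ARB_texture_env_crossbar enums (on NVIDIA hardware the same effect goes through NV_texture_env_combine4, with slightly different rules):

// On unit 1, modulate this unit's texture by the texture bound on unit 0.
// Referencing another unit's texture as a combiner source is the "crossbar".
glActiveTexture(GL_TEXTURE1);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);    // this unit's texture
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE0);   // crossbar: unit 0's texture
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);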