All textures enabled



V-man
01-05-2003, 12:05 PM
Would there be any problem with leaving all texturing targets enabled? Example:

glEnable(GL_TEXTURE_1D);
glEnable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_3D);
glEnable(GL_TEXTURE_CUBE_MAP);

//And render your textured objects

glBindTexture(GL_TEXTURE_2D, tex1);
drawobject1();

glBindTexture(GL_TEXTURE_3D, tex2);
drawobject2();

etc...

I still don't see the point of enabling individual texture targets, really.

Having something like
glEnable(GL_TEXTURING);
glDisable(GL_TEXTURING);

seems sufficient, doesn't it?


NitroGL
01-05-2003, 12:21 PM
That might work, but you would still have to unbind the texture object for each target.
It's just easier to enable/disable the targets than to leave them all enabled.

Bob
01-05-2003, 12:23 PM
I assume there's a reason it is the way it is, but I don't know what it could be.

As it is now, you can't just enable all texture targets and bind whatever texture you want. For example, 3D textures have higher priority than 2D textures, so if both are enabled, the 3D one is used. If you enable both but only bind a 2D texture, you don't have a valid 3D texture bound, and the result is either undefined or a disabled texture unit (I don't know which it really is, but I think it's a disabled texture unit).
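
In other words, the safe pattern is to enable only the target you're actually using (a minimal sketch, reusing tex1/tex2 and the draw calls from the first post):

glEnable(GL_TEXTURE_2D);             // only the 2D target is enabled
glBindTexture(GL_TEXTURE_2D, tex1);
drawobject1();
glDisable(GL_TEXTURE_2D);

glEnable(GL_TEXTURE_3D);             // now only the 3D target is enabled
glBindTexture(GL_TEXTURE_3D, tex2);
drawobject2();
glDisable(GL_TEXTURE_3D);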

zed
01-05-2003, 06:58 PM
i don't think there's a problem, but if cube maps are enabled the rest (1D, 2D, 3D textures) are ignored, etc.

knackered
01-06-2003, 03:16 AM
I assumed they were mutually exclusive.

V-man
01-06-2003, 08:54 AM
Yes, it looks like there is a priority rule.
I think it's cube map over 3D over 2D over 1D (it's somewhere in the spec).

I find this behavior weird.

I could code it so that 2D textures are always enabled, and when 3D or cube map is needed, I just enable it, draw, and disable it again.
Right now I enable, draw, and disable per texture unit, for every object I draw.
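
Something like this, I mean (just a sketch of the idea; volumeTex and drawVolumeObject are made-up names):

glEnable(GL_TEXTURE_2D);                  // left enabled for the whole frame

// ... draw the ordinary 2D-textured objects ...

glEnable(GL_TEXTURE_3D);                  // 3D outranks 2D while enabled
glBindTexture(GL_TEXTURE_3D, volumeTex);  // hypothetical 3D texture
drawVolumeObject();                       // hypothetical draw call
glDisable(GL_TEXTURE_3D);                 // back to plain 2D texturing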

What did everyone else do in their 3D engine?

/*edit*/ And if 3D textures were properly supported everywhere, I would promote 1D and 2D textures to 3D textures.
4D textures should become available on NVIDIA and ATI. Not sure how that works.




davepermen
01-06-2003, 09:09 AM
There is a priority rule; it's defined in the spec, I've read it at some point.

I know DX doesn't care about what sort of texture is bound; at least, I only call device->SetTexture(stage, textureobjectptr), if I remember correctly. And DX doesn't have real 1D textures anymore (they are just wx1, or 1xw, 2D textures).

I don't remember the reason for these individual enables either, and I don't want to go read up on it. I think if it was important once, it isn't very important anymore, seeing that DX doesn't have it and nobody really bothers about it in GL either. With pixel shading/fragment programming coming up, the actual texture targets don't really matter at all anymore: just bind and sample from wherever you want. Do you even need to enable the stages then? I don't think you do... I don't actually remember.

blah :D

Does anyone remember the reason?

V-man
01-06-2003, 11:24 AM
>>>with pixel shading/fragment programming coming up, the actual texture targets don't really matter at all anymore.<<<

Only if you want to do everything with fragment programs, that is!
In my case, I can't just switch to vertex and fragment programs, since they're not widespread enough -- not as long as Intel integrated, SiS integrated, and other junk keeps shipping early-1990s technology with AGP slapped onto it.

I will post the reason behind the texture-target thing when I find it.

vincoof
01-06-2003, 11:54 AM
Just a little word about 4D texturing: it's not here now and won't be here for a while.
Why?
1- too much memory cost; we already have a lot of problems with 3D texturing.
2- there's a problem with the definition of the fourth texture coordinate, which is currently used as the homogeneous coordinate.

Humus
01-06-2003, 01:24 PM
3 - The applications of 4D textures are quite few.

ehart
01-06-2003, 03:14 PM
I think the question has already been answered pretty well, but I just wanted to underscore it: do not do this. It may work on a driver here or there, but it is incorrect behavior under the spec, and it will not work on compliant implementations.

-Evan

pbrown
01-06-2003, 09:06 PM
I'm actually not sure how well the question has been answered.

(1) For each texture unit, there is a priority among the texture target enables: CUBE, 3D, 2D, 1D. The highest-priority target that is enabled wins.

(2) The texture object bound to that target is used. Note that there is *ALWAYS* a texture object bound to each target -- if you haven't bound a numbered texture object, the default "texture object zero" for that target is used.

(3) If the texture object bound to the selected target is empty (e.g., has never had an image loaded, or has been filled with a 0x0 image), texturing is effectively disabled for that texture unit. Similarly, if you don't have a full set of mipmaps (and the minification filter requires them), texturing is effectively disabled. OpenGL never "falls back" to the next highest priority target. If any driver does this, it is broken!
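
For example, a 2D texture is complete even without mipmaps as long as the minification filter doesn't need them (a minimal sketch; tex and pixels are placeholders):

glBindTexture(GL_TEXTURE_2D, tex);                                // tex, pixels: placeholder names
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps required
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 256, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);                   // base level only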

Evan's answer is right on -- don't do this.

Fragment programs have effectively eliminated the need for texture enables -- you reference the targets explicitly in the program. This behavior is better for drivers, since a driver might want to compile a fragment program differently depending on the type of texture used. And it wouldn't want to check for the need to recompile (and recompile if needed) each time you change your texture enables! Of course, if you don't always use fragment programs, that doesn't help much...
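
For instance, with ARB_fragment_program the target is named right in the program text, so no target enables are involved (a minimal sketch):

static const char fp[] =
    "!!ARBfp1.0\n"
    "TEX result.color, fragment.texcoord[0], texture[0], 2D;\n"  // 2D target named explicitly
    "END\n";
GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(fp), fp);
glEnable(GL_FRAGMENT_PROGRAM_ARB);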

Pat

V-man
01-07-2003, 07:53 AM
Ok, I won't be doing that little hack.

Instead, I have another question that is sort of related, about texture combiners.

If I set up state for the combiners, will that state be preserved indefinitely?

Example:

//Draw object 1
glActiveTexture(GL_TEXTURE0);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);

//... stuff ...

//Draw object 2
glActiveTexture(GL_TEXTURE0);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

//... stuff ...

//Draw object 3, where the combiner setup is the same as for object 1
glActiveTexture(GL_TEXTURE0);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
//No need to set up anything else, since the combine state is preserved
//... stuff ...


The above works, I think.

vincoof
01-07-2003, 07:58 AM
Of course, since OpenGL is a state machine.
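
You can even verify it if you want (a tiny sketch):

GLint src0;
glGetTexEnviv(GL_TEXTURE_ENV, GL_SOURCE0_RGB, &src0);
// src0 is still GL_TEXTURE from the object 1 setup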

MZ
01-07-2003, 10:55 AM
I'm glad to finally see a thread all about showing how evil texture targets are.

pbrown:
Of course, if you don't always use fragment programs, that doesn't help much...
Actually, it helps only if you use fragment programs exclusively.
In a real-world application (which usually must target multiple platforms) you have to deal with all of this:

1. new ARB FP interface + new texture target scheme (ARB FP, NV FP)

2. new ARB FP interface + old texture target scheme (ATI_text_fragment_shader)

3. old state-based interface + new texture target scheme (NV RC with NV_texture_shader)

4. old state-based interface + old texture target scheme (NV RC without NV_texture_shader, ARB_texture_env_combine)


Yes, this is ridiculous. New extensions, instead of removing the old problem, have actually amplified the pain.

Another case that begs for OpenGL 2.

vincoof
01-08-2003, 12:49 AM
It's not ridiculous if you stick to ARB extensions, which are the *recommended* ones.

If you work with ATI and NV (and other vendor) extensions, you'll get features a bit sooner, but then it's up to you to deal with them. As far as I know, nobody ever said that vendor-specific extensions were a panacea.

MZ
01-08-2003, 04:47 AM
vincoof, your recommendation to stick to ARB extensions is not really helpful, as I wouldn't like to degrade NV10, NV20, R100 and R200 fragment-processing capabilities to TNT level.

It seems you have misinterpreted my intention. I really don't mind vendor-specific extensions and HW-specific code paths at all. All I want is a bit more sanity when designing them - keeping from introducing differences where they could be avoided.

To clarify things, I'll summarize my previous post:
- the old texture target scheme is bad today (in GL 1.0 times it might have seemed OK)
- the new texture target scheme is slightly better (although GL2-style texture usage is the right way, IMO)
- the problem is that when you have to include both the new and old schemes in your code, you not only get zero benefit from the progress, it actually makes things _more_ complicated than before.


This is what IMO would be the right way:

1. new ARB FP interface + new texture target scheme (ARB FP, NV FP)

2. new ARB FP interface + new texture target scheme (remake of ATI_text_fragment_shader)

3. new ARB FP interface + new texture target scheme (textual version of NV RC & TS)

jwatte
01-08-2003, 09:19 AM
For ARB texturing, there are the ARB_texture_env_combine, ARB_texture_env_crossbar, ARB_texture_env_dot3, etc. extensions, which mean you can use post-TNT fragment texture application.

For the high end (and text based shaders) there's ARB_fragment_program.

In the list you give, I would only worry about cases "1. ARB_fragment_program" and "4. ARB extensions dealing with old texture target model."

Korval
01-08-2003, 09:44 AM
In the list you give, I would only worry about cases "1. ARB_fragment_program" and "4. ARB extensions dealing with old texture target model."

The problem with that is that, thanks to nVidia's refusal to implement ARB_texture_env_crossbar (or the ARB's unwillingness to make an extension nVidia could implement), no GeForce card supports crossbar. And, because of that, you have cut off a large portion of the lower-end cards out there.

Not only that, but only the Radeon 9500/9700 supports ARB_fragment_program at the moment. The ARB extension path doesn't allow for any dependent texture accesses; it is much too limited in this respect. As such, GeForce3/4 and Radeon 8500/9000 hardware is not being used to the level that it could be. Indeed, these cards look no better than an equivalent GeForce2 or Radeon 7500.

vincoof
01-08-2003, 10:08 AM
The problem with that is that, thanks to nVidia's refusal to implement ARB_texture_env_crossbar/the ARB's unwillingness to make an extension nVidia could implement, no GeForce card supports crossbar.
NVIDIA cards do not support ARB_texture_env_crossbar because NVIDIA had already implemented its own crossbar in NV_texture_env_combine4 (supported by almost all NVIDIA cards). So yes, GeForce cards support crossbar, but because the spec is a little bit different they do not support the *ARB* version of it.

zed
01-08-2003, 06:37 PM
texture_crossbar + nvidia cards has been discussed here before

anyway, it's part of OpenGL 1.4, so it should be supported on your NVIDIA card (even if only in software)
don't just check the extension string, 'see' that it isn't there, and assume it doesn't exist; set up the extension pointers (hmm, maybe it doesn't need pointers) and try it out

Humus
01-08-2003, 08:28 PM
Crossbar runs just fine in hardware on all nVidia cards from the TNT2 up, at least.
To use it, just check for either GL_ARB_texture_env_crossbar or GL_NV_texture_env_combine4. The crossbar in tex_env_combine4 is identical to the ARB functionality in the cases that matter.
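
Something like this (a rough sketch; a strict check should match whole, space-delimited extension names rather than substrings):

#include <string.h>  /* for strstr */

const char *ext = (const char *)glGetString(GL_EXTENSIONS);
int has_crossbar = strstr(ext, "GL_ARB_texture_env_crossbar") != NULL ||
                   strstr(ext, "GL_NV_texture_env_combine4") != NULL;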

vincoof
01-09-2003, 01:36 AM
anyway, it's part of OpenGL 1.4, so it should be supported on your NVIDIA card (even if only in software)
If it's supported in HW via NV_texture_env_combine4, it can *not* also be supported in software via ARB_texture_env_crossbar, since both the NV and ARB extensions use the same TEXTURE<n>_ARB tokens. In other words, it's not possible to tell the GL "I want the NV version" or "I want the ARB version"; you can only say "I want crossbar".


The crossbar in tex_env_combine4 is identical to the ARB functionality in the cases that matters.
There's a slight difference in fact, though it's really minor: it concerns what happens when you access a texture unit that is not bound correctly.

Tom Nuydens
01-09-2003, 02:59 AM
Originally posted by vincoof:
There's a slight difference in fact, though it's really minor: it concerns what happens when you access a texture unit that is not bound correctly.

That's why Humus said "in the cases that matter". Crossbar says that an incorrectly bound texture disables texture blending on that texture unit. Combine4 says that the texture returns all-white texels. Either way, this isn't supposed to happen in a well-written application, so it doesn't matter.

Note that crossbar was made part of the core in OpenGL 1.4. The spec now says that the results in the above case are "undefined", so both the crossbar and the combine4 behaviors are considered correct.

-- Tom

pbrown
01-10-2003, 06:03 PM
Originally posted by Tom Nuydens:
Note that crossbar was made part of the core in OpenGL 1.4. The spec now says that the results in the above case are "undefined", so both the crossbar and the combine4 behaviors are considered correct.

-- Tom

IMO -- this is what the ARB should have done when standardizing ARB_texture_env_crossbar in the first place. All this mess over what happens in a case nobody cares about anyway.

As far as programming goes, feel free to use the crossbar functionality if you find any of the following (a quick check is sketched below):

- OpenGL 1.4 or better
- ARB_texture_env_crossbar
- NV_texture_env_combine4

The enums are the same, and the functionality is the same except for the goofy "what if you reference a bad texture" case.
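
A quick way to check all three (a rough sketch; assumes the GL_VERSION string starts with "major.minor"):

#include <stdio.h>   /* for sscanf */
#include <string.h>  /* for strstr */

const char *ver = (const char *)glGetString(GL_VERSION);
const char *ext = (const char *)glGetString(GL_EXTENSIONS);
int major = 0, minor = 0;
sscanf(ver, "%d.%d", &major, &minor);  /* assumes "major.minor..." format */
int crossbar_ok = (major > 1 || (major == 1 && minor >= 4)) ||
                  strstr(ext, "GL_ARB_texture_env_crossbar") != NULL ||
                  strstr(ext, "GL_NV_texture_env_combine4") != NULL;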

V-man
01-11-2003, 08:56 AM
I remember Matt saying that the ARB version of crossbar was incompatible with NV hardware, so they defined their own extension.

I'm sure there was some other valid reason too.

Not a serious problem, so I don't mind.

zed
01-11-2003, 10:47 AM
>>- ARB_texture_env_crossbar
- NV_texture_env_combine4

The enums are the same, and the functionality is the same<<

can you do A+B * C+D with crossbar?
according to what I've read you can't; crossbar is just ARB_combine with access to the other texture units, isn't it?

Humus
01-11-2003, 11:46 AM
Yes, combine4 defines more functionality beyond crossbar. The crossbar part, though, is equivalent to the ARB crossbar except for the bad-texture case.
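
For example, combine4's four-argument ADD computes Arg0*Arg1 + Arg2*Arg3, which plain crossbar can't express (a rough sketch; see the NV_texture_env_combine4 spec for the exact semantics):

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE4_NV);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_ADD);           // Arg0*Arg1 + Arg2*Arg3
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE0_ARB);  // A
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE1_ARB);  // B
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB, GL_TEXTURE2_ARB);  // C
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE3_RGB_NV, GL_TEXTURE3_ARB);  // D
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND3_RGB_NV, GL_SRC_COLOR);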

pbrown
01-11-2003, 05:45 PM
Originally posted by V-man:
I remember Matt saying that the ARB version of crossbar was incompatible with NV hardware, so they defined their own extension.

I'm sure there was some other valid reason.

Not a serious problem, so I dont mind

If I recall correctly, NV_texture_env_combine4 came out way before ARB_texture_env_crossbar. (I was working for Intel at the time...) This incompatibility was definitely an issue when crossbar was standardized.

pbrown
01-11-2003, 05:48 PM
Originally posted by zed:
>>- ARB_texture_env_crossbar
- NV_texture_env_combine4

The enums are the same, and the functionality is the same<<

can you do A+B * C+D with crossbar?
according to what I've read you can't; crossbar is just ARB_combine with access to the other texture units, isn't it?

I meant that the crossbar functionality was the same.

At least in some respects, NV_texture_env_combine4 (which dates back to the Riva TNT) is still more powerful than combine/crossbar.

You are correct that all crossbar does is give you access to other units when using combine.