Materials, DOT3 and Combiners

Hello all, this is my first post, so I’m not sure I’m following the rules… excuse me if I’m not.

I have implemented my own Material class, which allows for one or more diffuse maps, a reflection map and a bump map. I’m using texture env combiners for the basic operations…

My main problem comes when I have to apply the bump mapping… because DOT3 produces a grayscale result, I put the normal map in the first TU and the diffuse texture in the second (when there’s a bump map, that is). Now my problem is that when I apply a bump map this way, I lose all the material properties (that’s ambient, diffuse, specular, emission and shininess): they become grayscale in this unit, and there doesn’t seem to be a straightforward way to read that first stage (before any texture is applied) from the second TU.

Uhm… I don’t think I’m being clear on this… lemme try to be clearer:

  • Correct:

    • Lighting is applied, along with the material properties
    • I modulate the diffuse texture on the first TU, using “previous” as source 0 and “texture” as source 1, so the material color from lighting remains
  • Incorrect:

    • Lighting is applied, material properties etc.
    • I do a DOT3 mapping with the normal map
    • I modulate the diffuse texture, using “previous” as source 0 and “texture” as source 1, but source 0 is now a grayscale image, so I lose all the lighting color

Hope that made it clearer… can I use the lighting and material properties on the second TU (maybe as a third source) to achieve this, using env combiners?

If not, what do you recommend to do it in a single pass? (I could easily modulate in a second render pass, but that’s not what I want)

As for me, I do it another way.

I use vertex programs for the per-vertex setup, where I calculate the vertex color. This preserves the material properties (probably… anyway, it looks like it does).

Then I use texture shaders, register combiners, etc…

Mail me if you want the code. My e-mail is in my profile.

Env combiners are quite deranged, erm, limited.
It’s quite hard to get a lot of stuff running with them. If you are on a GeForce 3 or 4, use register combiners; they are a lot more powerful. On a GeForce FX or Radeon 9500+, use ARB_fragment_program, that will make you a lot happier.

Jan.

If you can get away with using the combiners on a GeForce FX, it will be faster than using arbfp1.

Hi,

You can use GL_PRIMARY_COLOR as a source instead of GL_PREVIOUS to refer to the untextured color of the fragment. I’m not sure you can make it fit in two texture stages, though; you might need a third stage to modulate with the material color.
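A minimal sketch of that suggestion, assuming a current GL context and the GL_COMBINE texture environment (ARB_texture_env_combine / GL 1.3) — the unit number is just an example:

```c
/* Select GL_PRIMARY_COLOR instead of GL_PREVIOUS on a later texture
 * unit, so the combiner reads the untextured, lit vertex color rather
 * than the previous stage's grayscale DOT3 result. */
glActiveTexture(GL_TEXTURE2);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,  GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB,  GL_PREVIOUS);      /* dot3 * diffuse map */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB,  GL_PRIMARY_COLOR); /* lit material color */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
```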

A better alternative would be to include the material color in the diffuse textures.

-Ilkka

Well, all the suggestions are nice, but I was trying to go another way… I don’t want this to be NV-specific, so I don’t want to use register combiners, and I want to leave fragment programs available for special effects (I’m not sure, but I don’t think you can bind two fragment programs at the same time, right?). I want the bump map to be available as a standard material property, and leave fragment programs for SFX or whatnot.

Also, although the target spec is GF3+, I want to support by default (that’s what I have now) a diffuse map, reflection map, bump map and lightmap in a single pass (that’s 4 texture stages), so I don’t think it’d be wise to add another one just to colorize the material… if it’s not possible, I’ll think about what’s better to sacrifice: either the fragment program (or make a specific one for every sfx-ed bump-mapped material), a texture stage, or just a second pass… I guess I’ll have to discuss this with the 3D artist (he’d be the one making the sacrifice, actually ;D)

Thanks for all the suggestions… if you have new ones, keep ’em coming

I put a webpage up with some snapshots at http://www.unknownproductions.org/snaps

First one is just some rusty metal + bump test.

The cube ones are the ones that show what I want to achieve:

  • First one is just a bump map itself (not very high resolution and very compressed JPG for textures, don’t mind it)

  • Second is texture + bump map, colorized… this has weird material properties (red ambient + green emission), but the DOT3 stage chops them up

  • Third is the texture by itself (just to see the difference with the bump map)

  • Fourth includes a reflection map and uses 3 texture stages… once again, material properties get chopped by the DOT3 stage

  • Fifth shows how the same properties look if I take out the bump map (and use texture + reflection only). It works because the diffuse texture stage gets the material color.

I haven’t included any snap with a lightmap because I’m too lazy to get one.

Originally posted by JustHanging:
I’m not sure you can make it fit in two texture stages, though; you might need a third stage to modulate with the material color.

You can:
stage0: dot( norm_map, light_vector)
stage1: previous * diffuse_texture
blending: const_color, zero

You just have to put your material color in the blend const color.
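A sketch of that trick, assuming a GL context and EXT_blend_color / GL 1.4; `mat_r`, `mat_g`, `mat_b` are hypothetical variables holding the material’s diffuse color:

```c
/* stage0: dot(normal_map, light_vector)  -- set up elsewhere */
/* stage1: previous * diffuse_texture     -- set up elsewhere */

/* Multiply in the material color as the fragment is blended into the
 * framebuffer: result = src * constant_color + dst * 0. */
glEnable(GL_BLEND);
glBlendColor(mat_r, mat_g, mat_b, 1.0f);  /* material diffuse color */
glBlendFunc(GL_CONSTANT_COLOR, GL_ZERO);
```

Note this only works when the surface is drawn opaquely, since it consumes the blend stage.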

That only allows for the diffuse color… I’m talking about the ambient, diffuse, specular, emission and shininess properties

Changing the color texture is not hard at all, but that won’t allow for per-material lighting properties

… unless you calculate the color on the CPU, but who wants to use the CPU, right? ;D

Ok, this is how I would do it in your circumstances (assuming the normal map is in object space):

Enable separate specular.

stage0: dot( norm_map, light_vector)
stage1: previous * diffuse_texture
stage2: previous * primary

This allows all per-vertex material properties to be used (assuming you have 3 texture stages).
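The three stages above could be set up roughly like this (a sketch, assuming a GL context, GL 1.3 combiners, and the light vector packed into unit 0’s constant env color):

```c
/* Separate specular: the specular term is added after texturing,
 * so it isn't multiplied away by the combiner chain (GL 1.2+). */
glLightModeli(GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR);

/* stage0: dot3(normal_map, light_vector) */
glActiveTexture(GL_TEXTURE0);                 /* normal map bound here */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);   /* normal map  */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_CONSTANT);  /* light vec   */

/* stage1: previous * diffuse_texture */
glActiveTexture(GL_TEXTURE1);                 /* diffuse map bound here */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);

/* stage2: previous * primary (the lit material color) */
glActiveTexture(GL_TEXTURE2);                 /* unit enabled, any texture */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR);
```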

As a programmer, you can’t combine a fragment program with normal texturing.

As an artist (yes, I’ve worked as one), I’m not very interested in combining material color with a diffuse texture. It’s either/or: if I want a constant color, I can use the material color; otherwise I’ll use a texture. In practice I’d never use the material color, since no material is that flat anyway, and I want to encode whatever lighting the engine doesn’t support (ambient occlusion, for example) into the texture.

There’s a small possibility of saving some memory by re-using the same textures with different colors, but it’s very limiting too. To simplify: if you use a red color, you can’t have any blue pixels on the surface at all. I’d rather create every material explicitly, making sure it looks as good as possible. The idea of generic textures usually leads to artificial results.

But of course you should listen to your own artists, not me.

-Ilkka

[This message has been edited by JustHanging (edited 01-29-2004).]

JustHanging, damn, you are so right… I just can’t think. I feel stupid now! (about the “as a programmer” thing)

About the artist thing, except for the diffuse color, I don’t agree with you (I’ve done some 3D too)… I do agree that setting up static material properties for a static scene might be pointless, but the nice thing about material properties is that they’re pretty dynamic, and you can make some really nifty effects with them in realtime animations… I’ve done it in the past.

Then again, I’m a programmer with “coder-color syndrome”, so yes, I guess it should be the artist who decides which features go in.

Anyway, you’re more than right about fragment programs + texturing, so this kinda makes my original question stupid if I’m going to use fragment programs anyway. Now I just need someone to donate some ATI hardware so I can use the equivalent of register combiners… it’s been a while since I researched both vendors’ caps… is there already a non-vendor-specific extension for something like register combiners (or pixel shaders 1.1)?

Thank you very much

Originally posted by Jcl:
[b]that only allows for diffuse color… i’m talking about ambient, diffuse, specular, emission and shininess properties

changing the color texture is not hard at all, but that won’t allow for lighting-properties for a material[/b]

Why would you mix the OGL standard lighting equation with normal mapping? Either you use the OGL lighting system alone, or the lighting system you code with texture units, but not both at the same time!

Why would you compute Gouraud lighting (PRIMARY_COLOR), and then use its result to compute Phong lighting?? That seems totally incorrect to me, unless I didn’t get what you mean.

What you might want to do is:

  • compute the diffuse color in one pass
  • then compute the specular color in another pass and add it.

tfpsly: to put it simply, I’m referring to glMaterial() [ambient, diffuse, specular, shininess, emission] vs glColor [diffuse & ambient at most, typically just diffuse].

glMaterial parameters are a -very- easy way to tweak your material parameters while still keeping the same lighting… that’s just Gouraud, however, as you said, so you might (or might not) want to add a normal/bump map or lightmap to get per-pixel lighting, but that doesn’t react to lights the way glMaterial does. So you first change the surface properties, then apply a modulated texture on top, leaving all the lighting calculations (at least the Gouraud part) to the GPU.
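To make the comparison concrete, here is the full set of glMaterial() properties being discussed (the values are just examples, echoing the red-ambient / green-emission cube from the screenshots):

```c
/* Per-surface lighting properties via glMaterial -- versus the
 * diffuse (+ambient) you can reach through glColor + color material. */
GLfloat ambient[]  = { 0.2f, 0.0f, 0.0f, 1.0f };  /* red ambient    */
GLfloat diffuse[]  = { 0.8f, 0.8f, 0.8f, 1.0f };
GLfloat specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat emission[] = { 0.0f, 0.3f, 0.0f, 1.0f };  /* green emission */
glMaterialfv(GL_FRONT, GL_AMBIENT,   ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE,   diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR,  specular);
glMaterialfv(GL_FRONT, GL_EMISSION,  emission);
glMaterialf (GL_FRONT, GL_SHININESS, 32.0f);
```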

I’m sorry if my English is too chunky… sometimes I just can’t express clearly what I’d like to say, and use more words to try to describe it, which might lead to a mess :)

Ok, got it: you modulate the Gouraud lighting with the normal mapping… which does not give physically exact results (*), but which might still look good (and which includes diffuse + specular + …).

(*) Aren’t you also “bumping” the ambient lighting for example ? =)

Note that glColor is unused when lighting is enabled. The way to get colors into your lit vertex color is to enable COLOR_MATERIAL.

FWIW, our artists use vertex colors extensively, to get subtle variation in shading, heavier tones towards the ground while using tiling textures, etc. It’s a useful thing.
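jwatte’s point about COLOR_MATERIAL can be sketched like this (assuming a GL context, with lighting enabled):

```c
/* With lighting enabled, glColor is ignored unless color material is
 * on; this routes glColor into the ambient and diffuse material terms,
 * so per-vertex colors feed the lighting equation. */
glEnable(GL_LIGHTING);
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
glColor3f(0.9f, 0.7f, 0.5f);  /* now affects the lit vertex color */
```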

ATI has a texture shader extension that is supported from Radeon 8500 and up. It’s about equivalent to ps_1_4 under DirectX.

tfpsly: yes, that’s exactly what I want to achieve (and yes, I’m bumping all lights… if the surface is bumpy, all lighting applied to it should be bumped too, I believe that’s correct). And yes, it’s not realistic, but I’m not aiming for realism, I’m aiming for flexibility (if the artist wants to keep the material colors white and apply coloured textures, he’s free to do it that way).

jwatte: yes, that’s what I was referring to with the glMaterial vs glColor comparison (COLOR_MATERIAL enabled and glColorMaterial set to DIFFUSE or AMBIENT_AND_DIFFUSE, that’s the most you can reach).

I want to achieve exactly what tfpsly said, but in a flexible way so I can run it on a variety of cards… I guess the only options are fragment programs (vastly limiting the number of cards), or NV register combiners plus the ATI equivalent, which is a bad choice for me since I don’t have any ATI card to develop on; it’d be next to impossible to develop that codepath without seeing the results…

I guess I’ll have to save and buy an ATI card or something.

To be honest, that’s the part I’ve always hated the most about OpenGL (vs DirectX, for example), and although it’s getting better with modern cards, it always comes back to kick me in the ass :P

Originally posted by Jcl:
I want to achieve exactly what tfpsly said, but in a flexible way so I can run it on a variety of cards… I guess the only options are fragment programs (vastly limiting the number of cards), or NV register combiners plus the ATI equivalent
[…]
To be honest, that’s the part I’ve always hated the most about OpenGL (vs DirectX, for example), and although it’s getting better with modern cards, it always comes back to kick me in the ass :P

Well, you can still do that using Cg, which is a high-level shading language that gets compiled to either NV’s register combiners, ATI’s equivalent, fragment programs, etc., depending on the best profile the hardware can use. The best way to program pixel shaders, in my opinion: support for everything >= GF3 and Radeon 7500:

Support for 14 profiles:

* vs_1_1 for DirectX 8 and DirectX 9
* vs_2_0 and vs_2_x for DirectX 9
* ps_1_1, ps_1_2 and ps_1_3 for DirectX 8 and DirectX 9
* ps_2_0 and ps_2_x for DirectX 9
* arbvp1 [OpenGL ARB_vertex_program]
* arbfp1 [OpenGL ARB_fragment_program]
* vp20, vp30 [NV_vertex_program 1.0 and NV_vertex_program 2.0]
* fp30 [NV30 OpenGL fragment programs]
* fp20 [NV_register_combiners and NV_texture_shader]

Yes, actually, I started downloading the Cg toolkit shortly after writing that post (the download cut off at 200 MB, so I’m redownloading it)… So which ATI cards support arbfp1? Anything over the 7500?

Last time I checked, Cg only did DirectX pixel shaders… I used “nvparse” to convert to register combiners, but that was pretty limited and buggy, and I’d rather use NVRCs directly… I’ll check again now, it seems it has evolved a lot.

Thank you very much