
View Full Version : Implementing DOT3 bump mapping



Asshen
02-14-2004, 03:26 AM
Can anybody help me with DOT3 bump mapping?
See the code below; I tried to use an example from someone else (texture layer 4), but all I get is black and white :)

Doesn't a normal map need a light source?
Or does the DOT3 parameter know where the light is?

I'm just starting with all this combiner stuff, so a little explanation would come in handy too :)

Thank you.

The code:



void SetupTextures()
{
    //color array, this one does the magic
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_FLOAT, 0, TerrainCA);

    /*
    //tex5 - BUMP
    glClientActiveTextureARB(GL_TEXTURE4_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, TerrainTAL3);
    */

    //tex4
    glClientActiveTextureARB(GL_TEXTURE3_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, TerrainTAL3);

    //tex3
    glClientActiveTextureARB(GL_TEXTURE2_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, TerrainTAL2);

    //tex2
    glClientActiveTextureARB(GL_TEXTURE1_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, TerrainTAL1);

    //tex1
    glClientActiveTextureARB(GL_TEXTURE0_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, TerrainTAL0);

    //verts
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, TerrainVA);

    /*
    //tex5 - bump
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_REPLACE);

    glActiveTextureARB(GL_TEXTURE4_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, Textures[3]);

    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGB_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);
    */

    //tex4 - detail
    glActiveTextureARB(GL_TEXTURE3_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, Textures[3]);

    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);

    //tex3
    glActiveTextureARB(GL_TEXTURE2_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, Textures[1]);

    //Tell OpenGL to combine the textures using interpolation
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_INTERPOLATE_ARB);

    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);

    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);

    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB, GL_PRIMARY_COLOR_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_COLOR);//ALPHA);

    //tex2
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, Textures[2]);

    //Tell OpenGL to combine the textures using interpolation
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_INTERPOLATE_ARB);

    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);

    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);

    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB, GL_PRIMARY_COLOR_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_ALPHA);//COLOR);//ALPHA);

    //tex1
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, Textures[0]);

    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
}

I then render using:



glDrawElements(GL_TRIANGLES, ElementCount, GL_UNSIGNED_INT, indexes);

Asshen
02-15-2004, 02:09 AM
Nobody?
I'm disappointed. There are much more difficult topics here that get answered right away...

K.

Ysaneya
02-15-2004, 03:16 AM
You didn't get any response because:
1. This topic has been covered to death multiple times on this very same board.
2. There is plenty of documentation and there are lots of demos on the web; check out ATI's or NVidia's developer sites.
3. It's the weekend; most developers are away until Monday. It's a bad idea to expect a quick response on a weekend.

Now, moving on to your code: it seems like you're missing some important points about dot3 bump mapping. Your code is doing the dot3 operation between the normal map and the detail texture, so it's no surprise it doesn't work.

Yes, you need a light for dot3 bump mapping to work. This light must be transformed into tangent space (although object space is also possible). Generally it's done this way:

1. Compute the tangent space at load time or in preprocessing. The tangent space is formed of 3 vectors: the normal at the vertex, the tangent at the vertex, and the binormal (the cross product of the other two). The tangent space is computed from texture coordinates, so you can only bump map textured meshes. There was a thread about computing the tangent space on this board a few weeks ago; it contained working code, check it out if you want it.

2. In real time (preferably in a vertex shader, but you can also do it on the CPU), for each vertex, you calculate the vertex-to-light vector, normalize it, and transform it by the tangent space matrix formed by the 3 vectors calculated in step 1.

3. You bind a normalizing cube map onto a texture unit, say, TMU #0. You feed the tangent-space-transformed vertex-to-light vector into the 3D texture coordinates for this TMU. When doing the texture lookup, the cube map will output the same vector, but normalized. This takes care of the loss of normalization due to linear interpolation of texture coordinates between 2 vertices. This step is optional; you can also do the renormalization in a pixel shader, or just not do it, with better or worse quality.

4. You sample your normal map on TMU #1 with standard 2D texture coordinates (be careful of the scale/bias needed for signed vectors in the normal map), and you perform a dot3 operation between TMU #0 and TMU #1. This gives you a shading intensity, which you can then multiply with the light color, the diffuse texture, etc.

So as you see, your code is missing almost everything.

Y.


[This message has been edited by Ysaneya (edited 02-15-2004).]

Asshen
02-15-2004, 04:19 AM
Oh, I didn't think of the weekend... I mainly code on weekends as a hobby; I didn't know many people actually do this as a job.

As I said, I'm just starting with this combiner stuff, so it's obvious I will make mistakes before I get there; that's why I'm asking for help :)

Thanks for your explanation.
K.

JanHH
02-15-2004, 07:07 PM
read this:
http://www.paulsprojects.net/tutorials/simplebump/simplebump.html

this is the most detailed, beginner-friendly explanation of how to do it on the whole web ;).

although it's outdated: it does not do specular highlights, and it needs two passes for diffuse lighting, which more modern hardware (the critical part is the number of texture units) can do in one with ARB_texture_env_combine.

But it's not a very good idea to use these combiner extensions anyway, as they are

- not very powerful
- very complicated

compared to vertex programs/fragment programs.

the tutorial from paulsprojects is mainly useful for understanding how it works in theory, and for getting the normalization cube map code ;).

I first thought that vertex/fragment programs were scary, but the opposite is the case: they are very easy to use and even very easy to debug. So if you have a graphics card that supports them, I would strongly recommend starting with them.

Jan

Asshen
02-18-2004, 12:27 AM
Thank you.

I'll check out the theory, and start learning shaders afterwards http://www.opengl.org/discussion_boards/ubb/smile.gif

Greetz.
K.

jorge1774
03-30-2004, 11:44 PM
Hi all.

I recommend using combiners + vertex programs, and NEVER fragment programs. With fragment programs, on a standard GFX 5700, you can cut your rendering speed by a quarter, or more...

See you.

KuriousOrange
04-01-2004, 03:02 AM
I must agree. You should implement a register combiner path, as it's the most powerful way of manipulating fragments on nvidia hardware up to the GeForce3 (and therefore for the majority of users), whereas fragment programs simply won't work there, and ARB_texture_env_combine simply doesn't offer anywhere near as much power.