need ARB normalmap example

I am writing something that uses a normal map in tangent space, but I am not clear about how to transform the light vector into tangent space (or transform the normal from tangent space to eye space?), and how to pass the data needed to do that transformation in the vp & fp. I wish I could find a normal map example with source using ARB_vertex_program and ARB_fragment_program. Can anyone help me?

no luck? I've been looking around at some web sites, but can only find object space normal map demos, which don't solve my problem… :frowning:

http://cvs1.nvidia.com
http://www.ati.com/developer
http://www.booyah.com/article06-dx9.html

Last but not least:
http://www.google.com/search?hl=en&q=tangent+space+bump+mapping&spell=1

If you can’t find anything in those links then I’m afraid you’re SOL. :smiley:

-SirKnight

btw I have a question that might be a little off topic here… but still, why do so many people do normal mapping in tangent space? I did a little game for a course at university and implemented it there in object space. It ran wonderfully without all the problems and additional overhead of the tangent space computations (e.g. generation and passing of binormals, transforming into an additional coordinate system, …).
Is there any advantage to tangent space normal mapping that I'm simply not aware of and that merits all these problems?

You can’t re-use normal maps in other parts of your scene.

The reasons why tangent space is preferred are a) you can't reuse texture coordinates for symmetric parts of the model in object space, and b) you can only use detail normal maps for fine close-up detail, in addition to the main normal map, in tangent space. Part b is the biggie there.

Oh, and I should add a part c: c) normal maps would be tied to a specific object and couldn't be applied to other objects.

-SirKnight

I generated my normal maps from high-polygon models and applied them to the simple models I render… thus I don't see the need to use them in other places, since they won't fit onto another object anyway :slight_smile:
Hmm, ok, for things like walls I can see the advantage of using tangent space (still, for skinned characters I'm quite sure that using object space has some nice advantages).

[edit]
ah, just saw the other answer… didn't think of using a second, finer normal map to add further detail
[/edit]

Using object space normal maps might work ok in a situation like that (except for the detail map thing), but it’s not a good general solution for bump mapping a whole scene. That is only good for a simple demo to wow your friends. :slight_smile: I’m not a big fan of having to use a different technique to light every different type of object.

EDITED

-SirKnight

And whenever possible I like to just post code. Here’s a vertex and fragment program that do tangent space bump mapping with 2 point lights…

!!ARBvp1.0

PARAM mvi[4] = {state.matrix.modelview.inverse};
PARAM mvp[4] = {state.matrix.mvp};

ATTRIB tangent = vertex.texcoord[1];
ATTRIB binormal = vertex.texcoord[2];
ATTRIB normal = vertex.normal;

TEMP light0pos, light0vec;
TEMP light1pos, light1vec;

# vector pointing to light0 for bump mapping

DP4 light0pos.x, mvi[0], state.light[0].position;
DP4 light0pos.y, mvi[1], state.light[0].position;
DP4 light0pos.z, mvi[2], state.light[0].position;
SUB light0vec, light0pos, vertex.position;

# transform light0 vector into tangent space (DO NOT NORMALIZE)

DP3 result.texcoord[1].x, light0vec, tangent;
DP3 result.texcoord[1].y, light0vec, binormal;
DP3 result.texcoord[1].z, light0vec, normal;
MOV result.texcoord[1].w, 1.0;

# vector pointing to light1 for bump mapping

DP4 light1pos.x, mvi[0], state.light[1].position;
DP4 light1pos.y, mvi[1], state.light[1].position;
DP4 light1pos.z, mvi[2], state.light[1].position;
SUB light1vec, light1pos, vertex.position;

# transform light1 vector into tangent space (DO NOT NORMALIZE)

DP3 result.texcoord[2].x, light1vec, tangent;
DP3 result.texcoord[2].y, light1vec, binormal;
DP3 result.texcoord[2].z, light1vec, normal;
MOV result.texcoord[2].w, 1.0;

# regular output

DP4 result.position.x, mvp[0], vertex.position;
DP4 result.position.y, mvp[1], vertex.position;
DP4 result.position.z, mvp[2], vertex.position;
DP4 result.position.w, mvp[3], vertex.position;
MOV result.color, vertex.color;
MOV result.texcoord[0], vertex.texcoord[0];

END
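For reference, here is the light-vector part of the vertex program written out on the CPU in plain C: subtract the vertex position from the object-space light position (the SUB), then project the result onto the tangent, binormal, and normal (the three DP3s). A minimal sketch with made-up names; it assumes the light position has already been transformed into object space, which is what the DP4s against the inverse modelview do.

```c
#include <assert.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static vec3 sub3(vec3 a, vec3 b) {
    vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

/* the three DP3s: project the object-space light vector onto the
   tangent, binormal, and normal to get a tangent-space vector */
static vec3 to_tangent_space(vec3 lightvec, vec3 t, vec3 b, vec3 n) {
    vec3 r = { dot3(lightvec, t), dot3(lightvec, b), dot3(lightvec, n) };
    return r;
}

/* SUB light0vec, light0pos, vertex.position; then the DP3s */
static vec3 tangent_space_light(vec3 light_objspace, vec3 vertex_pos,
                                vec3 t, vec3 b, vec3 n) {
    return to_tangent_space(sub3(light_objspace, vertex_pos), t, b, n);
}
```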

!!ARBfp1.0

PARAM light0color = state.light[0].diffuse;
PARAM light1color = state.light[1].diffuse;
PARAM ambient = state.lightmodel.ambient;

TEMP rgb, normal, temp, bump, total;
TEMP light0tsvec, light1tsvec;

# get texture data

TEX rgb, fragment.texcoord[0], texture[0], 2D;
TEX normal, fragment.texcoord[0], texture[1], 2D;

# remove scale and bias from the normal map

MAD normal, normal, 2.0, -1.0;

# normalize the light0 vector

DP3 temp, fragment.texcoord[1], fragment.texcoord[1];
RSQ temp, temp.x;
MUL light0tsvec, fragment.texcoord[1], temp;

# normal dot lightdir

DP3 bump, normal, light0tsvec;

# add light0 color

MUL_SAT total, bump, light0color;

# normalize the light1 vector

DP3 temp, fragment.texcoord[2], fragment.texcoord[2];
RSQ temp, temp.x;
MUL light1tsvec, fragment.texcoord[2], temp;

# normal dot lightdir

DP3 bump, normal, light1tsvec;

# add light1 color

MUL_SAT bump, bump, light1color;
ADD_SAT total, total, bump;

# add ambient lighting

ADD_SAT total, total, ambient;

# multiply by regular texture map color

MUL_SAT result.color, rgb, total;

END

I have a question about this part of the vertex program:

# vector pointing to light0 for bump mapping

DP4 light0pos.x, mvi[0], state.light[0].position;
DP4 light0pos.y, mvi[1], state.light[0].position;
DP4 light0pos.z, mvi[2], state.light[0].position;
SUB light0vec, light0pos, vertex.position;

# transform light0 vector into tangent space (DO NOT NORMALIZE)

DP3 result.texcoord[1].x, light0vec, tangent;
DP3 result.texcoord[1].y, light0vec, binormal;
DP3 result.texcoord[1].z, light0vec, normal;
MOV result.texcoord[1].w, 1.0;

The first section seems to transform the light position from eye space to object space (using the inverse modelview matrix); that's just what I was thinking. But in the second section the light vector is transformed by a matrix built from the tangent, binormal and normal. I think such a matrix transforms a surface normal into object space (tangent->obj); to transform from object space to tangent space, we need the transposed matrix. Am I right?
In that case, I think I would need to send the three vectors of the transposed tangent, binormal and normal, and could not use the vertex normal directly that way.

Mogumbo:

How do you pass this info to the vertex program?

ATTRIB tangent = vertex.texcoord[1];
ATTRIB binormal = vertex.texcoord[2];
ATTRIB normal = vertex.normal;

vertex.normal has the values that you put in the glVertex3f()?

what about vertex.texcoord[1] and vertex.texcoord[2]?
and how do you set up the glMultiTexCoord3f() calls for the two texture units to use these shaders?
thank you.

Nil_z, you are correct. You need the transpose of the tangent space matrix, and that is actually what happens here. Think about how matrix multiplies work: each element of the result is the dot product of a row of the matrix with the vector. The tangent, binormal, and normal are the columns of the tangent-to-object matrix, but the DP3s use them as rows, so the light vector is effectively multiplied by that matrix's transpose, which for an orthonormal basis is its inverse, i.e. the object-to-tangent transform.
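To see this concretely, here is a small C sketch (names are mine, not from the thread): multiplying by the matrix whose columns are T, B, N maps tangent space to object space, while the three DP3s against T, B, N multiply by its transpose, and for an orthonormal basis the round trip recovers the original vector.

```c
#include <assert.h>

typedef struct { float x, y, z; } tv3;

static float tdot(tv3 a, tv3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* tangent->object: v = x*T + y*B + z*N (T, B, N as matrix COLUMNS) */
static tv3 tangent_to_object(tv3 v, tv3 t, tv3 b, tv3 n) {
    tv3 r = { v.x*t.x + v.y*b.x + v.z*n.x,
              v.x*t.y + v.y*b.y + v.z*n.y,
              v.x*t.z + v.y*b.z + v.z*n.z };
    return r;
}

/* object->tangent: the three DP3s, i.e. the transpose of the above */
static tv3 object_to_tangent(tv3 v, tv3 t, tv3 b, tv3 n) {
    tv3 r = { tdot(v, t), tdot(v, b), tdot(v, n) };
    return r;
}
```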

You could probably spend a lifetime writing comments in these programs, huh? :slight_smile:

nobill, you answered your own first question. You use glMultiTexCoord3f or glVertexAttrib3f. The vertex program spec has a chart (Table X.1) that shows which generic vertex attributes map to which texture coordinates.

vertex.normal is passed with glNormal, but you could also pass it in as a texture coordinate to make things consistent.

I don’t understand the last question about how you setup glMultiTexCoord3f(). You do need to create tangents and binormals at each vertex in addition to normals. That’s probably the most annoying part of implementing dot3 bump mapping.
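One common way to build that per-vertex data, along the lines of the Eric Lengyel code mentioned later in the thread, is to derive a tangent and bitangent for each triangle from its positions and UVs, then average and orthonormalize them per vertex. A minimal per-triangle sketch in C, with illustrative names; it assumes non-degenerate texture coordinates.

```c
#include <assert.h>

typedef struct { float x, y, z; } p3;
typedef struct { float u, v; } uv2;

/* Solve  e1 = du1*T + dv1*B  and  e2 = du2*T + dv2*B  for T and B,
   where e1, e2 are triangle edges and du/dv their UV deltas. */
static void triangle_tangent(p3 p0, p3 p1, p3 p2,
                             uv2 t0, uv2 t1, uv2 t2,
                             p3 *tangent, p3 *bitangent) {
    p3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    p3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    float du1 = t1.u - t0.u, dv1 = t1.v - t0.v;
    float du2 = t2.u - t0.u, dv2 = t2.v - t0.v;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);  /* assumes non-degenerate UVs */
    tangent->x   = r * (dv2 * e1.x - dv1 * e2.x);
    tangent->y   = r * (dv2 * e1.y - dv1 * e2.y);
    tangent->z   = r * (dv2 * e1.z - dv1 * e2.z);
    bitangent->x = r * (du1 * e2.x - du2 * e1.x);
    bitangent->y = r * (du1 * e2.y - du2 * e1.y);
    bitangent->z = r * (du1 * e2.z - du2 * e1.z);
}
```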

Ah, I get it. I am used to each vector in a matrix being a column, like the usual mat[0], mat[1], mat[2], mat[3] in a mat[4][4]. I forgot that in a vp each of those vectors is a row.
thanks, mogumbo :smiley:

Mogumbo:
“ATTRIB binormal = vertex.texcoord[2];”

vertex.texcoord[2] is the third texture unit, right? But we only have two textures… the decal texture on texture unit 0 and the normal map on unit 1.

Using another shader I was doing this:

glMultiTexCoord2fARB(GL_TEXTURE0_ARB, texVerts[i].x,texVerts[i].y);
v = lPos - vertices[i];
glMultiTexCoord3fARB(GL_TEXTURE1_ARB, v % binormal, v % tangent, v % normal);
glVertex3f(vertices[i].x, vertices[i].y, vertices[i].z);

(% is an operator for dot product. And v is a vector)

but I am confused about how I would change this to use your shader.
Sorry, I am new at this…
thanks

Does the number of temporary variables and their reuse affect the speed of a vp & fp? For example, is there a speed difference between reusing a single TEMP variable several times and using a separate TEMP variable for each calculation?

I've written my simple normal map test, but there is a problem: the bump on the y axis seems to be reversed. I am using Eric Lengyel's code from this link:
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/011349.html
to generate the tangent space transformation.
the normal map looks like this:
http://gba.ouroad.org/misc_download/2004/10/normal.JPG
it is generated by the NVIDIA photoshop normalmap plugin from this height map:
http://gba.ouroad.org/misc_download/2004/10/heightmap.JPG
and here is the screenshot:
http://gba.ouroad.org/misc_download/2004/10/screenshot.JPG
The light is positioned on Z+, so the light direction should be (0, 0, 1). Notice that the bump in the y direction is not correct.

If I don't do this line in the code:
tangent[a].w = (n % t * tan2[a] < 0.0F) ? -1.0F : 1.0F;
and just use 1.0f for the tangent's w, the result seems fine. I am a little confused here.
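For what it's worth, that line stores the handedness of the basis: the sign of dot(cross(n, t), tan2), which flips the bitangent where the UVs are mirrored. If I read Lengyel's operators correctly, % is the cross product and * the dot product in his vector class. A small C sketch of just that computation, with names of my own:

```c
#include <assert.h>

typedef struct { float x, y, z; } h3;

static h3 hcross(h3 a, h3 b) {
    h3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

static float hdot(h3 a, h3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* tangent[a].w = (n % t * tan2[a] < 0.0F) ? -1.0F : 1.0F; */
static float handedness(h3 n, h3 t, h3 tan2) {
    return hdot(hcross(n, t), tan2) < 0.0f ? -1.0f : 1.0f;
}
```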

Oops, I've found what's wrong: I ignored the image-origin bits in the TGA header, so the image I read into memory was upside down. Sorry for my last post; Eric's code is correct.

Nil_z, correct again, I think. OpenGL matrices are said to be column-major, but I still think of them as row-major for some reason. I guess that's how I learned matrices originally, but it's all sort of arbitrary anyway.

nobill, that’s kinda neat. It looks like you are passing the light vector in tangent space, which I have never tried before. But (if I’m reading your code correctly) you will have to create new vertex data every time you move the light. The vp/fp I posted above allows you to move the light around without changing the vertex data; the vertex data contains a complete normal, binormal, and tangent.