yet another bumpmap question :)

which is better: using the nvidia plugin that converts bumpmaps (greyscale) to normal maps, or writing your own function to do the conversion?

I hope this makes sense… :0

also, i was wondering if someone could explain the process behind converting a bumpmap to a normal map… The way i understand it at the moment is that it encodes the change in height when moving along both the s & t axes… is this right?

Thanks.

[This message has been edited by robert (edited 03-09-2002).]

ok. the idea:
your greyscale bumpmap defines a heightmap. this heightmap (guess all of you know landscape engines) has normals on it, the surface normals… the converter takes the heightmap and, for each point on the map, generates the normal at that point.

now:
if you use your own function then
+) you can load the greyscale maps directly (less memory for the texture)
+) you can even GENERATE the heightmaps in realtime and convert them to normalmaps
+) you can code whatever converter you want
-) normally your own code is the most simple approach to generating the normal, which is not that accurate
-) the tool from nvidia provides the possibility to generate the normal by sampling the heights all around the current point on the texture; the result can be sharper or softer bumpmaps, and the quality is better than the simple way normalmaps are normally generated…

its up to you.

btw: do you know some calculus? what the whole thing does is finite differencing to get the normal of a 2d function of height (our bumpmap)
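a minimal sketch of the simple approach (central differences; assumes an 8-bit greyscale heightmap and clamps at the borders):

#include <math.h>

// sketch: convert an 8-bit greyscale heightmap to an RGB normalmap.
// "scale" controls how strong the bumps come out.
void heightmapToNormalmap(const unsigned char *height, unsigned char *normal,
                          int w, int h, float scale)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // clamp at the borders instead of wrapping
            int x0 = (x > 0)     ? x - 1 : x;
            int x1 = (x < w - 1) ? x + 1 : x;
            int y0 = (y > 0)     ? y - 1 : y;
            int y1 = (y < h - 1) ? y + 1 : y;

            // finite differences of the height along s and t
            float ds = (height[y * w + x1] - height[y * w + x0]) / 255.0f * scale;
            float dt = (height[y1 * w + x] - height[y0 * w + x]) / 255.0f * scale;

            // the surface normal is (-dh/ds, -dh/dt, 1), normalized
            float nx = -ds, ny = -dt, nz = 1.0f;
            float len = sqrtf(nx * nx + ny * ny + nz * nz);
            nx /= len; ny /= len; nz /= len;

            // scale/bias -1..1 into 0..255 for the texture
            unsigned char *p = normal + (y * w + x) * 3;
            p[0] = (unsigned char)(nx * 127.5f + 127.5f);
            p[1] = (unsigned char)(ny * 127.5f + 127.5f);
            p[2] = (unsigned char)(nz * 127.5f + 127.5f);
        }
    }
}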

hope it helped a little

Helped a lot, thanks

I have done basic calculus, so i would struggle a little to do it… do you know anywhere i can find code to do this, so i can learn from it?

after doing research into bump mapping… i have to say that one of the most confusing/hard/annoying things is the number of different ways this technique can be achieved (dot3, normal bumpmapping, bumpmapping on different classes of hardware)… soooo many to choose from

in fact, the way bumpmapping is done on all of today's gpu's is mathematically always the same.
just the implementation on the specific hardware is the problem… cause a full phong-lighting-equation is not yet possible to pull perfectly into hardware perpixel

soo…
depending on the power you have, you try to find a more or less precise perpixellighting equation which looks as good as you can get on the current gpu… and THATS one problem. the other is that the extensions are not the same on all gpu's, so you have to learn how to set up your equation on the current hardware…

but:
the equation is always this:

color = ambient + diffuse + specular; // you can even create other equations if you think you need to, but this one is wellknown and good looking for most stuff

now… ambient is simply the ambient color… a constant color that says how it should look where its NOT lit…

diffuse is:
max(
normalized(surfacenormal) dot normalized(point_to_light),
0
)
meaning the dotproduct of your surface normal and the normalized point-to-light vector IF this dotproduct is bigger than 0, else its 0

i won't go into specular for now cause its more complicated in terms of nice fast fakes versus full specular, and first you'll have enough trouble implementing diffuse bumpmapping

your bumpmap, which was a heightmap at first, you converted to a normalmap, meaning a map of surface normals. THESE then have to be dotted (dot product) with the normalized point_to_light vector… THAT is bumpmapping as we know it…
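to make that concrete, a tiny sketch of what gets computed per pixel for the diffuse term (names made up):

// sketch: the diffuse term the hardware effectively computes per pixel.
// n = normal from the normalmap, l = interpolated tangent-space light
// vector, both assumed already normalized.
float diffuseTerm(const float n[3], const float l[3])
{
    float d = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
    return (d > 0.0f) ? d : 0.0f; // max(n . l, 0)
}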

here: geforce1/2 perpixellighting at its best (in one pass)…
http://tyrannen.starcraft3d.net/PerPixelLighting

needed quite a bit of work to get the full equation faked on gf2, but it looks quite cool imho

Originally posted by robert:
So i would struggle a little to do it… do you know anywhere i can find code to do this, so i can learn from it?

On my site you can download, for instance, my refraction demo; look in the texture.h file for a function called toNormalMap().

Basically, it uses the sobel operator on the closest 3x3 block, which returns edges and their orientation, sort of, and that is then used to create the vector.
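the idea in a rough sketch (not the actual toNormalMap() code):

#include <math.h>

// sketch: Sobel-based normal from a 3x3 block of heights (h, in 0..1)
// around the current texel. "strength" plays the role of a bump scale.
void sobelNormal(const float h[3][3], float strength, float n[3])
{
    // the two Sobel kernels estimate the height gradient along s and t
    float gx = (h[0][2] + 2.0f * h[1][2] + h[2][2])
             - (h[0][0] + 2.0f * h[1][0] + h[2][0]);
    float gy = (h[2][0] + 2.0f * h[2][1] + h[2][2])
             - (h[0][0] + 2.0f * h[0][1] + h[0][2]);

    // gradient to normal, then normalize
    n[0] = -gx * strength;
    n[1] = -gy * strength;
    n[2] = 1.0f;
    float len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    n[0] /= len; n[1] /= len; n[2] /= len;
}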

a full phong-lighting-equation is not yet possible to pull perfectly into hardware perpixel
imho

Isn’t GeForce3/4 doing that?
With the texture shader sequence:
GL_TEXTURE_2D
GL_DOT_PRODUCT_NV
GL_DOT_PRODUCT_DIFFUSE_CUBE_MAP_NV
GL_DOT_PRODUCT_CONST_EYE_REFLECT_CUBE_MAP_NV
as described by Mark Kilgard in the NV_texture_shader extension (under the Issues section titled "Does this extension support so-called 'bump environment mapping'?")

this one can NOT and will NEVER have a perpixel variable specular exponent…
the only gpu capable of this is the ati radeon8500… in fact, there you can implement one perpixel-phong-lit lightsource per pass… at least i think you can… ask Humus about ati programming
but on nvidia gpu's its not yet possible without render-to-texture and multipass…

Thanks for all your replies.

davepermen, i checked your demo out and it looks really nice!

Humus, i really wanted to test your demo, but i only have a gf2

I have another question: I am trying to follow one of the demos from the sdk that uses glColor to store the tangent(texture)-space light vector and glSecondaryColor to store the half-vector… my question(s) are:

  1. does the half angle vector need to be in tangent-space as well?

  2. Is this the correct step for finding the tangent-space light vector?:

    1. multiply this by the inverse modelview matrix to get light in object-space
    2. Get light-to-vertex Vector by subtracting the light vector from the vertex
    3. Normalize this result.
    4. multiply by .5 and add .5
    5. stick into glColor3f

please excuse me if i am completely wrong

  1. does the half angle vector need to be in tangent-space as well?

I'm pretty sure that light vector AND half angle vector have to be in tangent space.

  2. Is this the correct step for finding the tangent-space light vector?:

    1. multiply this (= light position in eye space) by the inverse modelview matrix to get light in object-space
    2. Get light-to-vertex Vector by subtracting the light vector from the vertex
    3. Normalize this result.
    4. multiply by .5 and add .5
    5. stick into glColor3f

Your step 2 should look like this:
Get the light vector in Object Space by subtracting the vertex position from the light position in Object Space (Can someone tell me why I saw this as vertex pos - light pos somewhere? … that didn't work for me).

You have to convert the light vector, which is in object space, into tangent space after step 3 (you didn't mention a conversion into tangent space).
Multiply the light vector with the tangent matrix in order to get the light vector in tangent space.
After that, scale and bias the resulting vector (your 4th step) and put it into the primary color (5th step).
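put together, the per-vertex chain might look roughly like this (just a sketch, names made up; T, B and N are the vertex tangent basis in object space):

#include <math.h>

// sketch: per-vertex work for the primary color. lightObj is the light
// position already transformed into object space.
void tangentSpaceLight(const float lightObj[3], const float vertex[3],
                       const float T[3], const float B[3], const float N[3],
                       float out[3])
{
    // step 2: light position minus vertex position
    float l[3] = { lightObj[0] - vertex[0],
                   lightObj[1] - vertex[1],
                   lightObj[2] - vertex[2] };

    // step 3: normalize
    float len = sqrtf(l[0] * l[0] + l[1] * l[1] + l[2] * l[2]);
    l[0] /= len; l[1] /= len; l[2] /= len;

    // the missing step: rotate into tangent space (dot with T, B, N)
    float lt[3] = { l[0] * T[0] + l[1] * T[1] + l[2] * T[2],
                    l[0] * B[0] + l[1] * B[1] + l[2] * B[2],
                    l[0] * N[0] + l[1] * N[1] + l[2] * N[2] };

    // step 4: scale and bias -1..1 into 0..1, then glColor3fv(out)
    out[0] = lt[0] * 0.5f + 0.5f;
    out[1] = lt[1] * 0.5f + 0.5f;
    out[2] = lt[2] * 0.5f + 0.5f;
}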

Well, that's the way I do it (and it works), but I would call myself a per-pixel lighting beginner.

Diapolo

You have to convert the light vector, which is in object space, into tangent space after step 3 (you didn't mention a conversion into tangent space).

whoops, i missed the most important part

thanks for your help

[This message has been edited by robert (edited 03-11-2002).]

I have another problem

imagine a bumpmapped quad… i am only using the primary color at the moment, so no specular with the secondary (which could be the problem). Now, when my light goes to the left hand side, the quad darkens, and when it goes to the right, the quad lightens… surely this isn't right?

btw, my quad is made of two triangles… it was left over from some other bumpmapping algorithm that i was trying.

Could you perhaps give a short code sample or some form of pseudo code of what you are currently doing?

How do you calculate your tangent matrix?
Is the math you are doing correct?
Which Extensions are you using?

The error lies somewhere in your bump mapping code, I guess.

Diapolo

ok, here ya go =):
Please excuse the mess and the use of unnecessary variables… once i have it working i will tidy it up and make it a bit nicer

float Normal[] = { 0,1,0,
                   0,1,0,
                   0,1,0,
                   0,1,0,
                   0,1,0,
                   0,1,0 };

float Vertex[] = { -20,0, 20,
                    20,0, 20,
                   -20,0,-20,
                    20,0, 20,
                    20,0,-20,
                   -20,0,-20 };

float Binormal[] = { 0,0,-1,
                     0,0,-1,
                     0,0,-1,
                     0,0,-1,
                     0,0,-1,
                     0,0,-1 };

float Tangent[] = { 1,0,0,
                    1,0,0,
                    1,0,0,
                    1,0,0,
                    1,0,0,
                    1,0,0 };

those are the quad values; like i said before, they are two triangles

This is the code to get the light and eye vectors and multiply them into object space:

light[0] = lightPosition.x; light[1] = lightPosition.y; light[2] = lightPosition.z;
eye[0] = TheCam.pos.x; eye[1] = TheCam.pos.y; eye[2] = TheCam.pos.z;

// convert light to object space by multiplying by inverse modelview matrix
li[0]=(light[0]*temp[0])+(light[1]*temp[4])+(light[2]*temp[8]);
li[1]=(light[0]*temp[1])+(light[1]*temp[5])+(light[2]*temp[9]);
li[2]=(light[0]*temp[2])+(light[1]*temp[6])+(light[2]*temp[10]);

// convert eye to object space by multiplying by inverse modelview matrix
e[0]=(eye[0]*temp[0])+(eye[1]*temp[4])+(eye[2]*temp[8]);
e[1]=(eye[0]*temp[1])+(eye[1]*temp[5])+(eye[2]*temp[9]);
e[2]=(eye[0]*temp[2])+(eye[1]*temp[6])+(eye[2]*temp[10]);

This is the start of the loop to draw the poly’s

glBegin(GL_TRIANGLES);
for (int blah = 0; blah < 6; blah++)
{
	// vertex-to-light vector (normalized later, in tangent space)
	l[0] = (li[0] - Vertex[(blah * 3) + 0]);
	l[1] = (li[1] - Vertex[(blah * 3) + 1]);
	l[2] = (li[2] - Vertex[(blah * 3) + 2]);

	// vertex-to-eye vector
	eye[0] = (e[0] - Vertex[(blah * 3) + 0]);
	eye[1] = (e[1] - Vertex[(blah * 3) + 1]);
	eye[2] = (e[2] - Vertex[(blah * 3) + 2]);

	eyeT = l + eye; // unnormalized half-angle vector

	// build the tangent matrix for this vertex: columns are
	// tangent, binormal and normal (all in object space)
	ma[0] = Tangent[(blah * 3) + 0];
	ma[1] = Binormal[(blah * 3) + 0];
	ma[2] = Normal[(blah * 3) + 0];
	ma[3] = Tangent[(blah * 3) + 1];
	ma[4] = Binormal[(blah * 3) + 1];
	ma[5] = Normal[(blah * 3) + 1];
	ma[6] = Tangent[(blah * 3) + 2];
	ma[7] = Binormal[(blah * 3) + 2];
	ma[8] = Normal[(blah * 3) + 2];

	// transform the light vector into tangent space:
	// dot it with the tangent, binormal and normal in turn
	lightT[0] = (l[0]*ma[0]) + (l[1]*ma[3]) + (l[2]*ma[6]);
	lightT[1] = (l[0]*ma[1]) + (l[1]*ma[4]) + (l[2]*ma[7]);
	lightT[2] = (l[0]*ma[2]) + (l[1]*ma[5]) + (l[2]*ma[8]);

	lightT.normalize();

	// transform the half-angle vector into tangent space the same way
	// (note: this must use eyeT for all three components, not eye)
	h[0] = (eyeT[0]*ma[0]) + (eyeT[1]*ma[3]) + (eyeT[2]*ma[6]);
	h[1] = (eyeT[0]*ma[1]) + (eyeT[1]*ma[4]) + (eyeT[2]*ma[7]);
	h[2] = (eyeT[0]*ma[2]) + (eyeT[1]*ma[5]) + (eyeT[2]*ma[8]);

	h.normalize();

	// scale and bias both vectors from -1..1 into 0..1
	h *= .5f;
	h += .5f;

	lightT *= .5f;
	lightT += .5f;

	glMultiTexCoord2fARB(GL_TEXTURE0_ARB, s, t);
	glMultiTexCoord2fARB(GL_TEXTURE1_ARB, s, t);

	glColor3fv(&lightT[0]);        // put the light vector into the primary color
	glSecondaryColor3fvEXT(&h[0]); // put the h vector into the secondary color
	// glNormal3f(Normal[(blah * 3) + 0], Normal[(blah * 3) + 1], Normal[(blah * 3) + 2]);
	glVertex3f(Vertex[(blah * 3) + 0], Vertex[(blah * 3) + 1], Vertex[(blah * 3) + 2]);
}
glEnd();

Done!

[This message has been edited by robert (edited 03-11-2002).]

oops i forgot this

TEXTUREMAN.ExecuteNVParse(
	"!!RC1.0\n"
	"const0 = (.5, .5, .5, 1);\n"
	"{\n"
	"  rgb {\n"
	"    spare0 = expand(col0) . expand(tex1);\n"
	"    spare1 = expand(col1) . expand(tex1);\n"
	"  }\n"
	"}\n"
	"{\n"
	"  rgb {\n"
	"    spare1 = unsigned(spare1) * unsigned(spare1);\n"
	"  }\n"
	"}\n"
	"{\n"
	"  rgb {\n"
	"    spare1 = spare1 * spare1;\n"
	"  }\n"
	"}\n"
	"final_product = const0 * spare1;\n"
	"out.rgb = spare0 * tex0 + final_product;\n"
	"out.a = unsigned_invert(zero);\n"
);

[This message has been edited by robert (edited 03-11-2002).]

// convert light to object space by multiplying by inverse modelview matrix
li[0]=(light[0]*temp[0])+(light[1]*temp[4])+(light[2]*temp[8]);
li[1]=(light[0]*temp[1])+(light[1]*temp[5])+(light[2]*temp[9]);
li[2]=(light[0]*temp[2])+(light[1]*temp[6])+(light[2]*temp[10]);

// convert eye to object space by multiplying by inverse modelview matrix
e[0]=(eye[0]*temp[0])+(eye[1]*temp[4])+(eye[2]*temp[8]);
e[1]=(eye[0]*temp[1])+(eye[1]*temp[5])+(eye[2]*temp[9]);
e[2]=(eye[0]*temp[2])+(eye[1]*temp[6])+(eye[2]*temp[10]);

You have to do a complete matrix multiply, not only a multiply with the 3x3 part.

li[0]=(light[0]*temp[0])+(light[1]*temp[4])+(light[2]*temp[8]) + temp[12];
li[1]=(light[0]*temp[1])+(light[1]*temp[5])+(light[2]*temp[9]) + temp[13];
li[2]=(light[0]*temp[2])+(light[1]*temp[6])+(light[2]*temp[10]) + temp[14];

For the camera position, too.

I'm not sure if you are correctly building your tangent matrix, have a look at: http://www.opengl.org/developers/code/sig99/advanced99/notes/node140.html
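The construction on that page boils down to something like this per triangle (a sketch, not the code from the link):

// sketch: per-triangle tangent and binormal from positions p0..p2 and
// texcoords uv0..uv2 (normalize and average per vertex as needed).
void triangleTangentBasis(const float p0[3], const float p1[3], const float p2[3],
                          const float uv0[2], const float uv1[2], const float uv2[2],
                          float T[3], float B[3])
{
    float e1[3] = { p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2] };
    float e2[3] = { p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2] };

    float du1 = uv1[0] - uv0[0], dv1 = uv1[1] - uv0[1];
    float du2 = uv2[0] - uv0[0], dv2 = uv2[1] - uv0[1];

    float r = 1.0f / (du1 * dv2 - du2 * dv1); // assumes non-degenerate texcoords

    for (int i = 0; i < 3; ++i) {
        T[i] = (dv2 * e1[i] - dv1 * e2[i]) * r; // tangent follows the s axis
        B[i] = (du1 * e2[i] - du2 * e1[i]) * r; // binormal follows the t axis
    }
}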

Diapolo

[This message has been edited by Diapolo (edited 03-11-2002).]

That could be why… thanks, i will try it now, and clean up my code.

hmm, well i've fixed it to use a proper tangent matrix, but it is still doing it… this might sound like a silly question… BUT (i am using gluLookAt) should i get the modelview matrix before or after i set my 'camera'?

In the demo “Practical and Robust Bump Mapping” from nVidia the modeling transform is used (not the viewing part!!).
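in other words: grab the matrix while only the model transform is on the stack, roughly like this (a sketch; objX/objY/objZ/objAngle stand in for your own transform):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glTranslatef(objX, objY, objZ);         // model transform only, no gluLookAt
glRotatef(objAngle, 0.0f, 1.0f, 0.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, temp); // this is the matrix to use for the light
glPopMatrix();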

hope this helps

dUckmAnn

First you do your translations and rotations, and AFTER that you save the modelview matrix in order to process data with it.
Are you sure your current matrix is the modelview matrix?

Diapolo