Bump Mapping (Tangent Space) calculation problems

Hi,

After reading many papers and several people’s source code, I decided to take a shot at a simple bump mapping program. All I am attempting is to light a single triangle with a base map and a normal map (no specular, attenuation, etc.).

However, I haven’t been successful; after about 5 hours I’ve concluded I’m just not very good at maths.

I’m pretty sure the problem lies either in the calculation of tangent space itself, or how I am transforming the lighting through spaces (because the triangle is lit, but from all the wrong angles).

What I am doing is calculating tangent space once at each vertex, then transforming the light into tangent space (again, once for each vertex).
Each of these stages is its own function; I have put the code up (after a hasty geocities registration) here:
http://www.geocities.com/sdtmezz/
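
In case the link dies, the tangent space calculation is essentially the following - a rough sketch of the approach with made-up Vec3/Vec2 helper types, not a verbatim copy of my code:

#include <math.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { float u, v; } Vec2;

static Vec3 vsub(Vec3 a, Vec3 b)    { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3 vscale(Vec3 a, float s) { Vec3 r = { a.x*s, a.y*s, a.z*s }; return r; }
static Vec3 vcross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static Vec3 vnorm(Vec3 a)
{
    float len = sqrtf(a.x*a.x + a.y*a.y + a.z*a.z);
    return vscale(a, 1.0f / len);
}

/* Build T, B, N for one triangle from its positions and texture coords,
   by solving the two UV edge equations for the tangent and binormal. */
void CalcTangentSpace(const Vec3 p[3], const Vec2 t[3],
                      Vec3 *tangent, Vec3 *binormal, Vec3 *normal)
{
    Vec3 e1 = vsub(p[1], p[0]);                          /* edges in object space */
    Vec3 e2 = vsub(p[2], p[0]);
    float du1 = t[1].u - t[0].u, dv1 = t[1].v - t[0].v;  /* edges in UV space */
    float du2 = t[2].u - t[0].u, dv2 = t[2].v - t[0].v;
    float r = 1.0f / (du1*dv2 - du2*dv1);                /* degenerate UVs would divide by zero */

    *tangent  = vnorm(vscale(vsub(vscale(e1, dv2), vscale(e2, dv1)), r)); /* direction of increasing u */
    *binormal = vnorm(vscale(vsub(vscale(e2, du1), vscale(e1, du2)), r)); /* direction of increasing v */
    *normal   = vnorm(vcross(e1, e2));
}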

Any help is much appreciated, as I am lost as to what is going on. Please ask if anything in the code is unclear.

-Mezz

After stepping through the code a few times, I see that I’m getting absurdly large values during the light transform - the light’s position (in world space) is as follows:

x = 3
y = 0
z = 6

However, the tangent space light vector ends up with z values as high as 756. A bit silly, but I can’t quite figure out why yet…

-Mezz

I think I see a problem with the transform to tangent space. What you need to do to transform to TS is this:

TSLight.x = dot( ObjSpaceLight, Tangent );
TSLight.y = dot( ObjSpaceLight, Binormal );
TSLight.z = dot( ObjSpaceLight, Normal );

Wow, that kinda looks like my Cg code!

Anyway, I think that should do it.

-SirKnight

Oh and another thing. Make sure Tangent, Binormal, and Normal are all normalized before you use them to convert a vector to tangent space.
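
In other words, something like this (a sketch; normalize() and dot() stand in for whatever vector helpers you have):

Vec3 T = normalize( Tangent );
Vec3 B = normalize( Binormal );
Vec3 N = normalize( Normal );

TSLight.x = dot( ObjSpaceLight, T );
TSLight.y = dot( ObjSpaceLight, B );
TSLight.z = dot( ObjSpaceLight, N );

Those three dot products are just a multiply by the transpose of the TBN matrix, and the transpose is only the same as the inverse when the basis vectors are unit length (and orthogonal), which is why the normalization matters.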

-SirKnight

OK, I think I get that - what I’m having problems with is this:

Do I compute the vector between the surface vertex and the light position:

a) before I transform the light position into object space?

b) after I transform the light position into object space?

OK SirKnight, now I see what’s going on with your example and I’ve changed my code to reflect that.

I also found out that the green channel of the normal map was the wrong way round, so I’ve changed that now.
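
(In case it helps anyone else: the fix is just inverting that channel at load time, roughly like this, with texels being whatever RGB8 buffer the map was read into:)

for (int i = 0; i < width * height; i++)
    texels[i*3 + 1] = 255 - texels[i*3 + 1];   /* invert green */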

Basically it sort of looks alright now, but I have a niggling feeling I’m doing something wrong as the way the shadows follow the light doesn’t look that great…

I have uploaded a small .zip to the geocities page (it includes .exe & textures) and would like to know if it looks right/wrong to anybody else.

Apologies to ATi as it is their textures I have ripped off.

I’m using unextended GL 1.3 to do everything, so OpenGL 1.3 drivers are required, I guess.
Hopefully people can tell me if the result looks about right for what I am doing (no cubemap normaliser or anything), or if there is some other issue that needs addressing.

Once again, thanks SirKnight.

-Mezz

Incidentally, this only works if I don’t take the #ifdef LIGHT_TO_OBJECTSPACE path; if I transform the light position into object space using the inverse modelview, it doesn’t work anymore.

This aggravates me.

Perhaps there is something wrong with the matrix multiplication in my CalculateTangentSpace() routine, or is there something about this case that means I don’t need to transform into object space?

Any help is appreciated

-Mezz

You don’t have to transform to object space.
Instead, update the tangent-space vectors each frame as the view changes.

Yes, you need to transform to object space first; that’s the price for animated objects. I wouldn’t transform the tangent-space vectors to eye space as the previous post suggested. That would be much more expensive for anything but the simplest geometry, though it depends on whether you do this on the host or implement it in your vertex program.

The whole fragment lighting thing is an exercise in lateral thinking: you take the light vectors from eye space to tangent space to avoid transforming the per-fragment normal vectors to eye space. The underlying principle is this transformation to tangent space, so it shouldn’t surprise you that you have to go through the inverse modelview first.
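
Concretely, if the modelview is just a rotation plus a translation, the inverse transform is cheap. A sketch, using the placeholder Vec3 type from earlier, with m column-major as returned by glGetFloatv(GL_MODELVIEW_MATRIX, m):

/* Take an eye-space point back into object space when the modelview
   is rigid (rotation + translation only). */
void ToObjectSpace(const float m[16], Vec3 in, Vec3 *out)
{
    /* undo the translation (column 3 of the matrix) */
    float x = in.x - m[12];
    float y = in.y - m[13];
    float z = in.z - m[14];
    /* multiply by the transpose of the upper-left 3x3 rotation,
       which is its inverse for a pure rotation */
    out->x = m[0]*x + m[1]*y + m[2]*z;
    out->y = m[4]*x + m[5]*y + m[6]*z;
    out->z = m[8]*x + m[9]*y + m[10]*z;
}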

Yes, as dorbie said, you DO need to transform your vectors (light vectors) to object space first. The reason is that the tangent, binormal, and normal vectors which make up the tangent-space matrix are in object space themselves. It wouldn’t make sense to multiply an eye-space light vector with an object-space tangent vector and such.

I’ll need to check your demo in a sec to see if I can spot anything that causes your program to not work right when doing the eye-to-object-space transform. That doesn’t make any sense.

As for your other question: you need to transform the light position to object space first, then compute the surface-to-light vector. This is another example of needing vectors to be in the same space before you do any math ops on them.
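
In code terms it’s roughly this, reusing the helpers sketched earlier in the thread (the names are placeholders):

/* Light position to object space first, then the per-vertex
   surface-to-light vector, then into tangent space. */
Vec3 objLight, toLight, tsLight;
ToObjectSpace(modelview, worldLightPos, &objLight);
toLight = vnorm(vsub(objLight, vertexPos));   /* surface-to-light */
tsLight.x = dot(toLight, tangent);
tsLight.y = dot(toLight, binormal);
tsLight.z = dot(toLight, normal);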

-SirKnight

To me the demo looks fine. Have you tried putting a self-shadow component into your lighting model? Maybe that will make the shadows more convincing to you. Also, I would suggest adding the normalization cubemap; it looks fine without it now, but later, when you use more complex geometry than a single quad, you will need it.
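
(The usual self-shadow term is just a clamped multiple of the tangent-space light z; a sketch, with litColor and the helpers being placeholders:)

/* tsLight.z is the light direction dotted with the geometric normal,
   so clamping a multiple of it stops bumps from being lit when the
   underlying surface faces away from the light. */
float selfShadow = 4.0f * tsLight.z;   /* 4 gives a reasonably steep falloff */
if (selfShadow < 0.0f) selfShadow = 0.0f;
if (selfShadow > 1.0f) selfShadow = 1.0f;
litColor = vscale(litColor, selfShadow);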

-SirKnight

Oh also, have you tried using nvidia’s function to form the tangent space vectors? It’s in nv_algebra.cpp. I used that function against my own to make sure mine was correct. Maybe try that and see what happens.

-SirKnight

Thanks for all the replies.

I think that in the current case of my demo there is no need to transform the light vectors into object space, since there effectively is no object space: everything is in world space, and the only transform I ever do is moving the view position 10 units down the z-axis. I also move the light, but I update its coordinates directly; it’s not done with a glTranslate() call.

I can understand that if I’d done something like:

glPushMatrix();
glTranslatef(someobject.pos.x, someobject.pos.y, someobject.pos.z);
DrawObject(someobject);
glPopMatrix();

Then in the DrawObject() routine, the calculation of light vectors would need to account for the translation (by multiplying by the inverse modelview).
Does that sound correct, or have I missed something?
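
(If I’ve got the maths right, for a pure translation like that the inverse-modelview multiply collapses to a subtraction:)

/* Inverse of a pure translation is just subtracting it. */
Vec3 objLight = vsub(worldLightPos, someobject.pos);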

The reason I say the above is that the demo (as SirKnight says) looks fine. But this is without the world->object space transform, which bothers me.

SirKnight:
I don’t really know how to do self-shadowing.
I’ll maybe put a normalisation cubemap in once I understand the theory behind cubemapping. Part of the reason I never wrote a bump demo (before now) is because I didn’t think I understood the theory. Turns out I don’t anyway.

-Mezz