normal mapping in whatever space

OK, I’ve searched the internet far and wide, but I am still confused about tangent space normal mapping. The lighting itself seems correct (as in, objects are lit appropriately), but they are flat (as in, not bump mapped). Here is some pseudo code for what I’m currently doing.

-update the camera with gluLookAt
-send the light position to the shader (I’m using NVIDIA’s Cg)
-render the objects

And in the shader I’m doing the following.

-subtract the pixel position from the light position (note that the light is in whatever space it is originally in)
-then mul that vector by the tangent-space matrix built from T, B, and N
-and proceed with diffuse lighting as normally

Does anybody have any clue what I’m doing wrong? I’m positive T, B, and N are computed correctly, as is my normal map. Thank you in advance for any help.

When you say “mul”, don’t you really mean “dot”?

mul is the function used in NVIDIA’s Cg language. I’m not using register combiners, if that is what you’re thinking.

Hmm, the normal mapping I implemented was in object space, which saved me quite a few headaches concerning tangent space :)
The only thing that has to be done to work in object space is transforming the halfway vector and the light direction into it.
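Roughly, in a Cg vertex shader it looks like this (just a sketch; modelInverse and worldLightPos are names I made up, your application has to supply them):

// Sketch: object-space normal mapping setup (Cg vertex shader).
// modelInverse and worldLightPos are made-up uniform names; the app
// must supply the inverse of the model matrix and the light position.
void main(float4 position    : POSITION,
          out float4 outPos  : POSITION,
          out float3 outLVec : TEXCOORD1,
          uniform float4x4 modelViewProj,
          uniform float4x4 modelInverse,   // world -> object space
          uniform float4   worldLightPos)
{
    outPos = mul(modelViewProj, position);

    // Bring the light into object space, then build the light vector
    // there. The halfway vector gets the exact same treatment.
    float4 objLightPos = mul(modelInverse, worldLightPos);
    outLVec = objLightPos.xyz - position.xyz;
}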

By the way, what do you mean by looking flat? If the model seems correctly lit, but only flat, is there a possibility that you aren’t using the normals stored in your normal map to compute the lighting?

Is that all you are doing? What do you mean by “proceed with diffuse lighting as normally”?

After you have transformed your light vector into tangent space, you still need to normalize it, do a texture lookup into your normal map (with a scale and bias if you’re storing the normals as RGB), then perform a dot product between this normal and the normalized light vector.
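In Cg those steps come down to something like this (a sketch; tangentLightVec, normalMap and uv are illustrative names for the interpolated light vector, the sampler and the texture coordinate):

// Sketch of the fragment-side steps above (Cg).
float3 L = normalize(tangentLightVec);      // re-normalize after interpolation

// Scale and bias: normals in [-1,1] are stored as RGB in [0,1].
float3 N = tex2D(normalMap, uv).rgb * 2 - 1;

float diffuseFactor = max(dot(N, L), 0);    // standard diffuse term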

Y.

Ysaneya, when I say to “proceed with diffuse lighting as normally” I actually mean everything you just said. Sorry for the lack of clarification.

Chuck0, let me clarify the picture. The lighting looks correct, but the textures still appear flat. Well, not exactly unchanged, just wrong: instead of black shadows and white highlights faking the bumps, there is nothing but black shadows. For example, a brick wall texture has each brick outlined in a thin black line. There is one place in the scene, a curved corridor ceiling, that looks to be bump mapped correctly, yet it is the one and only thing bump mapped correctly.

And also, Chuck0: is my light vector in object space? How do I get it into object space if it isn’t? Thank you guys, and sorry for the lack of clarification.

Originally posted by LiquidFlare:
mul is the function used in NVIDIA’s Cg language. I’m not using register combiners, if that is what you’re thinking.
No, you misunderstand. mul is a per-element multiply, while dot is a dot product, which is the sum of all the per-element multiplies, so to speak.
To clarify:

vector = mul(a, b); // vector.x = a.x*b.x;  vector.y = a.y*b.y;  vector.z = a.z*b.z;
scalar = dot(a, b); // scalar = a.x*b.x + a.y*b.y + a.z*b.z;

The dot product is not just some jargon keyword from register combiners; it’s a basic linear algebra operation, fundamental to graphics.

Knackered, I already knew that. I’m a CS major at Georgia Tech, so I know my math (at least to that extent). It was just the way he phrased it, plus the fact that most of the advanced lighting I see on the internet is done with register combiners, that made me say that. Thanks though.

It sounds to me like the problem is in your first step: “-subtract the pixel position from the light position (note that the light is in whatever space it is originally in)”

The T, B, and N vectors will transform the light vector from model space to tangent space, but you have to put the light vector into model space first with something like this (sorry, I’m an arb_vp guy, so I don’t have a Cg example):

# model space vector pointing to light0

PARAM mvi[4] = { state.matrix.modelview.inverse };
# GL stores the light position in eye space, so the inverse
# modelview takes it back to model space
DP4 light0pos.x, mvi[0], state.light[0].position;
DP4 light0pos.y, mvi[1], state.light[0].position;
DP4 light0pos.z, mvi[2], state.light[0].position;
# model-space vector from the vertex to the light
SUB light0vec, light0pos, vertex.position;

then transform it into tangent space:

DP3 result.texcoord[1].x, light0vec, tangent;
DP3 result.texcoord[1].y, light0vec, binormal;
DP3 result.texcoord[1].z, light0vec, normal;

(I assume you’re using a point light source since you’re doing the subtract operation.)
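Going from the ARB code, a Cg version should be roughly this (untested, since I don’t actually write Cg):

// Untested Cg sketch of the ARB program above.
void main(float4 position    : POSITION,
          float3 tangent     : TEXCOORD2,
          float3 binormal    : TEXCOORD3,
          float3 normal      : TEXCOORD4,
          out float4 outPos  : POSITION,
          out float3 outLVec : TEXCOORD1)
{
    outPos = mul(glstate.matrix.mvp, position);

    // GL keeps the light position in eye space, so the inverse
    // modelview brings it back to model space (mvi in the ARB code).
    float4 lightPos = mul(glstate.matrix.inverse.modelview[0],
                          glstate.light[0].position);
    float3 lightVec = lightPos.xyz - position.xyz;

    // Then rotate the model-space light vector into tangent space.
    outLVec.x = dot(lightVec, tangent);
    outLVec.y = dot(lightVec, binormal);
    outLVec.z = dot(lightVec, normal);
}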

mogumbo, I tried to “mul” the inverse modelview matrix with the light position, but that has not worked. The light looks fine far away, but when I get close, the wall suddenly gets dark very quickly. Basically, the lighting changes depending on the camera’s position. I think it might have something to do with gluLookAt, but I’m not sure. Thanks for your help though.

Hmmm. I don’t have any other ideas then. Would it help to post a screenshot?

I’m fairly certain you’re dealing with a coordinate space issue.

The model matrix (normally combined with the view matrix into the modelview matrix) takes coordinates from object space (what your models are specified in) to world space (i.e., translating them somewhere, rotating them, scaling them, etc.).

The TBN matrix (or set of vectors; they can be thought of either way) takes coordinates from object space and puts them into tangent space.

Your light vector is in world space. You need to multiply it by the inverse model matrix in order to move it into object space.

Once it’s in object space, you do the following in a vertex shader (or in straight C on the CPU instead of the GPU):

temp = objectSpaceLightPos - vertexPos;  // light vector, now fully in object space
texCoord.s = dot (sTangent, temp);       // rotate it into tangent space
texCoord.t = dot (tTangent, temp);
texCoord.r = dot (normal, temp);

The tex coords (typically, unless you can’t spare the texture unit) are used to access texels from a normalization cube map, which are then dotted with your normal map.
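On the fragment side that combination is basically this (written as Cg-style code for consistency with the thread, though on older hardware it would be texture combiners doing the same thing; normCubeMap and bumpMap are illustrative names):

// Sketch: cube-map normalization + DOT3 (Cg-style, illustrative names).
// The cube map stores normalize(direction) packed into [0,1] RGB.
float3 L = texCUBE(normCubeMap, texCoord.xyz).rgb * 2 - 1;

// Tangent-space normal from the normal map, same unpacking.
float3 N = tex2D(bumpMap, uv).rgb * 2 - 1;

float diffuseFactor = max(dot(N, L), 0);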

Hope that helps!

I don’t have anywhere to host a picture right now, but I’ll see if I can get one up. To be more specific: from a distance the wall is lit correctly (though without the correct bump map effect; my third post’s explanation still applies). However, when I get close to the wall, it darkens almost completely, except for one detail I forgot to mention in my previous post: there it is bump mapped correctly. It just is not lit correctly (the bump mapping is hard to see, but it is there). I’ll try to get a screenshot if that explanation won’t cut it. Thanks for your time though.

How would I get the inverse model matrix? I know Cg has a glstate entry for the inverse modelview matrix, but not one for the model matrix.

I calculate it by hand (I’m not using vertex shaders for my implementation), using a Matrix class I wrote.

You simply need to “undo” the operations done to the model matrix - that is, all the stuff after your camera positioning code but before you draw the model (things like glTranslatef to move the model into position, and glRotatef to rotate it, etc).

If you push the modelview matrix, load the identity, and then do the opposite of what you did to get the model into position, you will have the inverse model matrix you need, which can be retrieved with glGetFloatv or however.
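For example (a sketch of the idea; x, y, z and angle are made-up values standing in for whatever transforms you actually used):

/* Sketch: recover the inverse model matrix by undoing the model
   transforms in reverse order. Values here are made up. */
GLfloat x = 1.0f, y = 0.0f, z = -2.0f, angle = 30.0f;
GLfloat invModel[16];

glPushMatrix();
glLoadIdentity();
/* Model was placed with: glTranslatef(x, y, z); glRotatef(angle, 0, 1, 0); */
glRotatef(-angle, 0.0f, 1.0f, 0.0f);   /* inverse ops, reverse order */
glTranslatef(-x, -y, -z);
glGetFloatv(GL_MODELVIEW_MATRIX, invModel);
glPopMatrix();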

Thanks, I’ll try to figure out how to get the model matrix when using gluLookAt. Thank you for all of the help so far.

This is all sounding too complicated. If you are setting the view matrix with gluLookAt before setting the light positions, then your light positions get transformed into view space (which is the correct space to do lighting in). Why do you need the model matrix at all if your lights are in view space?
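If you stayed in view space, the vertex side would be roughly this (a sketch; it assumes uniform scale, since it reuses the modelview’s upper 3x3 to rotate the basis vectors):

// Sketch: keep everything in view space instead (Cg vertex shader).
void main(float4 position    : POSITION,
          float3 tangent     : TEXCOORD2,
          float3 binormal    : TEXCOORD3,
          float3 normal      : TEXCOORD4,
          out float4 outPos  : POSITION,
          out float3 outLVec : TEXCOORD1)
{
    outPos = mul(glstate.matrix.mvp, position);

    // The GL light position is already in view space, because GL
    // multiplied it by the modelview (gluLookAt) when it was set.
    float4 viewVertex = mul(glstate.matrix.modelview[0], position);
    float3 lightVec = glstate.light[0].position.xyz - viewVertex.xyz;

    // Rotate the basis into view space so all vectors share one space
    // (this is where the uniform-scale assumption comes in).
    float3x3 mv3 = (float3x3)glstate.matrix.modelview[0];
    float3 T = mul(mv3, tangent);
    float3 B = mul(mv3, binormal);
    float3 N = mul(mv3, normal);

    outLVec = float3(dot(lightVec, T), dot(lightVec, B), dot(lightVec, N));
}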

I couldn’t agree more with it being too complicated. I’ve been lost in this for one week now.

Also, the light position never changes; it is the same value every frame. So, as long as it is stationary, it doesn’t matter when it is sent to the Cg fragment shader (I pass it in as a uniform parameter). Heck, it could even be hard coded in.
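For reference, I set it from the application with the Cg runtime, something like this (the parameter name matches the shader below):

#include <Cg/cg.h>
#include <Cg/cgGL.h>

/* Sketch: how the light position reaches the fragment shader. */
CGparameter lightPos = cgGetNamedParameter(fragmentProgram, "light.position");
cgGLSetParameter4f(lightPos, 0.0f, 100.0f, -10.0f, 1.0f);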

OK, here is the fragment shader, if anyone here understands Cg. The light is a point light. The actual light position being passed in is (0, 100, -10, 1); this is the value I would normally set with glLightfv if I weren’t using Cg shaders.

struct Light {
    float4 position;
    float3 ambient;
    float3 diffuse;
    float3 specular;
    float quadratic;
};

void main(float2 inUV       : TEXCOORD0,
          float4 inPosition : TEXCOORD1,
          float3 inT        : TEXCOORD2,
          float3 inB        : TEXCOORD3,
          float3 inN        : TEXCOORD4,

          sampler2D inTexture : TEXUNIT0,
          sampler2D inGloss   : TEXUNIT1,
          sampler2D inBump    : TEXUNIT2,

          out float4 outColor : COLOR,

          uniform Light light)
{
    float d = distance(inPosition.xyz, light.position.xyz);
    float attenuation = 1 / (light.quadratic * d * d);

    float3 N = tex2D(inBump, inUV).rgb;
    N = (N - 0.5f) * 2;

    float3x3 rotation = float3x3(inT, inB, inN);
    float3 L = normalize(light.position.xyz - inPosition.xyz);
    L = mul(rotation, L);

    float diffuseFactor = max(dot(N, L), 0);
    float3 diffuse = light.diffuse * diffuseFactor * attenuation;

    float4 lightColor;
    lightColor.rgb = light.ambient + diffuse;
    lightColor.a = 1;
    outColor = lightColor * tex2D(inTexture, inUV);
}

That looks alright to me, as long as inPosition and light.position are both in model space. Are you sure neither of those are in view space or world space?