normal mapping in whatever space



LiquidFlare
07-22-2004, 06:12 PM
OK, I've searched the internet far and wide, but I am still confused about tangent space normal mapping. The lighting itself seems correct (objects are lit appropriately), but surfaces look flat (not bump mapped). Here is some pseudocode for what I'm currently doing.

- update the camera with gluLookAt
- send the light position to the shader (I'm using NVIDIA's Cg)
- render the objects

And in the shader I'm doing the following.

- subtract the pixel position from the light position (note that the light is in whatever space it was originally specified in)
- then mul that vector by the tangent matrix built from T, B, and N
- and proceed with diffuse lighting as normal

Does anybody have any clue what I'm doing wrong? I'm positive T, B, and N are computed correctly, as is my normal map. Thank you in advance for any help.

Humus
07-22-2004, 07:32 PM
When you say "mul", don't you really mean "dot"?

LiquidFlare
07-22-2004, 08:54 PM
mul is a function in NVIDIA's Cg language. I'm not using register combiners, if that is what you're thinking.

Chuck0
07-22-2004, 10:08 PM
Hmm, the normal mapping I implemented was in object space, which saved me quite a few headaches concerning tangent space :) .
The only thing that has to be done to work in object space is transforming the halfway vector and the light direction into it.
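
A minimal Cg sketch of that object-space variant (all names here are hypothetical): the map stores object-space normals, so no tangent basis is needed, and only the light has to be brought into object space, e.g. with the inverse model matrix.

float3 N = 2 * tex2D(objectSpaceNormalMap, uv).rgb - 1; // unpack [0,1] to [-1,1]
float3 L = normalize(lightPosObject.xyz - positionObject.xyz);
float diffuse = max(dot(N, L), 0);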

By the way, what do you mean by looking flat? If the model seems correctly lit, but flat, is there a possibility that you aren't using the normals stored in your normal map to compute the lighting?

Ysaneya
07-22-2004, 11:22 PM
Is that all you are doing? What do you mean by "proceed with diffuse lighting as normally"?

After you have transformed your light vector into tangent space, you still need to normalize it, do a texture lookup into your normal map (with a scale and bias if you're storing the normals as RGB), then take the dot product of that normal and the normalized light vector.
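
In Cg, those steps come out to roughly this (a sketch; the variable names are hypothetical):

float3 L = normalize(lightVecTan);           // tangent-space light vector from the vertex shader
float3 N = 2 * tex2D(normalMap, uv).rgb - 1; // scale and bias the [0,1] RGB back to [-1,1]
float diffuse = max(dot(N, L), 0);           // diffuse (Lambert) term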

Y.

LiquidFlare
07-23-2004, 02:14 PM
Ysaneya, when I say to "proceed with diffuse lighting as normally" I actually mean everything you just said. Sorry for the lack of clarification.

Chuck0, let me clarify the picture. The lighting looks correct, but the textures still appear flat. They do look different from plain texturing, just wrong: instead of black shadows and white highlights faking the bumps, there is nothing but black shadows. For example, each brick in a brick wall texture has a thin black outline. There is one place in the scene, a curved corridor ceiling, that looks to be bump mapped correctly, yet it is the one and only thing that is.

And also, Chuck0, is my light vector in object space? How do I get it into object space if it isn't? Thank you guys, and sorry for the lack of clarification.

knackered
07-23-2004, 03:49 PM
Originally posted by LiquidFlare:
mul is a function in NVIDIA's Cg language. I'm not using register combiners, if that is what you're thinking.

No, you misunderstand. mul is a per-element multiply, while dot is a dot product, which is the sum of all the per-element multiplies, so to speak.
To clarify:

vector = mul(a,b); // v.x=a.x*b.x; v.y=a.y*b.y; v.z=a.z*b.z;
scalar = dot(a,b); // scalar=a.x*b.x + a.y*b.y + a.z*b.z;

The dot product is not just some jargon keyword from register combiners; it's a basic linear algebra operation, fundamental to graphics.

LiquidFlare
07-23-2004, 03:53 PM
Knackered, I already knew that. I'm a CS major at Georgia Tech, so I know my math (at least to that extent). It was just the way he phrased it, plus the fact that most of the advanced lighting I see on the internet is done with register combiners, that made me say that. Thanks though.

mogumbo
07-23-2004, 06:04 PM
It sounds to me like the problem is in your first step: "-subtract the pixel position from the light position (note that the light is in whatever space it is originally in)"

The T, B, and N vectors will transform the light vector from model space to tangent space, but you have to put the light vector into model space first with something like this (sorry, I'm an arb_vp guy, so I don't have a Cg example):

# model space vector pointing to light0
PARAM mvi[4] = {state.matrix.modelview.inverse};
DP4 light0pos.x, mvi[0], state.light[0].position;
DP4 light0pos.y, mvi[1], state.light[0].position;
DP4 light0pos.z, mvi[2], state.light[0].position;
SUB light0vec, light0pos, vertex.position;

# then transform it into tangent space:
DP3 result.texcoord[1].x, light0vec, tangent;
DP3 result.texcoord[1].y, light0vec, binormal;
DP3 result.texcoord[1].z, light0vec, normal;

(I assume you're using a point light source since you're doing the subtract operation.)
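
For readers following along in Cg, here is a rough equivalent of the above vertex program (a sketch; every name is an assumption, and the inverse modelview would be supplied by the application, e.g. via cgGLSetStateMatrixParameter with CG_GL_MODELVIEW_MATRIX and CG_GL_MATRIX_INVERSE):

// model-space vector pointing to light0 (the light position arrives in eye space)
float4 lightPosModel = mul(modelViewInverse, lightPosEye);
float3 lightVec = lightPosModel.xyz - position.xyz;

// then transform it into tangent space
float3 lightVecTan;
lightVecTan.x = dot(lightVec, tangent);
lightVecTan.y = dot(lightVec, binormal);
lightVecTan.z = dot(lightVec, normal);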

LiquidFlare
07-23-2004, 06:49 PM
mogumbo, I tried to "mul" the inverse modelview matrix with the light position, but that has not worked. The light looks fine from far away, but when I get close, the wall suddenly gets dark very quickly. Basically, the lighting changes depending on the camera's position. I think it might have something to do with gluLookAt, but I'm not sure. Thanks for your help though.

mogumbo
07-23-2004, 06:54 PM
Hmmm. I don't have any other ideas then. Would it help to post a screenshot?

PfhorSlayer
07-23-2004, 07:12 PM
I'm fairly certain you're dealing with a coordinate space issue.

The model matrix (normally combined with the view matrix into the modelview matrix) takes coordinates from object space (what your models are specified in) to world space (i.e., translating, rotating, and scaling them).

The TBN matrix (or set of vectors; they can be thought of either way) takes coordinates from object space and puts them into tangent space.

Your light vector is in world space. You need to multiply it by the inverse model matrix in order to move it into object space.

Once it's in object space, you do the following in a vertex shader (or in straight C and on the CPU instead of the GPU):

temp = objectSpaceLightPos - vertexPos;
texCoord.s = dot (sTangent, temp);
texCoord.t = dot (tTangent, temp);
texCoord.r = dot (normal, temp);

The tex coords (typically, unless you can't spare the texture unit) are used to look up texels in a normalization cube map, which are then dotted with your normal map.

Hope that helps!

LiquidFlare
07-23-2004, 07:18 PM
I don't have anywhere to host the picture right now. I'll see if I can get some up, though. To be more specific: from a distance the wall is lit correctly (but without the correct bump map effect; my earlier explanation still applies). However, when I get close to the wall, it darkens almost completely, except for one detail I forgot to mention in my previous post: up close, it is bump mapped correctly. It just is not lit correctly (the bump mapping is hard to see, but it is there). I'll try to get a screenshot if that explanation won't cut it. Thanks for your time though.

LiquidFlare
07-23-2004, 08:15 PM
How would I get the inverse model matrix? I know Cg has a glstate binding for the inverse modelview matrix, but not for the model matrix alone.

PfhorSlayer
07-23-2004, 08:59 PM
I calculate it by hand (I'm not using vertex shaders for my implementation), using a Matrix class I wrote.

You simply need to "undo" the operations done to the model matrix - that is, all the stuff after your camera positioning code but before you draw the model (things like glTranslatef to move the model into position, and glRotatef to rotate it, etc).

If you push the modelview matrix, then do the opposite of what you did to get the model into position, you will have the inverse model matrix you need, which can be retrieved with glGetFloatv or however.
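
Concretely, something like this (a sketch; the glTranslatef/glRotatef calls are placeholders for whatever you actually apply to the model):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// opposite operations in reverse order: if the model was placed with
// glTranslatef(x, y, z) then glRotatef(angle, 0, 1, 0), undo the rotate first
glRotatef(-angle, 0.0f, 1.0f, 0.0f);
glTranslatef(-x, -y, -z);
GLfloat invModel[16];
glGetFloatv(GL_MODELVIEW_MATRIX, invModel); // column-major inverse model matrix
glPopMatrix();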

LiquidFlare
07-23-2004, 11:21 PM
Thanks, I'll try to figure out how to get the model matrix when using gluLookAt. Thank you for all of the help so far.

mogumbo
07-24-2004, 06:01 AM
This is all sounding too complicated. If you are setting the view matrix with gluLookAt before setting the light positions, then your light positions will be transformed into view space (which is the correct way to do lighting). Why do you need the model matrix alone if your lights are in view space?
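
To illustrate (a sketch): OpenGL transforms a light position by the modelview matrix that is current at the time of the glLightfv call, so a position set after gluLookAt is stored in view space.

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, 0.0, 1.0, 0.0); // view matrix
GLfloat lightPos[4] = { 0.0f, 100.0f, -10.0f, 1.0f }; // the thread's light position
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);          // stored as modelview * lightPos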

LiquidFlare
07-24-2004, 08:25 AM
I couldn't agree more that it's too complicated. I've been lost in this for a week now.

Also, the light position never changes; it is the same value every time. So, as long as it is stationary, it doesn't matter when it is sent to the Cg fragment shader (I pass it in as a uniform parameter). Heck, it could even be hard-coded.

LiquidFlare
07-24-2004, 08:39 AM
OK, here is the fragment shader, for anyone here who understands Cg. The light is a point light. The actual light position being passed in is (0, 100, -10, 1). This is the value I would normally set with glLightfv if I weren't using Cg shaders.

struct Light {
    float4 position;
    float3 ambient;
    float3 diffuse;
    float3 specular;
    float quadratic;
};

void main(float2 inUV : TEXCOORD0,
          float4 inPosition : TEXCOORD1,
          float3 inT : TEXCOORD2,
          float3 inB : TEXCOORD3,
          float3 inN : TEXCOORD4,

          sampler2D inTexture : TEXUNIT0,
          sampler2D inGloss : TEXUNIT1,
          sampler2D inBump : TEXUNIT2,

          out float4 outColor : COLOR,

          uniform Light light)
{
    float d = distance(inPosition.xyz, light.position.xyz);
    float attenuation = 1 / (light.quadratic * d * d);

    float3 N = tex2D(inBump, inUV).rgb;
    N = (N - 0.5f) * 2;

    float3x3 rotation = float3x3(inT, inB, inN);
    float3 L = normalize(light.position.xyz - inPosition.xyz);
    L = mul(rotation, L);

    float diffuseFactor = max(dot(N, L), 0);
    float3 diffuse = light.diffuse * diffuseFactor * attenuation;

    float4 lightColor;
    lightColor.rgb = light.ambient + diffuse;
    lightColor.a = 1;
    outColor = lightColor * tex2D(inTexture, inUV);
}

mogumbo
07-24-2004, 11:17 AM
That looks alright to me, as long as inPosition and light.position are both in model space. Are you sure neither of those is in view space or world space?

LiquidFlare
07-24-2004, 11:59 AM
That's exactly what I'm unsure of. I honestly have no idea what space they are in. I think they are in object space, but I'm not at all sure. If they are in the correct space, making the shader code correct, then the only other possible error would be in how I compute the tangent and binormal, which I doubt. Here is the code I use to compute T and B from the N I received from my modeling program (Maya). The bottom section of the code sets up the vertex buffer objects, in case it looks strange to anyone. The parameters are each object's vertex and UV lists, the total object count, and the number of vertices in each object. Vertices are x, y, z and UVs are u, v.

void TangentMatrix::computeMatrix(float **vertices, float **uvs,
                                  int objCount, int *vertCount)
{
    float **tangents, **binormals;
    tangents = new float*[objCount];
    binormals = new float*[objCount];

    // compute the data
    for(int i = 0 ; i < objCount ; i++)
    {
        tangents[i] = new float[vertCount[i] * 3];
        binormals[i] = new float[vertCount[i] * 3];
        for(int j = 0 ; j < vertCount[i] * 3 ; j += 9) // one triangle per iteration
        {
            float deltaX2 = vertices[i][j + 3] - vertices[i][j];
            float deltaY2 = vertices[i][j + 4] - vertices[i][j + 1];
            float deltaZ2 = vertices[i][j + 5] - vertices[i][j + 2];
            float deltaX3 = vertices[i][j + 6] - vertices[i][j];
            float deltaY3 = vertices[i][j + 7] - vertices[i][j + 1];
            float deltaZ3 = vertices[i][j + 8] - vertices[i][j + 2];

            float deltaU2 = uvs[i][((j / 3) * 2) + 2] - uvs[i][(j / 3) * 2];
            float deltaV2 = uvs[i][((j / 3) * 2) + 3] - uvs[i][((j / 3) * 2) + 1];
            float deltaU3 = uvs[i][((j / 3) * 2) + 4] - uvs[i][(j / 3) * 2];
            float deltaV3 = uvs[i][((j / 3) * 2) + 5] - uvs[i][((j / 3) * 2) + 1];

            Vector3 one = Vector3(deltaX2, deltaU2, deltaV2);
            one = one.cross(Vector3(deltaX3, deltaU3, deltaV3));

            Vector3 two = Vector3(deltaY2, deltaU2, deltaV2);
            two = two.cross(Vector3(deltaY3, deltaU3, deltaV3));

            Vector3 three = Vector3(deltaZ2, deltaU2, deltaV2);
            three = three.cross(Vector3(deltaZ3, deltaU3, deltaV3));

            float Tx = 0.0f, Ty = 0.0f, Tz = 0.0f,
                  Bx = 0.0f, By = 0.0f, Bz = 0.0f;
            if(one.x != 0)
            {
                one.normalize();
                Tx = -one.y / one.x;
                Bx = -one.z / one.x;
            }
            if(two.x != 0)
            {
                two.normalize();
                Ty = -two.y / two.x;
                By = -two.z / two.x;
            }
            if(three.x != 0)
            {
                three.normalize();
                Tz = -three.y / three.x;
                Bz = -three.z / three.x;
            }

            Vector3 T = Vector3(Tx, Ty, Tz);
            Vector3 B = Vector3(Bx, By, Bz);
            T.normalize();
            B.normalize();

            // the same tangent and binormal for all three vertices of the triangle
            tangents[i][j]     = T.x;
            tangents[i][j + 1] = T.y;
            tangents[i][j + 2] = T.z;
            tangents[i][j + 3] = T.x;
            tangents[i][j + 4] = T.y;
            tangents[i][j + 5] = T.z;
            tangents[i][j + 6] = T.x;
            tangents[i][j + 7] = T.y;
            tangents[i][j + 8] = T.z;

            binormals[i][j]     = B.x;
            binormals[i][j + 1] = B.y;
            binormals[i][j + 2] = B.z;
            binormals[i][j + 3] = B.x;
            binormals[i][j + 4] = B.y;
            binormals[i][j + 5] = B.z;
            binormals[i][j + 6] = B.x;
            binormals[i][j + 7] = B.y;
            binormals[i][j + 8] = B.z;
        }
    }

    // build the vertex buffer objects
    g_uiTangents = new unsigned int[objCount];
    g_uiBinormals = new unsigned int[objCount];

    for(int i = 0 ; i < objCount ; i++)
    {
        glGenBuffersARB(1, &g_uiTangents[i]);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, g_uiTangents[i]);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertCount[i] * 3 * sizeof(float), tangents[i], GL_STATIC_DRAW_ARB);

        glGenBuffersARB(1, &g_uiBinormals[i]);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, g_uiBinormals[i]);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertCount[i] * 3 * sizeof(float), binormals[i], GL_STATIC_DRAW_ARB);

        delete[] tangents[i];
        delete[] binormals[i];
    }

    // the outer pointer arrays need to be freed as well
    delete[] tangents;
    delete[] binormals;
}
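
Incidentally, the same T and B are more commonly derived in closed form by solving the 2x2 UV system e = du*T + dv*B for the two triangle edges; a sketch using the delta variables from the code above (it assumes the UV determinant is non-zero):

float r = 1.0f / (deltaU2 * deltaV3 - deltaU3 * deltaV2); // UV determinant
Vector3 T = Vector3((deltaX2 * deltaV3 - deltaX3 * deltaV2) * r,
                    (deltaY2 * deltaV3 - deltaY3 * deltaV2) * r,
                    (deltaZ2 * deltaV3 - deltaZ3 * deltaV2) * r); // tangent, dP/du
Vector3 B = Vector3((deltaX3 * deltaU2 - deltaX2 * deltaU3) * r,
                    (deltaY3 * deltaU2 - deltaY2 * deltaU3) * r,
                    (deltaZ3 * deltaU2 - deltaZ2 * deltaU3) * r); // binormal, dP/dv
T.normalize();
B.normalize();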

LiquidFlare
07-26-2004, 03:20 PM
After some messing around with this, I believe PfhorSlayer is correct. The only problem now is that I don't know how to properly reverse gluLookAt. Does anyone know how to reverse it correctly? Thanks for all your help so far, guys.

mogumbo
07-26-2004, 06:36 PM
This still sounds overcomplicated to me. Why not define your light as a glLight? Then your light position will be delivered to your shader in view space instead of model space. That will allow you to transform the light position to model space with the inverse modelview matrix, and you won't have to mess with world space at all.
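
In a Cg vertex program, that suggestion is only a couple of lines (a sketch; position is the incoming model-space vertex, and the glstate names are Cg's built-in OpenGL state bindings):

// glstate.light[0].position arrives in eye space (it was set after gluLookAt);
// the inverse modelview takes it back to model space
float4 lightPosModel = mul(glstate.matrix.inverse.modelview[0],
                           glstate.light[0].position);
float3 lightVec = lightPosModel.xyz - position.xyz; // model-space vector to light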

LiquidFlare
07-26-2004, 09:35 PM
Tried that and it didn't change anything.

plasmonster
07-27-2004, 09:10 AM
Here's yet another way to do it.

First, in the vertex program:

...

// Vertex to light vector
float3 lightVec = lightPosWorld.xyz - vertexPosWorld.xyz;

// Per-vertex tangent basis.
// Note that if you have transformations on the model, then you need
// to transform this basis with the inverse transpose of the matrix
// that moved the model in the world.
// Just assume an identity world matrix here...
float3x3 worldToTangent = float3x3( T, B, N );

// Send the tangent-space lightVec to the fragment program
// as a texture coordinate
lightVecTan = float4( mul(worldToTangent, lightVec), 1 );

...

Then in the fragment program:

...

// Grab the bump normal
float4 bump = 2*tex2D(bumpMap, texCoord0.xy) - 1;

// Fast normalize of lightVec with a normalization cube map
// (normalize() will do too)
float4 lightVecTan = 2*texCUBE( normalCube, texCoord?.xyz ) - 1;

// Diffuse dot product in tangent space
float diffuseDot = dot( lightVecTan.xyz, bump.xyz );

...

Hope this helps.

LiquidFlare
07-27-2004, 11:36 AM
Let me break it down one last time. This is the order of operations before moving into the vertex and fragment shaders.

- gluLookAt
- set the light (I could either send the position to the shader by passing it as a parameter, or call glLightfv and look up the position in the shader with glstate; you choose)
- draw the scene using vertex buffer objects

1. What space does the POSITION semantic in Cg give you for the vertex?
2. Depending on what light method you chose above, what space does the light position come in?
3. What matrix do I have to "mul" with the light position (or light vector) to get the light vector into object space, so that I may "mul" that vector with the tangent matrix to get it into tangent space?

Thank you for all your help so far. I'm just confused, since different people and different sources on the internet seem to say completely different things. I haven't been able to get it right yet, and I still am not sure what is coming into the shader in what space. Thank you for your time.

plasmonster
07-27-2004, 12:30 PM
You can send the light position in world space by first pushing an identity onto the modelview stack before calling glLight(). By default, the GL transforms light positions by the current modelview matrix, just like points.
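
Concretely (a sketch, using the light position quoted earlier in the thread):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();                                     // no transform applied
GLfloat lightPos[4] = { 0.0f, 100.0f, -10.0f, 1.0f }; // interpreted as world space
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glPopMatrix();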


1. What space does the POSITION semantic in Cg give you for the vertex?

You calculate the position of the vertex in the program, using the modelview-projection combo (mvp). This gives you a position in homogeneous clip space.


2. Depending on what light method you chose above, what space does the light position come in?

That depends on how you specify the position, as mentioned above. You see, it doesn't matter which space you choose, as long as you're consistent about it, making sure all vectors are in the same space.

You can move the light into tangent-space, or move the tangent space into light-space, it's up to you. Just make sure everybody's in the same space.


3. What matrix do I have to "mul" with the light position (or light vector) to get the light vector into object space, so that I may "mul" that vector with the tangent matrix to get it into tangent space?

Again, all this depends on what space you want to work in. The TBN matrix takes a vector into tangent space. This is convenient for bump mapping, since that's where the bumps live.


I'm just confused, since different people and different sources on the internet seem to say completely different things.

Everyone here is saying essentially the same thing, just from a different point of view, and with a different space in mind.

If you don't know what space things are in, things can get sticky indeed.

LiquidFlare
07-27-2004, 01:19 PM
OK, I think I got it now. Thank you everyone for all your help.