Once and for all: correct lighting with tangent space and moving camera

I've tried hard to implement correct lighting/bump mapping while being able to move around in the world, but I'm lost.

The app provides:
-light position
-tangent
-bitangent
-normal
-camera position (if needed)
-samplers (base & bump)

The vertex program should compute the following values and pass them to the fragment program:
-light direction
-viewing direction

The application should not:
-compute any inverse matrices
-do any matrix work at all, if possible

I tried an approach from the Orange Book:

  
 n =  normalize(gl_NormalMatrix * gl_Normal);
 t =  normalize(gl_NormalMatrix * tangent); 
 b =  normalize(gl_NormalMatrix * bitangent);
 vec3 lVec = light_pos;
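 // light_pos is used directly here (the eye-space vertex position is not
 // subtracted), then projected onto the eye-space basis below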
 light_dir.x = dot(t, lVec);
 light_dir.y = dot(b, lVec);
 light_dir.z = dot(n, lVec);

 vec3 vVec = vec3(gl_ModelViewMatrix * gl_Vertex);
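 // vVec is the eye-space vertex position (it points from the eye towards the vertex)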
 view_dir.x = dot(t, vVec);
 view_dir.y = dot(b, vVec);
 view_dir.z = dot(n, vVec);

Result:
A sphere that is lit almost everywhere from all sides; only the edges go black at grazing view angles.

Then I tried at least ten other variants, with mat3 multiplications and computations based on gl_Vertex, gl_ModelViewMatrix, or gl_ModelViewProjectionMatrix; no luck.

Then I found this in a demo from Nvidia:

attribute vec4 position;
attribute mat3 tangentBasis;
attribute vec2 texcoord;

uniform vec3 light;
uniform vec3 halfAngle;
uniform mat4 modelViewI;

varying vec2 uv;
varying vec3 lightVec;
varying vec3 halfVec;
varying vec3 eyeVec;

void main()
{
    // output vertex position
    gl_Position = gl_ModelViewProjectionMatrix * position;

    // output texture coordinates for decal and normal maps
    uv = texcoord;

    // transform light and half angle vectors by tangent basis
    lightVec = light * tangentBasis;
    halfVec = halfAngle * tangentBasis;
 
    eyeVec = modelViewI[3].xyz - position.xyz;
    eyeVec = eyeVec * tangentBasis;
}
  

But, as I said above, I don't want any matrix work in my app.

Then I actually wanted to post this code, which is modeled on Humus' portal-demo shader, and to say that it doesn't work for me either:

 
n =  normalize(gl_NormalMatrix * gl_Normal);
t =  normalize(gl_NormalMatrix * tangent); 
b =  normalize(gl_NormalMatrix * bitangent);
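// note: the basis was rotated into eye space above, while the vectors below
// are built from object-space positions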

vec3 lVec = light_pos - gl_Vertex.xyz;
light_dir.x = dot(t, lVec);
light_dir.y = dot(b, lVec);
light_dir.z = dot(n, lVec);

vec3 vVec = cam_pos - gl_Vertex.xyz;
view_dir.x = dot(t, vVec);
view_dir.y = dot(b, vVec);
view_dir.z = dot(n, vVec);
 

I saw a lit sphere with a black spot in the middle that moved around with me.

But while writing this post I noticed that Humus does not do anything with gl_NormalMatrix, so I deleted it, tested again, and voilà, it works!
Nevertheless I decided to post this anyway, because of two things:

  1. Is there a mistake in the Orange Book's code?
  2. Actually, I need a directional light. I changed the code above so that 'gl_Vertex.xyz' is no longer subtracted from 'light_pos' (roughly as sketched below this list).
    I now get a correct half-bright sphere, but the light direction seems somehow off, because with a light_pos of (0.0, 0.0, 1.0) the lit side sits more towards the top of the sphere.
    So either this way isn't entirely correct either, or there is something else in my code that I'm not thinking of at the moment.
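
Here is roughly what I mean for the directional case; just a sketch, assuming the light direction is already given in the same space as the tangent basis (object space here):

attribute vec3 tangent;
attribute vec3 bitangent;

// assumption: a direction pointing towards the light, in object space like n/t/b
uniform vec3 light_dir_obj;

varying vec3 light_dir;

void main() {
 gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
 gl_TexCoord[0] = gl_MultiTexCoord0;

 vec3 n = gl_Normal;
 vec3 t = tangent;
 vec3 b = bitangent;

 // directional light: nothing to subtract, just rotate the constant
 // direction into tangent space
 vec3 lVec = light_dir_obj;
 light_dir.x = dot(t, lVec);
 light_dir.y = dot(b, lVec);
 light_dir.z = dot(n, lVec);
}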

Thank you for helping (and reading this long text of course :wink: )

It seems that even with this code:

 
attribute vec3 tangent;
attribute vec3 bitangent;

uniform vec3 cam_pos;
uniform vec3 sun_pos;
//uniform
vec3 moon_pos = vec3(0.0, 0.0, 0.0);	

varying vec3 sunlight_dir;
varying vec3 moonlight_dir;
varying vec3 view_dir;

void main() {
 gl_Position  = gl_ModelViewProjectionMatrix * gl_Vertex;
 gl_TexCoord[0] = gl_MultiTexCoord0;
 vec3 n = gl_Normal;
 vec3 t = tangent; 
 vec3 b = bitangent;
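 // n, t and b are left in object space, so cam_pos, sun_pos and moon_pos
 // below have to be object-space positions for these subtractions to make sense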
 vec3 slVec = sun_pos - gl_Vertex.xyz;
 sunlight_dir.x  = dot(t, slVec);
 sunlight_dir.y  = dot(b, slVec);
 sunlight_dir.z  = dot(n, slVec);

 vec3 mlVec = moon_pos - gl_Vertex.xyz;
 moonlight_dir.x = dot(t, mlVec);
 moonlight_dir.y = dot(b, mlVec);
 moonlight_dir.z = dot(n, mlVec);

 vec3 vVec  = cam_pos - gl_Vertex.xyz;
 view_dir.x      = dot(t,  vVec);
 view_dir.y      = dot(b,  vVec);
 view_dir.z      = dot(n,  vVec);
}
 

view_dir is still incorrect and specular lighting behaves strangely :frowning:

You also need the local modelview of the model (assuming you light in eye space).

Typically, what will happen is this (a rough shader sketch follows the list):

  1. you sample the normal map
  2. you transform the normal into model space by using the normal, tangent and bitangent
  3. you transform the model space normal into eye space using the modelview transform (or a pre-calculated inverse transpose thereof, if necessary) and then normalize (as N-T-B will likely be skewed).
  4. you calculate the light vector, and the eye vector, and normalize both of them (!) – these vectors could be interpolated, and just normalized in the fragment program, if you want
  5. you do lighting
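
A rough fragment-shader sketch of those five steps (steps 2 and 3 are folded together by passing an eye-space tangent basis from the vertex shader; all names here are placeholders, not anyone's actual code):

uniform sampler2D baseMap;
uniform sampler2D bumpMap;
uniform vec3 lightPosEye;       // light position, already in eye space

varying vec2 uv;
varying vec3 nEye, tEye, bEye;  // eye-space basis from the vertex shader
varying vec3 posEye;            // eye-space vertex position

void main()
{
    // 1. sample the normal map and expand from [0,1] to [-1,1]
    vec3 nTan = texture2D(bumpMap, uv).xyz * 2.0 - 1.0;

    // 2./3. take it through the (interpolated, hence re-normalized) basis into eye space
    vec3 N = normalize(nTan.x * normalize(tEye) +
                       nTan.y * normalize(bEye) +
                       nTan.z * normalize(nEye));

    // 4. light and eye vectors, both normalized
    vec3 L = normalize(lightPosEye - posEye);
    vec3 V = normalize(-posEye);    // the eye sits at the origin in eye space

    // 5. lighting (simple diffuse + specular)
    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(reflect(-L, N), V), 0.0), 32.0);
    gl_FragColor = vec4(texture2D(baseMap, uv).rgb * diff + vec3(spec), 1.0);
}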

Variations include multi-sampling a bump map to construct a normal in model space (given N, T and B), or transforming the light from model space into tangent space before lighting. The latter has the problem that you have to specify light position in model space for each new modelview, and it’s harder to do reflection mapping in tangent space to boot. You can also do a hybrid where you take the normal to model space, but use a supplied model-space light position rather than going all the way to eye space.

Note that in your initial code, transforming viewDir with the modelview matrix will translate as well as rotate it – is that what you want? Probably not, if it’s an infinite viewer.

I don't really understand your post, sorry, I'm a beginner…
Which normal do you mean in "2) you transform the normal into model space by using the normal, tangent and bitangent"? The normal read in the fragment shader? But that one IS in tangent space already, isn't it?
Or is tangent space != model space?
And by "using the normal, tangent and bitangent" I thought I transform into tangent space, so I don't understand.
But thanks anyway.
Hm, I would really like a simple correction of the current code I posted last.
I tried multiplying the light and camera positions with gl_ModelViewMatrix before subtracting gl_Vertex, but no luck again…

It would be nice if Humus could tell me what trick he uses when passing the camera and light positions, or what other mistake I made…

You need to ensure that you are doing the lighting calculation with vectors in the same space.

Object or Model space is the coordinate system that your mesh data is typically supplied in. So the figures you supply with glVertex, glNormal etc.

Tangent or Texture space is the coordinate system of your texture maps, and consequently of your bump map (if it is one of those chalky blue types).

The normal, tangent and bitangent are in effect combined to form a matrix that transforms object space vectors/points into tangent space vectors/points.

Therefore you need to supply all vectors in object space for multiplication by this matrix.

So your eye position and your light position must be either supplied in or converted to object space.

How you do this will depend upon which space they are specified in by your app. I think that unless you are already specifying these in object space (unlikely) you are going to have to either supply your shader with conversion matrices or supply the shader with object space positions (converted by the aforementioned conversion matrices in your app).

You can of course do these calculations in any space you wish, but almost always you will have to convert some of your data from one space to another.

I’m afraid I don’t use GLSL so am not best placed to deal with the specifics.
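
That said, I believe GLSL exposes the inverse of the modelview matrix as a built-in uniform, so purely as an untested sketch of the idea (the names, and the assumption that light_pos arrives in eye space, are just that: assumptions):

attribute vec3 tangent;
attribute vec3 bitangent;

uniform vec3 light_pos;   // assumed here to be supplied in eye space

varying vec3 light_dir;
varying vec3 view_dir;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;

    // object-space eye position: the eye sits at the origin of eye space,
    // so it is the translation column of the inverse modelview matrix
    vec3 cam_obj = gl_ModelViewMatrixInverse[3].xyz;

    // bring the eye-space light position into object space the same way
    vec3 light_obj = (gl_ModelViewMatrixInverse * vec4(light_pos, 1.0)).xyz;

    // everything below is now consistently in object space
    vec3 n = gl_Normal;
    vec3 t = tangent;
    vec3 b = bitangent;

    vec3 lVec = light_obj - gl_Vertex.xyz;
    light_dir = vec3(dot(t, lVec), dot(b, lVec), dot(n, lVec));

    vec3 vVec = cam_obj - gl_Vertex.xyz;
    view_dir = vec3(dot(t, vVec), dot(b, vVec), dot(n, vVec));
}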

Er does that make it any clearer? I do strongly suggest getting your head around different coordinate spaces, this sort of thing becomes much more obvious once you do.

Matt

Yes, it DID help me a lot!
Thanks!
But I still have a few questions:
If I multiplied the camera and light positions, which as you said are probably in world space rather than object space (?), with the modelview matrix of the current object, I would have transformed them into eye space, which is not correct?
Do I simply have to add the model coordinates to the light and camera positions, which would be equivalent to multiplying by the model matrix, and then they are in object space?

But why doesn't it work if I put my object (the only one in the scene) at (0, 0, 0)? Then world space and object space should be the same, shouldn't they?

You see, I'm still a bit confused :wink:
Hope you or someone else answers soon.

>>But why doesn't it work if I put my object (the only one in the scene) at (0, 0, 0)? Then world space and object space should be the same, shouldn't they?<<

Space doesn't just mean position, it's more about orientation, so sticking both at (0, 0, 0) won't work.
There is a lot of info on the web that describes this much better than I can; e.g., check out the bump-mapping PDFs at NVIDIA.

Yes, typically lights and cameras would be stored in world space. When your object is at (0, 0, 0) with no rotation (the identity matrix), object space and world space would ordinarily be the same.

I think that your extraction of the eye position would work in this circumstance, but as soon as you move the object it won’t work. I will explain how I do it in my software.

Lights, cameras, and meshes in the scene have world space transforms (rotation, translation).

For any one mesh its transform is the World transform.

The transform on the camera is the View transform. The positional part of that matrix gives the eye position in world space. To transform it to object space I multiply by the inverse of the World transform. This is supplied to the shader as the object-space eye position. The light is dealt with similarly.

Matt
