bumpMapping matrices kung-fu.

I’m currently writing an engine for the iPhone 3GS, and diving into OpenGL ES 2.0 made me realize how weak my matrix kung-fu is.

I’ve followed the shader example here : http://www.ozone3d.net/tutorials/bump_mapping_p4.php


[Vertex_Shader]

varying vec3 lightVec;
varying vec3 eyeVec;
varying vec2 texCoord;
attribute vec3 vTangent;


void main(void)
{
	gl_Position = ftransform();
	texCoord = gl_MultiTexCoord0.xy;

	// Normal and tangent brought into eye space; the bitangent
	// completes the orthogonal frame.
	vec3 n = normalize(gl_NormalMatrix * gl_Normal);
	vec3 t = normalize(gl_NormalMatrix * vTangent);
	vec3 b = cross(n, t);

	// Vertex position in eye space, then the vector from it to the light.
	vec3 vVertex = vec3(gl_ModelViewMatrix * gl_Vertex);
	vec3 tmpVec = gl_LightSource[0].position.xyz - vVertex;

	// Project the light vector onto the eye-space t, b, n axes,
	// i.e. express it in tangent space.
	lightVec.x = dot(tmpVec, t);
	lightVec.y = dot(tmpVec, b);
	lightVec.z = dot(tmpVec, n);

	// Same for the vector from the vertex toward the eye (at the origin).
	tmpVec = -vVertex;
	eyeVec.x = dot(tmpVec, t);
	eyeVec.y = dot(tmpVec, b);
	eyeVec.z = dot(tmpVec, n);
}
	

I now have gorgeous bump mapping support on the iPhone 3GS, but:

1/ It looks like the shader builds a cameraspace->tangentspace matrix, instead of providing the light position in model space and performing bump mapping directly; this seems inefficient.
2/ Before switching to the much faster gpwiki method ( Game Programming Wiki - GPWiki ), I’m trying to understand exactly what is going on:


vec3 n = normalize(gl_NormalMatrix * gl_Normal);
vec3 t = normalize(gl_NormalMatrix * vTangent);
vec3 b = cross(n, t);

This is basically a matrix*matrix operation.

gl_NormalMatrix is a modelspace->cameraspace transform;
gl_Normal and vTangent are two components of the tangent-space matrix, which performs a modelspace->tangentspace transformation.

Thing I don’t get #1:

How come doing:

modelspace->cameraspace * modelspace->tangentspace

builds a matrix that allows a cameraspace->tangentspace transform?

Thing I don’t get #2:

When the light vector is rotated, it’s not done via a matrix*vector but the opposite: vector*matrix.

Any help?

1/ It looks like the shader builds a cameraspace->tangentspace matrix, instead of providing the light position in model space and performing bump mapping directly; this seems inefficient.

A lot of people prefer doing lighting in camera space (also referred to as eye space). One reason being that the view direction is simple, usually looking down the negative z-axis.

Getting vertices into eye space is done by applying the modelview matrix. However, the same is not true for normals. Normals are transformed into eye space by multiplying by gl_NormalMatrix, which is the inverse transpose of the upper-left 3x3 of the modelview matrix (when that 3x3 is a pure rotation, inverse == transpose, so the normal matrix equals the rotation itself). This is what is happening to n, t and b, where b is computed from the other two and is thus automatically in eye space. It is not a matrix*matrix operation.
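A quick numeric sketch of why the inverse transpose matters (plain Python, toy numbers of my own, not code from the tutorial): under a non-uniform scale, transforming the normal with the same matrix as the geometry breaks perpendicularity, while the inverse transpose preserves it.

```python
# Plane x + y = 0: tangent t = (1, -1, 0) lies in the surface,
# normal n = (1, 1, 0) is perpendicular to it.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

t = (1.0, -1.0, 0.0)
n = (1.0,  1.0, 0.0)
assert dot(t, n) == 0.0            # perpendicular before transforming

# Non-uniform scale S = diag(2, 1, 1) applied to the geometry.
t_scaled = (2.0 * t[0], t[1], t[2])

# Naive: transform the normal with S too -> no longer perpendicular.
n_naive = (2.0 * n[0], n[1], n[2])
assert dot(t_scaled, n_naive) == 3.0   # broken: 4 - 1 = 3

# Correct: use the inverse transpose, diag(1/2, 1, 1) for this matrix.
n_fixed = (0.5 * n[0], n[1], n[2])
assert dot(t_scaled, n_fixed) == 0.0   # perpendicularity preserved
```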

Thing I don’t get #1:

How come doing:

modelspace->cameraspace * modelspace->tangentspace

builds a matrix that allows a cameraspace->tangentspace transform?

Assuming that tangents are given in modelspace, the same space as the normals, what you describe above is not what happens. After multiplying by gl_NormalMatrix, t, b and n are the tangent-frame axes expressed in camera space; dotting a vector against them multiplies it by the transpose of the (orthonormal) tangentspace->cameraspace matrix, and for an orthonormal matrix the transpose is the inverse, i.e. cameraspace->tangentspace.
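To see that the three dot products really are a change of basis into tangent space, here is a small sketch (plain Python, a hand-picked orthonormal frame, not from the thread): project a camera-space vector onto t, b, n, then rebuild it from those components and recover the original.

```python
import math

# A hypothetical orthonormal tangent frame, expressed in camera space.
s = 1.0 / math.sqrt(2.0)
t = (1.0, 0.0, 0.0)
b = (0.0,  s,  s)
n = (0.0, -s,  s)

def dot(a, v):
    return sum(x * y for x, y in zip(a, v))

# An arbitrary camera-space vector (e.g. a light direction).
v = (3.0, 1.0, 2.0)

# The shader's three dot products: multiply v by the matrix whose ROWS are
# t, b, n.  That matrix is the transpose (= inverse, since the frame is
# orthonormal) of the tangent->camera matrix, so this maps camera->tangent.
v_tangent = (dot(t, v), dot(b, v), dot(n, v))

# Round trip: rebuilding v from its tangent-space components recovers it,
# confirming the dot products were a pure change of basis.
v_back = tuple(v_tangent[0] * t[i] + v_tangent[1] * b[i] + v_tangent[2] * n[i]
               for i in range(3))
assert all(abs(v_back[i] - v[i]) < 1e-9 for i in range(3))
```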

Thing I don’t get #2:

When the light vector is rotated, it’s not done via a matrix*vector but the opposite: vector*matrix.

Not sure what you mean here. The components of lightVec are the dot products of tmpVec with the three orthogonal vectors that form the coordinate system at each vertex, i.e. its projections onto those axes.

tmpVec here (= -vVertex) is the direction in eye space from the current vertex toward the camera. It should probably be normalized.

gl_Normal & vTangent are given in model space.

vec3 n = normalize(gl_NormalMatrix * gl_Normal);
vec3 t = normalize(gl_NormalMatrix * vTangent);
vec3 b = cross(n, t);

Now: (n, t, b) are the normal, tangent and bitangent of the vertex, expressed in camera space.

lightVec.x = dot(tmpVec, t);
lightVec.y = dot(tmpVec, b);
lightVec.z = dot(tmpVec, n);

Now: lightVec is the projection of tmpVec into tangent space. The same goes for eyeVec.

P.S. Both your questions are about the order of operations. The reason it works this way and not the other lies in the matrix representation OpenGL uses.

Ok got it, thanks.

Column- vs. row-major representation, and pre- vs. post-multiplication, as stated.

It’s equivalent to a matrix * matrix operation, but it’s really a vector transformation (w = 0) into the desired space, with the third row implied and calculated as the cross product.
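The pre- vs. post-multiplication point can be checked numerically (a tiny sketch with made-up values, not from the thread): a row vector times a matrix gives the same result as the transpose of that matrix times a column vector, which is why the per-component dot products "look like" vector*matrix.

```python
# Hypothetical 3x3 matrix and vector.
M = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
v = [1.0, 0.0, 2.0]

# Row vector times matrix: result[j] = sum_i v[i] * M[i][j]
row_times_M = [sum(v[i] * M[i][j] for i in range(3)) for j in range(3)]

# Transpose of M times a column vector: the same sums, reindexed.
Mt = [[M[i][j] for i in range(3)] for j in range(3)]
Mt_times_col = [sum(Mt[j][i] * v[i] for i in range(3)) for j in range(3)]

assert row_times_M == Mt_times_col   # both [15.0, 18.0, 21.0]
```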

You can be more efficient in some circumstances, but it often means more CPU matrix work and uniform setting. For example, for many applications you can have an object-space light and an object-space bump map and just use TBN without any transformation. This is in fact the best behaved for reconstruction, IMHO, but you need three-component signed bump maps, and per-object light information if the object is under transformation. Of course, you can use an identity model matrix for world geometry, and then your uniforms change a lot less.

Object-space bump maps are underused, IMHO.