normals and matrices

Hi,

In my application I manually apply some matrices to my vertex data so that, in many cases, I can avoid changing the modelview matrix at rendering time.

All this works fine for vertices. But the matrix is applied to both normals and vertices, and translations should not be applied to the normals (only scaling and rotations should be). How do I do that? Can I somehow apply the matrix without the effects of any translation using some math trick, or must I maintain two versions of the matrix: one with and one without translations applied?

There must be a way to solve this issue; after all, OpenGL does so internally.

Err, just multiply your vector by the upper left 3x3 of your 4x4 matrix - it just means adding another function to your matrix class (MultVecByMat3x3())…or have I missed your point?
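
For example, a minimal sketch of such a function, assuming the usual OpenGL column-major layout where element (row, col) sits at index row + 4*col:

void MultVecByMat3x3(const float *m, const float *v, float *out)
{
    // Multiply v by the upper-left 3x3 block of the 4x4 matrix m,
    // i.e. ignore the translation column and the bottom row.
    out[0] = m[0]*v[0] + m[4]*v[1] + m[8]*v[2];
    out[1] = m[1]*v[0] + m[5]*v[1] + m[9]*v[2];
    out[2] = m[2]*v[0] + m[6]*v[1] + m[10]*v[2];
}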

I don’t think you have missed the point.

But what you suggest will not work if, say, the bottom right cell is not 1. That is, it will only work if the matrix is composed purely of rotations, uniform scales and translations.
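
For example, take the non-uniform scale M = diag(2, 1, 1) and the plane x + y = 0, whose normal is N = (1, 1, 0). Points (t, -t, 0) on the plane map to (2t, -t, 0), so the transformed surface runs along the direction (2, -1, 0) and the correct normal must be proportional to (1, 2, 0). But multiplying N by the upper-left 3x3 gives (2, 1, 0), which is not perpendicular to the transformed surface.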

Hmm, while examining your suggestion I came across a passage in Foley et al.'s "Computer Graphics: Principles and Practice" saying that to do what I want I need to calculate:

N’ = t(inv(M)) * N

where inv() inverts a matrix and t() transposes it. This is the transform to use for plane equations (i.e. normals).
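
To see why this works: a normal N and any point V on its plane satisfy t(N) * V = 0. If points transform as V' = M * V, then choosing N' = t(inv(M)) * N preserves that relation, since

t(N') * V' = t(N) * inv(M) * M * V = t(N) * V = 0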

So now I just need a routine to invert a 4x4 matrix.

sigh…

But thanks for the suggestion. It led me in the right direction.

Inverting a rotation matrix is just a transpose (a rotation matrix is orthogonal, so t(R) * R = I).

Yes, I know. But I need to invert a composed general matrix, not just the component matrices such as rotation, scale or translate. For that the correct way is to use Gauss-Jordan elimination.

Here is a home page showing it (this one is in Java):
http://www.nauticom.net/www/jdtaft/JavaMatrix.htm

Cheers,
Jacob

So, in case somebody else also has this problem, they can read more about it in the Red Book (3rd edition, p. 671).

To invert a matrix that can be loaded with glLoadMatrix do this:

// Returns true on success or false if the matrix is singular.
// Inverts the 4x4 matrix m (column-major, as loaded with glLoadMatrix)
// by Gauss-Jordan reduction. D is a work/output array of 32 floats:
// the routine builds the augmented matrix [m | I] in it, so the input
// matrix ends up in index 0-15 and the inverse in index 16-31.
bool invert(const float *m, float *D)
{
  const int n  = 4;
  const int n2 = 2 * n;
  const int nn = n * n;

  float alpha;
  float beta;
  int i;
  int j;
  int k;

  // Copy the input matrix into the left half of the augmented matrix.
  for (i = 0; i < nn; ++i)
    D[i] = m[i];

  // Init the right half to the identity matrix.
  for (i = 0; i < n; i++)
  {
    for (j = 0; j < n; j++)
    {
      D[i + n*j + nn] = 0.0f;
    }
    D[i + n*i + nn] = 1.0f;
  }

  // Perform the reductions.
  for (i = 0; i < n; i++) // for each row
  {
    alpha = D[i + n*i]; // get the diagonal value

    // Make sure it is not 0. If it is, the matrix is singular (no row
    // pivoting is done here) and we will not invert it. A singular
    // matrix is one that has no inverse.
    if (alpha == 0.0f)
      return false;

    // Divide this row through by the diagonal value so alpha becomes 1.
    for (j = 0; j < n2; j++)
    {
      D[i + n*j] /= alpha;
    }

    // Subtract a multiple of this row from every other row so the rest
    // of column i becomes 0.
    for (k = 0; k < n; k++)
    {
      if (k != i)
      {
        beta = D[k + n*i];
        for (j = i; j < n2; j++)
        {
          D[k + n*j] -= beta * D[i + n*j];
        }
      }
    }
  }

  return true;
}

and to transpose a matrix do this:

// Transposes the 4x4 matrix m in place by swapping each element below
// the diagonal with its mirror above it.
void transpose(float *m)
{
  float temp;
  temp = m[1];  m[1]  = m[4];  m[4]  = temp;
  temp = m[2];  m[2]  = m[8];  m[8]  = temp;
  temp = m[3];  m[3]  = m[12]; m[12] = temp;
  temp = m[6];  m[6]  = m[9];  m[9]  = temp;
  temp = m[7];  m[7]  = m[13]; m[13] = temp;
  temp = m[11]; m[11] = m[14]; m[14] = temp;
}

When transforming the normal, remember that it is only the direction of the final plane that we are interested in (A, B, C), not the offset D of the plane equation. So when multiplying with t(inv(M)), make sure to discard the resulting w component without dividing the other components by it first.
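
To tie the pieces together, here is a sketch of how the two routines might be combined; transformNormal() is just an illustrative name, not something from OpenGL:

// Computes n_out = t(inv(m)) * (n_in, 0) for a 4x4 column-major matrix m.
// Returns false if m could not be inverted.
bool transformNormal(const float *m, const float *n_in, float *n_out)
{
    float D[32]; // work array: input in 0-15, inverse in 16-31
    if (!invert(m, D))
        return false;

    float N[16];
    for (int i = 0; i < 16; ++i)
        N[i] = D[i + 16]; // copy out inv(m)
    transpose(N);         // N now holds t(inv(m))

    // Multiply (nx, ny, nz, 0) by N and keep only x, y, z; with w = 0
    // the translation column drops out and the resulting w is discarded.
    n_out[0] = N[0]*n_in[0] + N[4]*n_in[1] + N[8]*n_in[2];
    n_out[1] = N[1]*n_in[0] + N[5]*n_in[1] + N[9]*n_in[2];
    n_out[2] = N[2]*n_in[0] + N[6]*n_in[1] + N[10]*n_in[2];

    // (Renormalize n_out afterwards if the matrix contains any scaling.)
    return true;
}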

I hope this helps other people in the same situation.

Jacob

I should maybe mention that the invert() code above expects D to be an array of size 32. The input matrix is copied into index 0-15 and the result ends up in index 16-31.