Transformation order

Hey guys,

I’m implementing my own transformation functions to replace the provided ones, and I’m a little confused by the order in which transformations must take place.

Here’s how I’m doing my transformations:

translate to where I want the object
scale it
rotate it
translate by -x, -y and -z, where (x, y, z) is the geometric centre of my object (this is to make sure my object is rotated about its centre).

I’m pretty sure I’m doing this in the right order, but what I’m getting is an object that “orbits” the centre of the screen, instead of just rotating. Does anyone have any idea where I should start looking to find my problem?

Which side are you multiplying the matrices on?

Do you mean row-major or column-major? If so, I’m doing row-major.

No, I mean which side are you multiplying the matrices on?

For example, let’s call your “translate to where I want the object” matrix A, and “scale it” matrix B. Do you do:

C = A * B

or:

C = B * A

Also, you say that your matrices are row-major. Does that mean that you consider the rows of the matrices to be the basis vectors and translations of the space, or is row-major simply referring to how you’re storing the values in the float-array? The best way to answer this is to say what indices in the floating-point array you put your translations into. Is it indices 13, 14, and 15, or 4, 8, and 12?
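To make the question concrete, here’s a minimal sketch (hypothetical helper names, not from any real library) of where the translation lands under each convention. The indices in the code are C-style, counting from 0; the 13/14/15 and 4/8/12 above count elements from 1, so they correspond to m[12..14] versus m[3]/m[7]/m[11] below.

```cpp
#include <cassert>
#include <cstring>

// Reset a 16-float array to the 4x4 identity matrix.
void identity(float m[16]) {
    std::memset(m, 0, 16 * sizeof(float));
    m[0] = m[5] = m[10] = m[15] = 1.0f;
}

// Row-vector style: basis vectors in the rows, translation in the bottom
// row; points transform as v' = v * M.
void translateRowBasis(float m[16], float tx, float ty, float tz) {
    identity(m);
    m[12] = tx; m[13] = ty; m[14] = tz;
}

// Column-vector style: basis vectors in the columns, translation in the
// rightmost column; points transform as v' = M * v. This matrix is the
// transpose of the one above.
void translateColumnBasis(float m[16], float tx, float ty, float tz) {
    identity(m);
    m[3] = tx; m[7] = ty; m[11] = tz;
}
```

Either way it’s the same 16 floats; only the slots holding the translation move.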

Oh! Here’s the code, I think that’s easiest:


model *= Mat4::translate(curX, curY, z);
model *= Mat4::scale(scale, scale, scale);
model *= Mat4::rotateX(angleX);
model *= Mat4::rotateY(angleY);
model *= Mat4::rotateZ(angleZ);
model *= Mat4::translate(-x, -y, -z);

Where I’ve overloaded the *= and * operators like so:


Mat4& Mat4::operator*=(const Mat4& m)
{
	*this = *this * m;
	return *this;
}

Mat4 Mat4::operator*(const Mat4& m) const
{
	return Mat4::mul(*this, m);
}

Mat4 Mat4::mul(const Mat4& m, const Mat4& n)
{
	Mat4 r(0.0);
	for (int row=0; row < 4; row++)
		for (int column=0; column < 4; column++)
			for (int i=0; i < 4; i++)
				r(row, column) += m(row, i) * n(i, column);
	return r;
}

So I’m doing C = A * B, I’m pretty sure.

As for your second question, indices 13, 14 and 15 are where my translations are.

Then you can do one of two things.

1: Reverse all of your matrix operations. Rather than A * B, do B * A.

2: Use the mathematical standard conventions. That is, put the basis vectors in the columns of your matrices rather than rows the way they are now. So instead of using 13, 14, and 15 for your translations, you use 4, 8, and 12. Effectively, compute the transpose of the matrices you’re currently generating. You don’t have to change your matrix multiplication routines or anything else.

I strongly suggest the latter. That way, when you look online at matrix math equations, you won’t have to reverse the order of transforms to make your code work right.
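The reason the two options are interchangeable is the transpose identity (A * B)^T = B^T * A^T: transposing your matrices and reversing the multiply order produce the same end result. A quick sketch to convince yourself (a toy M4 type, hypothetical, not your Mat4):

```cpp
#include <cassert>

// Minimal 4x4 matrix with the same triple-loop multiply as in the thread.
struct M4 {
    float v[4][4] = {};
    float& operator()(int r, int c) { return v[r][c]; }
    float operator()(int r, int c) const { return v[r][c]; }
};

M4 mul(const M4& a, const M4& b) {
    M4 r;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r(i, j) += a(i, k) * b(k, j);
    return r;
}

// Swap rows and columns: the bridge between the two conventions.
M4 transpose(const M4& m) {
    M4 r;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r(i, j) = m(j, i);
    return r;
}
```

Checking that transpose(mul(a, b)) equals mul(transpose(b), transpose(a)) for arbitrary matrices shows why option 1 and option 2 land you in the same place.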

Hmm, I think that due to time constraints, in this instance I’m going to have to go with reversing all of my matrix operations.

Definitely going to go with option two from now on though. Like they always say, you (hopefully) never make the same mistakes twice!

Just to be sure, when you’re saying that I should reverse all my matrix operations, are you saying that in my code above, I should be doing “model = (translate) * model” instead of “model = model * (translate)”?

There is no fixed order; it depends on your circumstances. There are application conventions, but OpenGL does not set these.

An aircraft might be positioned by applying roll, pitch, and heading, then translating to x, y, z. The order in which these occur depends on how your matrix is built by the application, both in OpenGL and in any other matrix library.

On the other hand, if you consider a naive version of an orbiting body, you would translate by the orbital distance and then rotate to the correct position (assuming a circular orbit).

The exact order of operations depends on whether you are pre- or post-multiplying your matrices in your code, and the implementation of that math depends on whether your matrices are transposed.

Then there is also the model vs. view inverse issue…

You need to either change the order of operations, transpose for the multiplication, or change your math to premultiply when concatenating the transforms (switch the matrix order in the individual multiply calls).

All are valid approaches, but you are talking about YOUR application conventions, not OpenGL. This is a somewhat arbitrary convention set by your code and understood in your brain.

One key thing: when transforming top-down (root to leaf), you generally have a premultiplication implementation, which allows you to build hierarchies and traverse the tree from root to leaf, applying transforms as you go. So you actually apply translate x, y, z, then rotate heading, rotate pitch, rotate roll, then scale. In order for you to apply transforms in this order you must implement premultiplication of transformations in your matrix library (and again, this is affected by whether your matrices are transposed or not).

Switching to matrix premultiplication is a matter of switching the order of the new incremental matrix and the accumulated matrix in the multiply (concatenation) operation.
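As a sketch of that switch (a toy 2x2 type so the ordering effect stays visible; names are hypothetical, not from any particular library), the only difference between the two concatenation styles is which side the incremental matrix goes on:

```cpp
#include <cassert>

// Toy 2x2 matrix, row-major: [a b; c d].
struct Mat2 {
    float a, b, c, d;
};

Mat2 mul(const Mat2& m, const Mat2& n) {
    return { m.a * n.a + m.b * n.c, m.a * n.b + m.b * n.d,
             m.c * n.a + m.d * n.c, m.c * n.b + m.d * n.d };
}

// Post-multiplication: the new transform is applied on the right.
void postConcat(Mat2& acc, const Mat2& incremental) {
    acc = mul(acc, incremental);
}

// Pre-multiplication: identical call sites, but the incremental matrix
// now goes on the left of the accumulated one.
void preConcat(Mat2& acc, const Mat2& incremental) {
    acc = mul(incremental, acc);
}
```

For non-commuting transforms (which rotations and translations generally are), the two helpers accumulate different results, which is exactly the orbit-versus-spin difference seen earlier in the thread.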

Once again these are 3D application conventions.

The only valid reason for preferring column-major order so far as I’m concerned is that 99% of OpenGL code uses it; your own code quite likely also used it (unknown to you) before you made this transition. By preferring column-major you’re going to find it easier to integrate code from samples and tutorials, as well as easier to port your own code to your new functions (especially true during the co-existence period). This is a pretty strong reason.

The only valid reason for preferring row-major order is that it maps directly to a 2D C/C++ array. If you want to access a matrix as a 2D array, row-major is inherently more intuitive.

Other than that it doesn’t matter which you use so long as you’re consistent about it. Neither is more or less “correct” than the other; they’re just conventions. One may be a long-lasting convention in one particular realm, but it remains a mere convention, not a rule. So pick your order and use it consistently throughout your code, know the rules that apply to its use, and know how and when to recognise that you’re being faced with code or a formula that assumes the other order.

The only valid reason for preferring row-major order is that it maps directly to a 2D C/C++ array.

So does column-major. They’re both just arrays; the “major” distinction is all about how you interpret the data in that array.

Now that’s just being nitpicky. :wink:

We all know that when you use float blah[4][4] to represent a matrix then each ‘row’ in the array maps exactly to a row in a row-major matrix. Column-major breaks this mapping and can be unintuitive if the programmer is representing matrices this way. Did I really have to spell it out exactly?

We all know that when you use float blah[4][4] to represent a matrix then each ‘row’ in the array maps exactly to a row in a row-major matrix. Column-major breaks this mapping and can be unintuitive if the programmer is representing matrices this way.

What is a “row” as far as a 2D array is concerned? Is “blah[0]” supposed to be the first row of the matrix? Because, as far as standard mathematical conventions are concerned, the first coordinate is the column, not the row. And C/C++ says nothing about what the first index is intended to represent; that’s purely a user convention.

If you use a float[4][4] according to standard mathematical conventions (ie: first coordinate is the column), then you will have a column-major matrix. If you use a row-first convention, you will have a row-major matrix.

Neither is more or less intuitive to a user, unless they come from a particular convention. Anyone with a strong mathematical background will assume the first coordinate is the column. Those who come from more programmatic backgrounds tend to automatically assume that the first coordinate is the row. But neither is right, though at least the mathematicians were first.

So no; neither maps better to a C/C++ multidimensional array.

Now that’s just false information. Check out your K&R - section 5.7 is quite explicit about C 2D arrays being row-major in multiple places. So C 2D arrays are most definitely row-major, and the column-major convention most definitely does not map to them. Can we stop nitpicking now?

Now that’s just false information. Check out your K&R - section 5.7 is quite explicit about C 2D arrays being row-major in multiple places.

It doesn’t matter what K&R says; the C language specification states what the layout of the data is and how it gets accessed, but it says nothing about the meaning. The first coordinate’s meaning is entirely up to the user’s conventions.

And that’s all the row-major/column-major distinction is about: the convention of what is a row and what is a column in a 16-float array.
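That closing point can be shown in a few lines: the same 16-float array read through two hypothetical accessors, where each convention is just the transpose of the other. Nothing about the data changes; only the meaning of the two indices does.

```cpp
#include <cassert>

// Read element (row, col) assuming rows are contiguous in memory.
float rowMajorAt(const float m[16], int row, int col) {
    return m[row * 4 + col];
}

// Read element (row, col) assuming columns are contiguous in memory.
float colMajorAt(const float m[16], int row, int col) {
    return m[col * 4 + row];
}
```

Reading one convention through the other’s accessor yields the transpose, which is why switching conventions amounts to transposing your matrices.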