Does this function appear to be row-major?

I am currently using a matrix library which defines basic operations like this:

struct Matrix4 {
    vec4 x;
    vec4 y;
    vec4 z;
    vec4 w;
    // ... member functions
};

A vec4 is simply a struct with its data as:

T x;
T y;
T z;
T w;

(T is typically a float)

The key question is probably answered by this:

Matrix4 operator * (const Matrix4& b) const
{
    Matrix4 m;
    m.x.x = x.x * b.x.x + x.y * b.y.x + x.z * b.z.x + x.w * b.w.x;
    m.x.y = x.x * b.x.y + x.y * b.y.y + x.z * b.z.y + x.w * b.w.y;
    m.x.z = x.x * b.x.z + x.y * b.y.z + x.z * b.z.z + x.w * b.w.z;
    m.x.w = x.x * b.x.w + x.y * b.y.w + x.z * b.z.w + x.w * b.w.w;
    m.y.x = y.x * b.x.x + y.y * b.y.x + y.z * b.z.x + y.w * b.w.x;
    m.y.y = y.x * b.x.y + y.y * b.y.y + y.z * b.z.y + y.w * b.w.y;
    m.y.z = y.x * b.x.z + y.y * b.y.z + y.z * b.z.z + y.w * b.w.z;
    m.y.w = y.x * b.x.w + y.y * b.y.w + y.z * b.z.w + y.w * b.w.w;
    m.z.x = z.x * b.x.x + z.y * b.y.x + z.z * b.z.x + z.w * b.w.x;
    m.z.y = z.x * b.x.y + z.y * b.y.y + z.z * b.z.y + z.w * b.w.y;
    m.z.z = z.x * b.x.z + z.y * b.y.z + z.z * b.z.z + z.w * b.w.z;
    m.z.w = z.x * b.x.w + z.y * b.y.w + z.z * b.z.w + z.w * b.w.w;
    m.w.x = w.x * b.x.x + w.y * b.y.x + w.z * b.z.x + w.w * b.w.x;
    m.w.y = w.x * b.x.y + w.y * b.y.y + w.z * b.z.y + w.w * b.w.y;
    m.w.z = w.x * b.x.z + w.y * b.y.z + w.z * b.z.z + w.w * b.w.z;
    m.w.w = w.x * b.x.w + w.y * b.y.w + w.z * b.z.w + w.w * b.w.w;
    return m;
}

In a column-major system, the operator* should assume that the top row of the matrix on the left is not in contiguous memory, since it stores the first element of each column vector. This function is clearly treating the first row of the left-hand matrix (the *this matrix) as contiguous, thus implying this is a row-major matrix library, right?

After looking at this stuff for so long, I gather that the only way to really determine if a library is column-major or row-major is to look how it handles multiplication with respect to how it lays out its data.

While probably not necessary to answer my question: the translation matrix the library provides for a 3D translation places the x, y, z offsets in the final vec4. But that data could live there under either convention, so it's probably not decisive evidence.

After looking at this stuff for so long, I gather that the only way to really determine if a library is column-major or row-major is to look how it handles multiplication with respect to how it lays out its data.

Um, no. Matrix multiplication will look the same no matter the major ordering of the matrix.

If you look at the operator overload above, it wouldn’t make sense in a column-major layout; or at least it would be deceiving, right? Because if it were a column-major operation, the “left-hand side” that calls the * would actually be treated as if it were on the right-hand side.

After thinking about this more, I think this statement is misleading. You are correct that if you multiply matrices A * B, then you are creating a new matrix based on dot products of A’s rows with B’s columns, and this will always be true regardless of whether A or B are column-major/canonical or row-major/transposed.

But the actual implementation of matrix multiplication may need to depend on the knowledge of whether these matrices arrange their rows or columns as contiguous.

For example, if A is on the left side of A*B and is laid out in memory as a 2x2 matrix with a linear array of A={1,2,3,4}, and it is row-major, then we are taking a dot product using 1,2 as the row. But if it is column-major, we are taking a dot product using 1,3 as the row.

This is why I believe that if you look at how a matrix library implements its multiplication with regards to how it lays out its memory, it can be clear if the library expects row-major or column-major. In C++ if I write A*B using the operator function I posted earlier, then the multiplication is using data from A that is contiguous, and since A is on the left side, only its rows are being used in the dot products, therefore these rows are contiguous and hence row-major.

If instead you think of A as column-major, where its contiguous indices are columns and not rows, then A*B using the above function is not implemented correctly, since it is using columns from the left-hand side (A) in the dot products, which is not how matrix multiplication is done.

If you look at the operator overload above, it wouldn’t make sense in a column-major layout; or at least it would be deceiving, right? Because if it were a column-major operation, the “left-hand side” that calls the * would actually be treated as if it were on the right-hand side.

Yes, it would. However, let’s say you have two matrices A and B, and you want to multiply them such that the transform of matrix B comes before the transform for matrix A. If these are column-major canonical matrices, you would use (AB). If these were row-major canonical matrices, you would reverse this, because row-major canonical matrices put the first transform on the left, not the right. So the row-major canonical equivalent that would give an equally meaningful result is (BA).

The row-major vs. column-major distinction is all about how you generate those matrices. How you create the initial matrices and how you concatenate them together. The math of the concatenation operation is the same either way.

For example, if A is on the left side of A*B and is laid out in memory as a 2x2 matrix with a linear array of A={1,2,3,4}, and it is row-major, then we are taking a dot product using 1,2 as the row. But if it is column-major, we are taking a dot product using 1,3 as the row.

OK, let’s look at matrix multiplication by a column-major canonical A and B:


A:
[ 1 3 ]
[ 2 4 ]
B:
[ 1 3 ]
[ 2 4 ]

Laid out in array-form:

C = A * B

C1 = (A1 * B1)+(A3 * B2)
C2 = (A1 * B3)+(A3 * B4)
C3 = (A2 * B1)+(A4 * B2)
C4 = (A2 * B3)+(A4 * B4)

So now let’s do it with row-major canonical:


A:
[ 1 2 ]
[ 3 4 ]
B:
[ 1 2 ]
[ 3 4 ]

Laid out in array-form:

C = A * B

C1 = (A1 * B1)+(A2 * B3)
C2 = (A3 * B1)+(A4 * B3)
C3 = (A1 * B2)+(A2 * B4)
C4 = (A3 * B2)+(A4 * B4)

Now, let's look at D = B * A:

D1 = (B1 * A1)+(B2 * A3)
D2 = (B3 * A1)+(B4 * A3)
D3 = (B1 * A2)+(B2 * A4)
D4 = (B3 * A2)+(B4 * A4)

D from the row-major version is mathematically equivalent to C from the column-major version: each Di matches the corresponding Ci term for term. The same computations in the same sequence, leading to the same outputs.

I want to make sure I am visualizing your example properly. If we are dealing with column-major, then the C matrix should follow the layout of A and B and thus look like this:


[C1 C3]
[C2 C4]

But that doesn’t make sense given the calculations for C1…C4. The formulas you’ve written result in:


[C1 C2]
[C3 C4]

Why did you have C = A * B provide C with a different layout compared to A and B? Shouldn’t your C calculations be transposed?

This is why it’s best to use a proper matrix library and not have to bother with the details. It’s easy to get things reversed by accident.

Fortunately, I did the computations backwards in all 3 cases consistently. So it ultimately comes out to the same answer: that column-major AB uses the same math as row-major BA. So the implementation of operator* doesn’t care about major ordering.

But for the sake of having the correct reasoning:

Column-major canonical:


A:
[ 1 3 ]
[ 2 4 ]
B:
[ 1 3 ]
[ 2 4 ]
 
Laid out in array-form:
 
C = A * B
 
C1 = (A1 * B1)+(A3 * B2)
C2 = (A2 * B1)+(A4 * B2)
C3 = (A1 * B3)+(A3 * B4)
C4 = (A2 * B3)+(A4 * B4)

Row-major canonical:


A:
[ 1 2 ]
[ 3 4 ]
B:
[ 1 2 ]
[ 3 4 ]
 
Laid out in array-form:
 
C = A * B
 
C1 = (A1 * B1)+(A2 * B3)
C2 = (A1 * B2)+(A2 * B4)
C3 = (A3 * B1)+(A4 * B3)
C4 = (A3 * B2)+(A4 * B4)
 
Now, let's look at D = B * A:
 
D1 = (B1 * A1)+(B2 * A3)
D2 = (B1 * A2)+(B2 * A4)
D3 = (B3 * A1)+(B4 * A3)
D4 = (B3 * A2)+(B4 * A4)