Another OpenGL (and Direct3D) matrices thread

Can someone help me out with “column/row-major” matrices? It seems like no one who comments on the subject can get the facts straight. Either way, I still can’t figure this stuff out.

I would appreciate clarification on the following points I’ve stumbled across, and ultimately I’d like to get this (http://svn.swordofmoonlight.net/code/Somplayer.dll/Somvector.h) header straightened out. I think it is currently row-major, but I’ve read stuff that would call it otherwise. I thought I was making it column-major (as the opening comments suggest).

A) I always thought row-major and column-major were laid out differently in memory, but I keep reading that they aren’t. If they aren’t, I don’t understand how a transpose could avoid moving the homogeneous column/row to a different place in memory. So why would you ever transpose to convert, if the layouts are the same? Is it just for going between a third convention not used by any major hardware API?

B) Is this purely a multiplication ordering difference? If so why is there so much discussion on the topic online?

C) I’ve read that people don’t like GLSL because they like to pass matrices as 3 vectors, where I assumed the position components are in the w column. But if so, that doesn’t seem to line up with Direct3D or OpenGL according to my present understanding. Does HLSL not line up with either? If so, I’ve never noticed, so the conversion must be automatic.

In conclusion, I think I understand that everyone is converging on OpenGL’s conventions. If so, should I just find/replace pre/postmultiply and swap the parameters?

Thanks :doh:

EDITED: incidentally, I think a lot of my confusion probably comes from assuming Direct3D uses a different memory layout. I have used libraries in the past that use a transposed memory layout, and I think I assumed they did so because Direct3D is popular and they were using a D3D layout. But now I am guessing those libraries were using a third layout, maybe for SIMD or for C (language) reasons like the ones above.

I really would like to figure out which convention conceptualizes the vector as a row, as I think I would prefer to just stick with whatever is more C (language) friendly, since neither seems to follow the textbooks.

Really, I can’t explain it any better than I did here. To summarize the long post:

1: Neither column-major ordering nor row-major ordering is more “C (language) friendly” than the other. Indeed, the whole “major ordering” thing is only about storing a matrix in an array of floats. You use an array in both cases; the only question is where array index I is in terms of the X,Y positions in the matrix.

2: There are two aspects at play: the ordering of the elements in an array (ie: column vs. row-major), and the orientation of the matrix (canonical math style puts the translation on the right, transposed puts it on the bottom). People often call them “column matrices” or “row matrices”, based on where the basis vectors for the space are stored.

People always get these confused with major ordering. You can have column-major/canonical, row-major/canonical, column-major/transposed, and row-major/transposed.

3: Major-ordering does not change the mathematics of matrix multiplication. The same code you use to multiply two column-major/canonical matrices can be used for two row-major/canonical matrices.

4: Matrix operations only make sense between matrices that use the same ordering and orientation. Mathematically, you can do the operation, but geometrically, the operation is nonsense.

5: OpenGL uses column-major/canonical. D3D uses row-major/canonical.

6: Column-major/canonical is identical to row-major/transposed in terms of the order of elements. The same goes for row-major/canonical and column-major/transposed. (Note: this is generally why people get them confused, because the layout in memory doesn’t change from one conceptual interpretation to the other.)

This is how passing row-major matrices works with the “transpose” parameter of glUniformMatrix. You pass a row-major/canonical matrix, which the driver transposes, turning it into a row-major/transposed matrix. This is identical to a column-major/canonical matrix.

ultimately I’d like to get this (http://svn.swordofmoonlight.net/code...ll/Somvector.h) header straightened out.

You couldn’t straighten that header out with a steam-roller. It casts an object of non-POD class type into a 2D array of scalars, which (unless you’re using a C++11 compiler) is very much not a legal operation. Because that was a lot easier than just defining a 2D array as a member of the type.

I would suggest avoiding it like the plague, just on the grounds of it being really terrible code.

Major-ordering usually refers to the storage of a 2D construct like a matrix in a 1D array of values. This matrix class uses a 2D array (or if it uses a 1D array to allocate the memory, it’s certainly not evident here). So the question is really which index is the columns and which is the rows.

Again, it’s impossible to tell due to lacking any information about the orientation of the matrix. However, assuming a canonical orientation, then the first index is definitely the column. I know this because of how [var]frustum[/var] puts the translational components in the 3rd component of the first index. So the first index is either the column of a canonical orientation, or a row of a transposed orientation.

Really, if you’re going to use a non-SSE-aware matrix library, just use GLM. And even they have SSE-based vectors and matrices. So you’re not gaining anything by using random garbage code you found online.

In as few words as possible:

column-major = columns stored together
row-major = rows stored together

operator-on-the-left math order = PVM * v[SUB]obj[/SUB] = v[SUB]clip[/SUB]
operator-on-the-right math order = v[SUB]obj[/SUB] * MVP = v[SUB]clip[/SUB]

OpenGL is column-major and uses operator-on-the-left math order.
C/C++ multidimensional arrays are row-major.

Just use row-major operator-on-the-right in C++, then you can pass the matrices directly into OpenGL without any transposing required. These two reversals cancel out, and you get the same values in sequential memory locations as with OpenGL’s column-major operator-on-the-left.

P.S. Re D3D, various net refs indicate it is row-major operator-on-the-right.

[QUOTE=Alfonse Reinheart;1243999]You couldn’t straighten that header out with a steam-roller. It casts an object of non-POD class type into a 2D array of scalars, which (unless you’re using a C++11 compiler) is very much not a legal operation. Because that was a lot easier than just defining a 2D array as a member of the type.

I would suggest avoiding it like the plague, just on the grounds of it being really terrible code.[/QUOTE]

First off, this is my code, programmed from scratch not many months ago, if that’s not obvious. It works fine with MSVC. But if something is only legal in C++11 (I can’t see why myself; why would 2D be different?) then that’s fine, as GCC would be the only other target. It’s a more flexible vector/matrix library than I’ve ever come across or programmed myself. It’s zero-sized, so if you want you can derive from it and name the components whatever you need, since the superclass is generic.

I don’t understand myself why this kind of question amounts to so much text in response. I’ve been literally trying to sort this stuff out for years :slight_smile:

column-major = columns stored together
row-major = rows stored together

That was my thinking. But it seems like if Direct3D and OpenGL use identical layouts, then this cannot be true. I mean, basically, in C with a 2D array, each row is a vector. The last row is the position, and the first three are the basis vectors. Is this wrong? I always thought that was how OpenGL worked, and that Direct3D must do it by putting all x components in the first row and so on, but that’s NOT how Direct3D works. And I’ve seen evidence of that over the years but just wasn’t thinking about OpenGL at the time. Personally, I can never remember which order is which and just have to do trial and error whenever I have to work with matrices for a week. But I think my notions are beginning to solidify…

Still I find all of the online discussion off base. Almost as if there is an attempt to confuse the readers :slight_smile:

Anyway. If anyone is willing to humor me: can we at least agree that both libraries store a vector in each C row, where a row is indexed by the first subscript when declaring and accessing the elements of a 2D array (and the columns of the rows by the second subscript)? If so, then storage-wise there is no difference, so the naming seems like a misnomer. I think I’ve read before that it’s just a documentation/functional difference. I think the OP makes that clear.

PS: I will probably double post after I can digest everything posted so far. I just had to set the record straight on the code being knocked around above. Thanks.

EDITED: It could be that OpenGL uses the other layout. But I have two vector libraries I used to use with early OpenGL without any problems that definitely stored the position in the last 2D C array row and so on.

It’s a more flexible vector/matrix library than I’ve ever come across or programmed myself. It’s zero-sized, so if you want you can derive from it and name the components whatever you need, since the superclass is generic.

I don’t really see how it’s particularly “flexible” though. It doesn’t define the storage, but it does require a derived class to do so (while also requiring that derived class not to use virtual functions, lest it break C++11’s standard-layout rules). And it requires the derived class to define the storage in a very specific way, such that it is layout-compatible with a 2D array of scalars.

Besides the fact that it can handle matrices bigger than 4x4, I don’t see what’s more flexible or generic about this class than GLM’s tmatNxM<T> classes.

Can we at least agree that both libraries store a vector in each C row, where a row is indexed by the first subscript when declaring and accessing the elements of a 2D array (and the columns of the rows by the second subscript)?

Define “a vector.” Technically, every matrix column or row is “a vector.” If you’re talking about a basis vector, OpenGL’s vector functions will assume that every X values of the given 1D array (where X is the matrix row count) is a column. Therefore, if every X values of the array is a basis vector, then you are providing OpenGL column-major, canonical ordered matrices. So every “C row” by your definition (who says the first index in C is a row?) is a column and therefore a basis vector.

From what I can tell from this page, D3D appears to use row-major, transposed matrices. So it has the same effective memory layout, but only because they reverse both of the conventions.

[QUOTE=michagl;1244084]

That was my thinking. But it seems like if Direct3D and OpenGL use identical layouts then this cannot be true.[/QUOTE]

As both Alfonse and I have said, column-major/operator-on-the-left (OpenGL) and row-major/operator-on-the-right (Direct3D) result in the exact same values in the exact same memory locations.

Jot down an example with a 2x2 matrix and prove it to yourself.

@Dark Photon^,

I agree (unless I am wrong) as I had hoped the OP would make clear. I was kind of just looking for a Yes/No down the list of my A) B) C) points in the OP.

But I am also pretty shaky on what layouts GLSL and HLSL shaders are using.

@Alfonse: the C++ docs in MSVC specify that the first subscript in a 2D array is a row, and each element of the row is a column. If you try to specify an array like a 2D image of pixels, where the scanlines are back to back in memory, and put the horizontal resolution in the first subscript, things won’t work out correctly. So you specify the 2D array with the height in the first subscript and the width in the second.

I’ve always worked with OpenGL where the columns are the elements of 3 basis vectors and 1 point vector. So if you say each C row is a column, then that is not C++ friendly, plus it doesn’t explain how the point vector can be in contiguous memory. I derived most of that source code from a graduate-level linear algebra book, but I don’t understand the material well enough to say whether things would just work with the basis vectors transposed in memory. I am pretty sure the point vector could not be transposed too, though, unless it were unpacked across the 4th column. Many libraries use that layout, perhaps shaders do, SSE does I believe, but I am pretty sure OpenGL does not. Still, I wouldn’t bet the farm, just because I’ve been wrong about such things enough times in my life and am not so neurotic about being right.

Offtopic: as for the vector library. It’s all defined within 1 class, rather than a separate class for each dimension, which makes it highly maintainable and consistent. It allows access to sub-ranges. It easily supports switching between homogeneous and non-homogeneous logic. The templates catch any funny business at compile time. It safely maps to aggregate arrays and is self-documenting in a good way. And I have a virtual class that derives from it (Sominstant.h); MSVC sticks the base class behind the vtable just fine (edited: technically the virtual class derives from a union-like class that derives from the matrix class).

It’s a little non-idiomatic, especially with respect to being zero-sized, but IMO it’s good to try new things all the time with C++. If everyone sticks with some guru’s way of doing things, that’s a dead end in terms of diversity and leads to rigidity in thinking. I would never let STL out of a definition file, and am currently interested in C++ patterns that look more like web development but with C++’s statically typed, compile-time-optimization bent. I’ve always stuck to highly literate and long-lived, maintenance-centric design. I find every C++ project tends to work better with a coding style tailored to the scope and goals of the project. Anyway, this vector library was a pleasant surprise; I was surprised everything was able to come together. It works, and about the only real critique you can level at it is that it may not optimize well, but I don’t believe in second-guessing compilers, and can live with a poorly optimizing compiler if necessary either way.

PS: Full disclosure, I had just about already decided before posting to make the gist of the header follow Direct3D conventions once I determined the vectors were still packed together. Thankfully it is more or less already following Direct3D conventions; just the projection matrix needs changing, I think. That’s even though I totally swapped it around (from what was probably OpenGL style; because I wanted OpenGL style!) not long ago, because I read some stuff in a forum which I am now nearly positive was bad information, though I could not find anything to corroborate it.

That said it seems like there is a lot of bad information on this subject online. I wish all of the threads like this could be wished away and there could just be one concise and plainly worded article with diagrams for reference somewhere authoritative :slight_smile:

Either way I will try to write up such an explanation here after I get everything sorted out myself.

But I am also pretty shaky on what layouts GLSL and HLSL shaders are using.

What do you mean by that? Dark Photon just gave you the answer.

Internally, the shader can do whatever it wants. But the code to upload the matrices (ie: glUniformMatrix) by default uses column-major canonical ordering. Which is layout-identical to D3D’s row-major transposed. So I’m not sure what you’re asking, since they’re uploading the exact same data.

If you try to specify an array like a 2D image of pixels, where the scanlines are back to back in memory, and put the horizontal resolution in the first subscript, things won’t work out correctly.

Of course that won’t work out correctly; scanline memory order defines a major ordering. It builds it into the definition: each horizontal scanline is in contiguous memory. But there’s no rule that says you have to have horizontal scanlines be contiguous; you could have vertical lines be contiguous instead.

The preference is based on the fact that software and hardware are all designed to expect horizontal contiguity. And that’s primarily because CRTs work on horizontal scanlines. It’s a convention imposed by APIs and file formats; there’s nothing in C or C++ that forces the first subscript to be considered the rows or the columns. It’s all down to convention.

I’ve always worked with OpenGL where the columns are the elements of 3 basis vectors and 1 point vector. So if you say each C row is a column, then that is not C++ friendly, plus it doesn’t explain how the point vector can be in contiguous memory.

Your definition of “C++ friendly” is based on the assumption that the contiguous elements of a 2D array are the rows. This is not required by C or C++. And it is this assumption that prevents you from understanding why the basis vectors are contiguous.

Column-major canonical order says that the contiguous elements are columns. OpenGL uses column-major canonical order. Therefore, the columns of the matrices are in contiguous memory. And since the columns are the basis vectors in a canonical matrix, the basis vectors are in contiguous memory.

Therefore, your “C row” is interpreted by OpenGL as a column. Because that’s a perfectly legitimate interpretation. Just as it is perfectly legitimate to interpret a “C row” as a row of a transposed matrix.

I wish all of the threads like this could be wished away and there could just be one concise and plainly worded article with diagrams for reference somewhere authoritative

There is one. I posted a link to it. It even has diagrams and everything.

Hi again,

I apologize that I can’t post with more frequency.

I finally found a moment to dig into Alfonse’s initial post minus the bit that skewers my vector library :tired:

Just wanted to try to get some clarity…

5/6) Did we find out that D3D uses row-major/transposed instead of canonical?

If glUniformMatrix converts to row-major/transposed, does that mean that the shader registers (and GLSL?) work with row-major transposed instead of column-major?

Again, it’s impossible to tell due to lacking any information about the orientation of the matrix. However, assuming a canonical orientation, then the first index is definitely the column. I know this because of how frustum puts the translational components in the 3rd component of the first index. So the first index is either the column of a canonical orientation, or a row of a transposed orientation.

By 3rd do you mean 4th, but with 0-based instead of 1-based numbering? I am pretty sure the basis and point vectors are contiguous in memory with both APIs. The way I am reading this, it sounds like you are suggesting OpenGL matrices are stored xxxxyyyyzzzzwwww (where x is the components) in memory. That’s not been my experience. Can someone confirm/deny this?

EDITED: I’m definitely going to read over this (mathematics - Can someone explain the (reasons for the) implications of colum vs row major in multiplication/concatenation? - Game Development Stack Exchange) ASAP. I guess it’s the link from the quoted post

5/6) Did we find out that D3D uses row-major/transposed instead of canonical?

Yes we did. That was an error on my part.

If glUniformMatrix converts to row-major/transposed, does that mean that the shader registers (and GLSL?) work with row-major transposed instead of column-major?

Read the entire paragraph. When you pass GL_TRUE to transpose, that means you’re passing in row-major matrices. You pass a row-major/canonical matrix. It transposes it, thus converting the array into row-major/transposed. And since that’s identical to column-major/canonical, it can now upload that data to the actual shader as if you had passed column-major/canonical.

It means nothing about what the “shader registers” do. It’s all about what the specification says. It says that you are to pass column-major/canonical matrices, but you can pass row-major/canonical if you set transpose to GL_TRUE. If you follow this rule, OpenGL will interpret those matrices correctly, and matrix operations in the shader will operate as expected.

The way I am reading this it sounds like you are suggesting OpenGL matrices are stored xxxxyyyyzzzzwwww (where x is the components) in memory. That’s not been my experience. Can someone confirm/deny this?

Dark Photon and I have been telling you this for five posts now. The OpenGL 4.3 compatibility profile, folio page 432, very clearly and explicitly lays out what ordering it expects for the array of floats you pass in to MultMatrix and LoadMatrix. It has a diagram and everything. GLM uses this same ordering, which you can verify by looking at the output of type_ptr. You can create matrices in a compatibility profile and fetch the array of data to verify this.

If your experience has been something else, then all I can say is that you have dramatically misinterpreted your experience.

OK, but you said it converts to “row-major transposed”, so that sounds like the target layout is “row-major transposed”. In shaders, as you well know, the registers are 4D and a 4x4 matrix is 4 of them back to back, so you can plainly talk about them as memory, just as you would expect to find in a hex editor.

Dark Photon and I have been telling you this for five posts now. The OpenGL 4.3 compatibility profile, folio page 432, very clearly and explicitly lays out what ordering it expects for the array of floats you pass in to MultMatrix and LoadMatrix. It has a diagram and everything. GLM uses this same ordering, which you can verify by looking at the output of type_ptr. You can create matrices in a compatibility profile and fetch the array of data to verify this.

Can you link to that? A search turns up www.opengl.org/registry/doc/glspec43.core.20120806.pdf but I’m not finding anything on 432. Nvm; Bing does a better job with this query. Shouldn’t have used Firefox’s default anyway :doh:

Will dig into this later:whistle:

If your experience has been something else, then all I can say is that you have dramatically misinterpreted your experience.

Well, if the memory layout is indeed xxxxyyyyzzzzwwww in memory, then I admit that I have a reluctance to accept that, because I worked exclusively with OpenGL for years and never fed matrices that looked like that to glLoadMatrix. But now it seems like you are gaslighting me, because I thought we agreed (like, for 5 posts) that the memory layout is identical to D3D (aside from what is a row and what is a column when summing the dot products), and D3D most definitely does not work that way.

If I am to read your assertion correctly then there would have to be a way to pass an alternate layout to glLoadMatrix and still wind up with visually correct results.

EDITED: I did some searching through old files and I can confirm now that glMultMatrixf with an xyzwxyzwxyzwxyzw memory layout will work. If we are disagreeing on that point, then let it be known that it will work.

C Ex. struct{ float u[4], v[4], n[4], o[4]; }; //o holds translation in o[0], o[1], o[2]

Though I suppose its possible the matrices are pre-transposed… will take more digging.

EDITED: Nothing super conclusive, but I did turn up this:

        xform[12] = -eye.dot(x.u);
        xform[13] = -eye.dot(x.v);
        xform[14] = -eye.dot(x.n);
                    
        xform.transpose3x3();

        glMatrixMode(GL_MODELVIEW);

        glLoadMatrixf(xform);    

The transpose I’m assuming is just part of an inversion trick. Clearly 12, 13, and 14 are loaded with translation values and sent directly to glLoadMatrix. Is that weird? Or are we in agreement but speaking past each other???

EDITED: checked to be sure the operator was not overloaded.

^I suppose in retrospect it really doesn’t matter with glLoadMatrix as long as there is consistency :doh:

That may be the crux of my memories:o

Seems like OpenGL would’ve been better off to just omit glFrustum (and what else if anything?) if that is indeed the rub.

If so…

operator-on-the-left math order = PVM * v[SUB]obj[/SUB] = v[SUB]clip[/SUB]
operator-on-the-right math order = v[SUB]obj[/SUB] * MVP = v[SUB]clip[/SUB]

Is operator-on-the-right notationally backwards? Because to me it definitely makes more sense for projection to be the last step, and that order tends to be more readable.

OK, but you said it converts to “row-major transposed”, so that sounds like the target layout is “row-major transposed”. In shaders, as you well know, the registers are 4D and a 4x4 matrix is 4 of them back to back, so you can plainly talk about them as memory, just as you would expect to find in a hex editor.

You don’t seem to be understanding how this whole “specification” thing works.

The OpenGL specification defines apparent behavior. The glUniformMatrix* commands say, “If you provide matrices in column-major/canonical ordering, then I will upload them to matrix uniform storage in a way that will work in accord with GLSL. If you provide matrices in row-major/canonical ordering and set the transpose field to GL_TRUE, then I will upload them to matrix uniform storage in a way that will work in accord with GLSL.”

That is all the OpenGL specification says. If a particular GLSL implementation wants to use row-major/canonical ordering in the shader, then it will forcibly transpose everything you give it before uploading (effectively inverting the transpose field). But this is all transparent to you, because GLSL requires mat4x4[0] to be the first column vector. So that implementation will have to build a column vector from 4 row vectors and return it when you take a mat4x4 and index it.

This is all an implementation detail that the user is not privy to. It is not possible to say what “you would expect to find in a hexeditor.”

Now yes, odds are good that the GLSL implementations will store the basis vectors as the vectors of the matrix. Odds are good that mat4x4[0] will not generate a column vector from 4 row vectors, but will just return a column vector because that’s what the matrix stores. But you don’t know that it will. Nor does OpenGL or GLSL provide any means to be certain that it will.

I did some searching through old files and I can confirm now that glMultMatrixf with an xyzwxyzwxyzwxyzw memory layout will work. If we are disagreeing on that point, then let it be known that it will work.

I see no evidence of this. How do you define “work”? What values are you using? What are you rendering, what do you see before, and what do you see after?

And most important of all, why do you insist on using confusing constructs like [var]struct{ float u[4], v[4], n[4], o[4]; };[/var] instead of a simple [var]float[16];[/var] like everyone else? I don’t know what [var]xform[/var] is, so I can’t say what your supposed code is even doing.

If you’re going to test the ordering of something, use a 16-element floating point array, with no adornment from any matrix library or other things. Feed values into the array directly and pass it to OpenGL. Post the entire code example, top to bottom. And use code tags when you do.

Or better yet, why don’t you call [var]glTranslate[/var] and then use [var]glGetFloatv[/var] to pull the matrix you just generated out of OpenGL and let it tell you what the order is.

Clearly 12, 13, and 14 are loaded with translation values and sent directly to glLoadMatrix. Is that weird? Or are we in agreement but speaking past each other???

Is what weird? The last four values should be the last column. You know, as in column-major/canonical ordering. Or row-major/transposed ordering. Or “xxxxyyyyzzzzwwww (where x is the components) in memory”. Or any other way of saying that your “it will work” from before makes no sense.

^I think this kind of nonsense speak may be why these kinds of threads are always so stupefying. I’m afraid I am going to have to give up at this point.

Some of this is flagrantly unhelpful. So I won’t gum up the thread anymore. Maybe Dark Photon can clarify xxxxyyyyzzzzwwww vs xyzwxyzwxyzwxyzw with respect to OpenGL, since we can’t seem to agree that memory addresses, or C structs for that matter, are linear in nature :doh:

Why not call glTranslate? Because it would be more work to set up a test application than it’s worth right now. Besides, I do have a few resources to gawk at to supplant my chatting in forums now. I will get it straightened out one way or another.

My takeaway (from the words here alone) is that there is a 60% possibility I had OpenGL all wrong when posting the OP, and 40% that I’m just being f’ed with for someone’s dark amusement.

But I do now remember at some point in my youth deciding to flip the indices all around in the matrix library I was using. I think there is a good chance I decided then to buck the OpenGL way. And that may well not be compatible with glTranslate and friends as I would not likely have been using them for anything 3D anyway much less mixing them with glLoadMatrix. I know SSE uses the xxxxyyyyzzzzwwww layout so odds are good I reckon that OpenGL does as well.

EDITED: Maybe SSE swings both ways (fhtr: 4x4 float matrix multiplication using SSE intrinsics) but I remember an Intel presentation that explained the way to transform vertices en masse was to store them in memory so that the x components were all back to back and so on… I don’t know if that is what the “streaming” bit means or not. I can’t remember what application (optimization) I was trying to find for SSE, but whatever it was it didn’t pan out.

The last time I put down regular 3D programming “Longs Peak” was what everyone was talking about. I’m relieved to learn this afternoon that that didn’t pan out… at least because it means I can jump back into desktop OpenGL without having to research into anything too new:cool: