Transpose

What is the aim of passing the transpose of a transformation matrix to OpenGL instead of passing it in its row-major order?
Does this have to do with performance when a matrix is stored or accessed column-major?
Thanks.

All is explained here: http://oss.sgi.com/projects/ogl-sample/registry/ARB/transpose_matrix.txt

Update: Oops, misread the question. I guess it could be for performance reasons. shrugs


Because you like to?
If you have code that isn't written for GL and its matrices are stored the other way around, you don't need to transpose them by hand all the time; you can just use this function (the driver can do it at least as fast as your own code)… see the sketch below.

DirectX uses the other convention too, I think…
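Here is a minimal sketch of that, assuming the ARB_transpose_matrix extension from the link above is available (it is core in OpenGL 1.3 as glLoadTransposeMatrixf; on some platforms you have to fetch the entry point through your extension loader). The helper name load_row_major_matrix is just mine for illustration:

#include <GL/gl.h>
#include <GL/glext.h>

/* If your own math code keeps matrices row-major (m[row*4 + col]),
 * the Transpose entry point accepts that layout directly, so no
 * manual reordering is needed before handing the matrix to GL. */
void load_row_major_matrix(const GLfloat m[16])
{
    glLoadTransposeMatrixfARB(m);
}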

I think I should rewrite my question to be clearer.
I looked at how Mesa3D handles matrices, since it is supposed to behave like OpenGL, and I found that glLoadMatrix(m) replaces the current top matrix with the one passed by the user.
When glMultMatrix(m) is called, the implementation takes the transpose of the current matrix, multiplies it by the supplied matrix, and transposes the result back.
What is the point of storing the matrices on the matrix stack transposed (the transpose of the actual transformation matrix)?
And if it is a performance issue, where does it come from?

OpenGL was long ago derived from SGI's proprietary graphics library, Irix GL.
In its docs, matrix operations were specified as operating on row vectors on the matrix's left.
OpenGL moved to documenting the math as column vectors on the matrix's right.
To make porting from Irix GL to OpenGL easier, OpenGL's in-memory matrix layout was specified in the reverse of your C intuition.
This way you could keep the old 'Irix GL matrix in memory' as is, use it in OpenGL, and the math in the spec would remain correct. (A matrix applied to a row vector on its left is equivalent to the transpose of that matrix applied to the same vector as a column on its right.)
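Here is a small standalone C example of that equivalence (the values are arbitrary, just for illustration): the same 16 floats in memory give the same result whether you read them row-major and multiply a row vector on the left, or read them column-major and multiply a column vector on the right.

#include <stdio.h>

/* The same 16 floats.  Read row-major (old Irix GL docs) this is a matrix A
 * with the translation in its fourth row; read column-major (OpenGL docs)
 * it is A transposed, with the translation in its fourth column. */
static const float m[16] = {
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    2.0f, 3.0f, 4.0f, 1.0f
};

int main(void)
{
    const float v[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
    float row_result[4], col_result[4];
    int i, j;

    /* Row-vector convention: result = v * A, with A read row-major. */
    for (j = 0; j < 4; ++j) {
        row_result[j] = 0.0f;
        for (i = 0; i < 4; ++i)
            row_result[j] += v[i] * m[i * 4 + j];
    }

    /* Column-vector convention: result = B * v, with B read column-major. */
    for (i = 0; i < 4; ++i) {
        col_result[i] = 0.0f;
        for (j = 0; j < 4; ++j)
            col_result[i] += m[j * 4 + i] * v[j];
    }

    /* Both print (3, 4, 5, 1): same memory, same answer, two conventions. */
    for (i = 0; i < 4; ++i)
        printf("%g  %g\n", row_result[i], col_result[i]);
    return 0;
}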

I don't remember exactly where I heard this (probably the NVIDIA spec for the transpose extension). According to NVIDIA, the OpenGL driver should be able to load a matrix and transpose it at the same time; it's all about the way you load it. The transpose should be almost free: a matrix is only 16 floats, so it packs nicely into a cache line (and then some), and transposing while sending to the card is a freebie.

Happy Coding.

All the driver has to do is get a pointer to your array (glLoadMatrix); it then "picks" the values one by one and sends them in the correct OpenGL order, so it costs nothing.
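Something like this sketch (not real driver code, just the idea): copying the 16 floats while swapping the row and column indices is the whole "transpose".

/* Copy a 4x4 matrix while swapping row/column order.  The transpose is
 * nothing more than a different read order during the copy. */
static void copy_transposed(float dst[16], const float src[16])
{
    int row, col;
    for (row = 0; row < 4; ++row)
        for (col = 0; col < 4; ++col)
            dst[col * 4 + row] = src[row * 4 + col];
}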

I believe documents about this row-major / column-major story can be found at sgi.com.
As I remember it, it had to do with the math, or… the engineers just liked it that way…

V-man

It's just two different standards, and you can't get rid of that so simply…

Like the direction of electric current… the well-known convention says it flows from the + pole to the -, but the electrons actually travel the other way, from - to +… so now we have two directions, the conventional one and the electron-flow one… lalala

Originally posted by WM:
What is the aim of passing the transpose of a transformation matrix to OpenGL instead of passing it in its row-major order?
Does this have to do with performance when a matrix is stored or accessed column-major?
Thanks.

In OpenGL, the order of matrix elements in memory differs from the standard order in C. The transpose is simply a convenient way to reorder the data for OpenGL. Don't think of it as a mathematical transpose; it really is just a reordering of the data in memory.
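For example, here is a translation matrix laid out the way glLoadMatrixf expects it (the wrapper function is made up, but the element order is the one the spec defines, with the translation at indices 12, 13 and 14):

#include <GL/gl.h>

/* Load a translation by (tx, ty, tz), equivalent to glLoadIdentity()
 * followed by glTranslatef(tx, ty, tz).  The array is in OpenGL's
 * column-major order, m[col*4 + row]; read with row-major C eyes the
 * translation looks like the bottom row, which is the apparent
 * "transpose". */
void load_translation(GLfloat tx, GLfloat ty, GLfloat tz)
{
    const GLfloat m[16] = {
        1.0f, 0.0f, 0.0f, 0.0f,   /* column 0 */
        0.0f, 1.0f, 0.0f, 0.0f,   /* column 1 */
        0.0f, 0.0f, 1.0f, 0.0f,   /* column 2 */
        tx,   ty,   tz,   1.0f    /* column 3: the translation */
    };
    glLoadMatrixf(m);
}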