Converting a DirectX Matrix to OpenGL. Not possible?

Hi,

I had an engine up and running in Managed DirectX that, using DirectX's matrix classes, had rotate, move, and translate functionality. I've started a new engine in OpenGL and wish to retain this functionality, but doing so is proving… impossible.

I've tried copying the matrix into a float array and loading it with glLoadMatrixf. I've tried flipping the matrix and reversing the order, but I always get a black screen.
When I simply used glLoadIdentity, glRotatef(0, 1, 0, 0), etc. to put the camera in the same place, it worked: a white screen, since the camera was inside a cube.

Here's the C# code I use to convert matrices. Can you see where I'm going wrong here?

public void SyncGL2(Matrix mat)
        {
            vmat[0] = mat.M41;
            vmat[1] = mat.M31;
            vmat[2] = mat.M21;
            vmat[3] = mat.M11;
            vmat[4] = mat.M42;
            vmat[5] = mat.M32;
            vmat[6] = mat.M22;
            vmat[7] = mat.M12;
            vmat[8] = mat.M43;
            vmat[9] = mat.M33;
            vmat[10] = mat.M23;
            vmat[11] = mat.M13;
            vmat[12] = mat.M44;
            vmat[13] = mat.M34;
            vmat[14] = mat.M24;
            vmat[15] = mat.M14;
        }

mat is a DirectX Matrix object.

Here’s the rendering code I use.

Matrix mat = Matrix.Invert(world);
SyncGL2(mat);
Gl.glLoadMatrixf(vmat);
entity.SyncGL2(entity.WorldMatrix());
Gl.glMultMatrixf(entity.GetGLMat());

entity.RenderNative();

In English: I first load the converted camera matrix into GL, then convert the object's world matrix and multiply it onto the current (camera) matrix.
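In matrix terms (using OpenGL's column-vector convention), that sequence should compose

    ModelView = World_{camera}^{-1} \cdot World_{entity}

i.e. the camera's world matrix is inverted to get a view matrix, and glMultMatrixf then post-multiplies the entity's world matrix onto it.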

I'm working on an app that does the opposite right now: it takes an OpenGL-style matrix and creates a D3D9 matrix from it. I'm using C++ and was able to copy my array directly into the D3DMATRIX.m array, and it appeared to work like it should. Not sure if the Managed C# wrapper exposes that or not, but this doesn't seem quite right to me:

public void SyncGL2(Matrix mat)
        {
            vmat[0] = mat.M41;
            vmat[1] = mat.M31;
            vmat[2] = mat.M21;
            vmat[3] = mat.M11;
            vmat[4] = mat.M42;
            vmat[5] = mat.M32;
            vmat[6] = mat.M22;
            vmat[7] = mat.M12;
            vmat[8] = mat.M43;
            vmat[9] = mat.M33;
            vmat[10] = mat.M23;
            vmat[11] = mat.M13;
            vmat[12] = mat.M44;
            vmat[13] = mat.M34;
            vmat[14] = mat.M24;
            vmat[15] = mat.M14;
        }

It seems that it should be more like so:

public void SyncGL2(Matrix mat)
        {
            vmat[0] = mat.M11;
            vmat[1] = mat.M12;
            vmat[2] = mat.M13;
            vmat[3] = mat.M14;
            vmat[4] = mat.M21;
            vmat[5] = mat.M22;
            vmat[6] = mat.M23;
            vmat[7] = mat.M24;
            vmat[8] = mat.M31;
            vmat[9] = mat.M32;
            vmat[10] = mat.M33;
            vmat[11] = mat.M34;
            vmat[12] = mat.M41;
            vmat[13] = mat.M42;
            vmat[14] = mat.M43;
            vmat[15] = mat.M44;
        }

DirectX uses row vectors and OpenGL uses column vectors, so to convert matrices between them you should transpose the matrix (swap rows with columns).

You should try the following:

public void SyncGL2(Matrix mat)
{
    vmat[0] = mat.M11;
    vmat[1] = mat.M21;
    vmat[2] = mat.M31;
    vmat[3] = mat.M41;
    vmat[4] = mat.M12;
    vmat[5] = mat.M22;
    vmat[6] = mat.M32;
    vmat[7] = mat.M42;
    vmat[8] = mat.M13;
    vmat[9] = mat.M23;
    vmat[10] = mat.M33;
    vmat[11] = mat.M43;
    vmat[12] = mat.M14;
    vmat[13] = mat.M24;
    vmat[14] = mat.M34;
    vmat[15] = mat.M44;
}
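For reference, the reasoning behind the transpose suggestion, spelled out: Direct3D's documentation treats vertices as row vectors while OpenGL treats them as column vectors, so the same transform written on paper satisfies

    v' = v \, M_{D3D}    (row vector, Direct3D)
    v' = M_{GL} \, v     (column vector, OpenGL)
    \Rightarrow M_{GL} = M_{D3D}^{T}

Whether that transpose also has to be applied to the 16-float array in memory is a separate question, which the following posts dig into.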

You sure about that? I added the following functions to an app I’m working on that uses both D3D and OpenGL

    void DumpMatrix(float *pfv)
    {
        for (int c=0;c<16;c++)
        {
            std::cout << pfv[c] << "\t";
            if (c % 4 == 3)
                std::cout << std::endl;
        }
    }

    void D3DMatrixTest()
    {
        D3DXMATRIX mx;

        D3DXMatrixTranslation(&mx, 10, 20, 30);

        DumpMatrix(&mx.m[0][0]);
    }

    void GLMatrixTest()
    {
        float fv[16];

        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();
        glTranslatef(10, 20, 30);
        glGetFloatv(GL_MODELVIEW_MATRIX, fv);
        DumpMatrix(fv);
        glPopMatrix();
    }

    void TestFunction()
    {
        std::cout << "D3D Matrix" << std::endl;
        D3DMatrixTest();
        std::cout << std::endl << "GL Matrix" << std::endl;
        GLMatrixTest();

    }

Upon calling TestFunction, I get the following results:

D3D Matrix
1       0       0       0
0       1       0       0
0       0       1       0
10      20      30      1

GL Matrix
1       0       0       0
0       1       0       0
0       0       1       0
10      20      30      1

For this app I’ve been using my own custom Matrix class and I haven’t had to do any transposing of it when I send it to D3D or OpenGL, and I get the proper transformations with each.

:wink:

I really know nothing about DX. You're surely right, Deiussum, but maybe the difference comes from the D3DMATRIX structure itself: maybe the array is ordered like GL, but the named members of the struct (M11, M42, …) aren't laid out the same way. It's an ugly check, but to find out, just output the struct members and see.

I did actually take a look at the struct before posting my first suggestion. It is:

typedef struct _D3DMATRIX {
    union {
        struct {
            float        _11, _12, _13, _14;
            float        _21, _22, _23, _24;
            float        _31, _32, _33, _34;
            float        _41, _42, _43, _44;

        };
        float m[4][4];
    };
} D3DMATRIX;

So my example above should be reading the struct float by float as it is laid out in memory, which should produce the right result. Unless the C# wrapper messes with the order, but that doesn't seem too likely.

D3D Matrix
1       0       0       0
0       1       0       0
0       0       1       0
10      20      30      1

The correct GL translation matrix is

1       0       0       10
0       1       0       20
0       0       1       30
0       0       0       1

See Appendix F of the OpenGL Red Book.
The first three elements of the last row define the perspective terms in GL.

The Red Book is showing the matrices the way they are usually written in linear algebra books, not the way they are laid out in memory. Look closely at the OpenGL portion of the code I posted above, and try it yourself.

That's from the Red Book:

If you want to specify explicitly a particular matrix to be loaded as the current matrix, use
glLoadMatrix*(). Similarly, use glMultMatrix*() to multiply the current matrix by the matrix passed in
as an argument. The argument for both these commands is a vector of sixteen values (m1, m2, … , m16)
that specifies a matrix M as follows:

m1 m5 m9 m13
m2 m6 m10 m14
m3 m7 m11 m15
m4 m8 m12 m16

Exactly, and that is laid out in memory as

The two basic commands for affecting the current matrix are
void LoadMatrix{fd}( T m[16] );
void MultMatrix{fd}( T m[16] );
LoadMatrix takes a pointer to a 4 × 4 matrix stored in column-major order as 16
consecutive floating-point values, i.e. as
a1 a5 a9  a13
a2 a6 a10 a14
a3 a7 a11 a15
a4 a8 a12 a16

(This differs from the standard row-major C ordering for matrix elements. If the
standard ordering is used, all of the subsequent transformation equations are
transposed, and the columns representing vectors become rows.)

Again, read my code. I am essentially dumping it as

pfv[0]  pfv[1]  pfv[2]  pfv[3]
pfv[4]  pfv[5]  pfv[6]  pfv[7]
pfv[8]  pfv[9]  pfv[10] pfv[11]
pfv[12] pfv[13] pfv[14] pfv[15]

And it was EXACTLY THE SAME for both OpenGL AND Direct3D…
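Putting the two points together for the original C# question: on paper the two APIs' matrices are transposes of each other (row vectors vs. column vectors), but Direct3D stores its matrix row-major while glLoadMatrixf reads column-major, and the two transpositions cancel, which is why the dump above prints identical numbers for both. Assuming the Managed DirectX Matrix fields are laid out like the native D3DMATRIX (M11 first, M44 last), the straight field-order copy suggested earlier should be all the conversion needed. A minimal C# sketch of the same check as the C++ test above (Matrix.Translation and the field names are from Managed DirectX; everything else is illustrative):

    using System;
    using Microsoft.DirectX;   // Managed DirectX Matrix, as in the original post

    class MatrixLayoutCheck
    {
        // Straight field-order copy, no transpose.
        static float[] ToGL(Matrix mat)
        {
            return new float[]
            {
                mat.M11, mat.M12, mat.M13, mat.M14,
                mat.M21, mat.M22, mat.M23, mat.M24,
                mat.M31, mat.M32, mat.M33, mat.M34,
                mat.M41, mat.M42, mat.M43, mat.M44,
            };
        }

        static void Main()
        {
            float[] v = ToGL(Matrix.Translation(10, 20, 30));

            // Printed four per line, this matches the D3D and GL dumps above:
            // the translation lands in v[12], v[13], v[14], exactly where
            // glLoadMatrixf expects it (column-major, translation in the 4th column).
            for (int i = 0; i < 16; i += 4)
                Console.WriteLine("{0}\t{1}\t{2}\t{3}", v[i], v[i + 1], v[i + 2], v[i + 3]);
        }
    }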