I had an engine up and running in Managed DirectX that, using DirectX's matrix classes, had rotate, move, and translate functionality. I've started a new engine in OpenGL and want to retain this functionality, but doing so is proving…impossible.
I've tried copying the matrix into a float array and using glLoadMatrix to load it. I've tried transposing the matrix and reversing the multiplication order, but I always get a black screen.
When I simply used glLoadIdentity(), glRotatef(0, 1, 0, 0), etc. to put the camera in the same place, it worked: a white screen, since the camera was inside a cube.
Here's the C# code I use to convert the matrices. Can you see where I'm going wrong?
Matrix mat = Matrix.Invert(world);      // view matrix = inverse of the camera's world matrix
SyncGL2(mat);                           // copies mat into the float[16] array vmat
Gl.glLoadMatrixf(vmat);

entity.SyncGL2(entity.WorldMatrix());   // copies the entity's world matrix into its own float[16]
Gl.glMultMatrixf(entity.GetGLMat());
entity.RenderNative();
In English: I first load the converted (inverted) camera matrix into GL, then convert the object's world matrix and multiply it onto the current (camera) matrix.
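That load-then-multiply order matches how the fixed-function pipeline composes matrices: glMultMatrixf post-multiplies the current matrix, so the modelview becomes view * world and vertices are transformed as view * (world * v). A minimal sketch of that composition with plain column-major float[16] arrays (the helper name is mine, not GL's):

```cpp
// Multiply two 4x4 matrices stored column-major as 16 consecutive floats:
// out = a * b, mirroring how glMultMatrixf(b) post-multiplies the
// current matrix a.
void mul4x4(const float a[16], const float b[16], float out[16]) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[k * 4 + r] * b[c * 4 + k];
            out[c * 4 + r] = sum;
        }
}
```

With the view matrix loaded first and the world matrix multiplied on afterwards, the result applied to each vertex is exactly view * world * v, which is the intent described above.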
I'm working on an app that does the opposite right now: it takes an OpenGL-style matrix and creates a D3D9 matrix from it. I'm using C++ and was able to copy my array directly into the D3DMATRIX.m array, and it appeared to work as it should. I'm not sure whether the Managed C# wrapper exposes that array, but this doesn't seem quite right to me:
For this app I've been using my own custom Matrix class, and I haven't had to do any transposing when I send it to either D3D or OpenGL; I get the proper transformations with each.
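The direct copy works because the two conventions cancel out: D3D stores matrices row-major with row vectors (translation in _41.._43, memory offsets 12 to 14), while GL stores them column-major with column vectors (translation in m[12]..m[14]), so for a transform matrix the 16 floats land in exactly the same places. A sketch using a hypothetical stand-in for D3DMATRIX so no D3D headers are needed:

```cpp
#include <cstring>

// Hypothetical stand-in for D3DMATRIX: 16 floats, row-major storage,
// row-vector convention, so the translation lives in _41.._43.
struct DxMatrix {
    float _11, _12, _13, _14;
    float _21, _22, _23, _24;
    float _31, _32, _33, _34;
    float _41, _42, _43, _44;
};

// Direct byte copy into the float[16] that glLoadMatrixf expects.
// No transpose is needed: D3D's row-major/row-vector layout and GL's
// column-major/column-vector layout are identical in memory.
void dxToGl(const DxMatrix& dx, float gl[16]) {
    std::memcpy(gl, &dx, 16 * sizeof(float));
}
```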
I really know nothing about DX. You're surely right, Deiussum, but maybe the difference comes from the D3DMATRIX structure itself: maybe the underlying array is ordered like GL's, but the named members (M11, M42, …) aren't mapped the same way. It may be tedious, but to find out, simply print the struct members and compare.
So my example above should be reading the struct float by float as it is laid out in memory, which should produce the right result. Unless the C# wrapper messes with the order, but that doesn't seem likely.
The Red Book shows the matrices the way they are usually written in linear algebra books, not the way they are laid out in memory. Look closely at the OpenGL portion of the code I posted above, and try it yourself.
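To see the memory layout in action rather than on paper, you can apply a column-major float[16] to a point the way the pipeline treats the argument of glLoadMatrixf (column vectors, v' = M·v). This is a sketch, with the helper name my own:

```cpp
// Transform point p = (x, y, z, w) by a 4x4 matrix m stored column-major
// as 16 consecutive floats, the way the fixed-function pipeline applies
// the matrix passed to glLoadMatrixf: out = M * p, where
// M(row, col) = m[col*4 + row].
void transformPoint(const float m[16], const float p[4], float out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = m[0 + r]  * p[0] + m[4 + r]  * p[1]
               + m[8 + r]  * p[2] + m[12 + r] * p[3];
}
```

Feeding it a matrix with the translation at offsets 12 to 14 moves the point as expected, confirming that the textbook column-vector picture and the flat memory layout agree.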
If you want to specify explicitly a particular matrix to be loaded as the current matrix, use glLoadMatrix*(). Similarly, use glMultMatrix*() to multiply the current matrix by the matrix passed in as an argument. The argument for both these commands is a vector of sixteen values (m1, m2, …, m16) that specifies a matrix M as follows:

    m1 m5 m9  m13
    m2 m6 m10 m14
    m3 m7 m11 m15
    m4 m8 m12 m16
The two basic commands for affecting the current matrix are
void LoadMatrix{fd}( T m[16] );
void MultMatrix{fd}( T m[16] );
LoadMatrix takes a pointer to a 4 × 4 matrix stored in column-major order as 16
consecutive floating-point values, i.e. as
    a1 a5 a9  a13
    a2 a6 a10 a14
    a3 a7 a11 a15
    a4 a8 a12 a16
(This differs from the standard row-major C ordering for matrix elements. If the
standard ordering is used, all of the subsequent transformation equations are transposed,
and the columns representing vectors become rows.)
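The passage above boils down to one index formula: for a column-major array m, element M(row, col) is m[col*4 + row], and a matrix stored in standard row-major C order needs a transpose before being handed to LoadMatrix/MultMatrix. A small sketch of both (function names are mine):

```cpp
// Column-major access, as the GL spec describes:
// M(row, col) = m[col*4 + row].
float colMajorAt(const float m[16], int row, int col) {
    return m[col * 4 + row];
}

// If a matrix is stored in standard row-major C order instead,
// transpose it into column-major order before passing it to
// glLoadMatrixf/glMultMatrixf.
void transpose4x4(const float rowMajor[16], float colMajor[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            colMajor[c * 4 + r] = rowMajor[r * 4 + c];
}
```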
Again, read my code. I am essentially dumping it as