I'm currently porting an application from DirectX to OpenGL, and the application naturally uses the DirectX coordinate conventions: a left-handed world, a camera that looks down +z, and a projection that maps into the 0..1 z range.
As a lot of code already depends on these conventions, I need a minimally invasive solution in the OpenGL backend. So far, I can intercept all world/view/projection matrix-setting calls.
I've changed the projection matrix to a left-handed one that uses -1..1 as the depth range, and I've negated the z-axis of the view matrix. I also set the transpose flag (I don't quite understand why I need it, since I already transpose all matrices for DX: my runtime storage is transposed with respect to DX's layout). This gives me some transformation, but it seems to be way off. In addition, I mirror the world matrix at the YZ plane to get into a right-handed coordinate system.
My understanding is that changing the view and projection matrices should actually make no difference if I keep a view that looks down +z and a projection that maps to -1..1 (!): the world would simply be mirrored twice, and I would only have to flip the depth test. For some reason, though, I can't get this working.
What is the recommended way to translate from DX to OpenGL? I'd really like to do this as late as possible in the pipeline (ideally when setting the shader parameters), but I can't quite figure out what the minimum set of required changes is. If there's a DX-to-OpenGL porting guide somewhere, I'd be very interested in it.