Hi,
I’m trying to implement a cross-platform shadow map for DirectX 10 and OpenGL.
When I fetch the same depth texture under the same conditions, I get Z values close to 0.5f for objects near the camera with OpenGL, while they are close to 0.1f with the DirectX version. I have verified all the other computations: every result is identical except the value returned by the texture lookup. (Of course, I’ve verified the computed UV is correct, since the y coordinate of a render target is flipped between Dx10 and OGL.)
For information, my Z texture is a 32-bit one; I’ve ensured GL_TEXTURE_COMPARE_MODE is set to GL_NONE and tried setting GL_DEPTH_TEXTURE_MODE to GL_INTENSITY. But now I really don’t know what could be giving me these bad values.
Yes, projection matrices, viewports, etc. are the same.
I’ve noticed that with OpenGL all the values seem to lie between 0.5f and 1.f instead of in [0.f, 1.f].
So I tested value = (value - 0.5f) * 2.f; in the shader, just to see… and then it works correctly, just like the DirectX version.
This sounds like a big workaround hack, but I really cannot explain why my range is wrong (there is no glDepthRange restriction).
Yes, projection matrices, viewports, etc. are the same.
Your projection matrices should not be the same. D3D defines clip space differently from OpenGL, so your projection matrix needs to be specific to the API in question. It’s a fairly simple matter, but an important one.
Hmm… that’s strange, as I never noticed any other cross-platform rendering problem while keeping the same projection matrices…
Could you explain the difference in clip-space interpretation, please?
You were right: OpenGL normalized device coordinates after the perspective divide are between -1 and 1 on the Z axis, while DirectX’s are between 0 and 1. So I have to change the projection matrix accordingly.