I'm developing a presentation that combines a prerendered image with some 3D objects, and I want perfect 2D/3D compositing. Our pipeline uses Maya and mental ray. From Maya, we render the image into two buffers:
RGBA color (*.ct)
float32 depth (*.zt)
Now I'm having trouble matching the depth values from the *.zt file to the OpenGL depth buffer. It seems that mental ray doesn't respect the camera's near and far clip settings, because some values fall outside the near-far range.
I'm using the following math to convert the *.zt values to the 0…1 range, but it doesn't work properly:
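Something along these lines, assuming the *.zt values are positive eye-space distances from the camera and the GL depth range is the default [0, 1] (a sketch, not my exact code):

```cpp
// Sketch: convert a positive eye-space distance (what I assume the *.zt
// file stores) to OpenGL window-space depth, default glDepthRange(0, 1).
float ztToWindowDepth(float zEye, float zNear, float zFar)
{
    // mental ray samples can fall outside [near, far]; clamp them so the
    // comparison against the GL depth buffer stays meaningful.
    if (zEye <= zNear) return 0.0f;
    if (zEye >= zFar)  return 1.0f;

    // Standard perspective projection, depth range [0, 1]:
    //   d_win = zFar * (zEye - zNear) / (zEye * (zFar - zNear))
    return (zFar * (zEye - zNear)) / (zEye * (zFar - zNear));
}
```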
You could always set up an MRT system and write your own custom depth values into an MRT target. If you use RGBA32F, you could experiment with four different depth encodings at the same time!
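A minimal sketch of that idea, assuming a GL 3.3+ context and loader are already initialized; the names, resolution, shader, and the four encodings here are illustrative, not from the post above:

```cpp
// Illustrative FBO setup: one RGBA32F color attachment so the fragment
// shader can write four candidate depth encodings per pixel.
const int width = 1024, height = 768;   // placeholder resolution

GLuint fbo = 0, colorTex = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
// (You would also attach a depth renderbuffer for depth testing.)

// Fragment shader writing four depth variants into the RGBA32F target;
// vEyeDist is assumed to be the positive eye-space distance passed down
// from the vertex shader.
const char* fragSrc = R"(
    #version 330 core
    in float vEyeDist;
    uniform float zNear, zFar;
    out vec4 fragDepths;
    void main() {
        float lin = (vEyeDist - zNear) / (zFar - zNear);    // linear 0..1
        float win = zFar * (vEyeDist - zNear)
                    / (vEyeDist * (zFar - zNear));          // GL window depth
        float inv = 1.0 / vEyeDist;                         // inverse depth
        fragDepths = vec4(lin, win, inv, vEyeDist);         // raw distance in .a
    }
)";
```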
Never mind… I fixed the issue the brute-force way: I exported the whole scene (~8M tris) from Maya, loaded it into my app, rendered one frame, grabbed the depth buffer, and saved it to a file… job done.
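The grab-and-save step can be as simple as a glReadPixels of the depth component; a minimal sketch, where the output format (a raw float32 dump) is my own assumption:

```cpp
#include <cstdio>
#include <vector>
// plus your GL loader header, e.g. <GL/glew.h>

// Read the window-space depth buffer ([0, 1] floats) of the currently
// bound framebuffer and dump it as raw float32, bottom row first.
void dumpDepthBuffer(int width, int height, const char* path)
{
    std::vector<float> depth(static_cast<size_t>(width) * height);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT,
                 depth.data());

    if (FILE* f = std::fopen(path, "wb")) {
        std::fwrite(depth.data(), sizeof(float), depth.size(), f);
        std::fclose(f);
    }
}
```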