Need help with 2D & 3D compositing!

I'm developing a presentation that combines a prerendered image with some 3D objects, and I want the 2D and 3D compositing to match perfectly. Our pipeline uses Maya and mental ray. From Maya, we render the image into two buffers:

  • RGBA color (*.ct)
  • float32 depth (*.zt)

Now I'm having trouble matching the depth values from the *.zt file against the OpenGL depth buffer. It seems that mental ray doesn't respect the camera's near and far settings, because some values fall outside the near-far range.

I'm using the following math to convert the zt values into the 0…1 range, but it doesn't work properly:

mydepth = (1.0f - DepthNear/zt_file_depth) * DepthFar / (DepthFar-DepthNear);

Any thoughts, suggestions, or hints on how to solve this?
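
For context, here is roughly how I apply that formula to the whole buffer and write it into the OpenGL depth buffer (a sketch; the clamping and the glDrawPixels upload path are just one way to do it, and the loader for the *.zt file is omitted):

    // Convert mental ray eye-space distances (.zt) to OpenGL window-space
    // depth and write them into the current depth buffer.
    #include <GL/gl.h>
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    void upload_zt_depth(const std::vector<float>& zt,   // eye-space distances
                         int width, int height,
                         float DepthNear, float DepthFar)
    {
        std::vector<float> winDepth(zt.size());
        for (std::size_t i = 0; i < zt.size(); ++i) {
            // OpenGL's own perspective mapping: zw = f/(f-n) * (1 - n/z_eye)
            float zw = (1.0f - DepthNear / zt[i]) * DepthFar / (DepthFar - DepthNear);
            winDepth[i] = std::min(std::max(zw, 0.0f), 1.0f); // clamp out-of-range samples
        }

        // Write depth only; assumes identity modelview/projection matrices so
        // glRasterPos2f(-1, -1) maps to the lower-left corner of the viewport.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_ALWAYS);  // depth test must pass for glDrawPixels to write depth
        glDepthMask(GL_TRUE);
        glRasterPos2f(-1.0f, -1.0f);
        glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, winDepth.data());
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    }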

You could always set up an MRT system and write your own custom depth values into an MRT target. If you use RGBA32F, you could experiment with four different depth encodings at the same time!
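
Something like this, for instance (a sketch; the four encodings below are arbitrary examples, and the FBO/attachment setup is left out):

    // GLSL fragment shader (as a C++ string literal) that packs four depth
    // encodings into a single RGBA32F color attachment of an FBO.
    const char* depthVariantsFS = R"(
        #version 120
        uniform float nearPlane;
        uniform float farPlane;
        varying float eyeDepth;   // positive eye-space distance, from the vertex shader

        void main()
        {
            float n = nearPlane, f = farPlane;
            float zLinear = (eyeDepth - n) / (f - n);           // linear 0..1
            float zWindow = (1.0 - n / eyeDepth) * f / (f - n); // GL window depth
            float zLog    = log(eyeDepth / n) / log(f / n);     // logarithmic
            gl_FragColor  = vec4(zLinear, zWindow, zLog, eyeDepth); // raw distance in alpha
        }
    )";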

Yes… but I actually need to know how mental ray calculates per-pixel depth.

Just use the gluProject function.
I suppose it's possible that mental ray projects into the positive Z direction. Look out for that :)
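
For example, to sanity-check a single pixel (a sketch; pick any world-space point you know is visible in the render):

    // Project a known world-space point with gluProject and compare its
    // window-space z against what the .zt conversion gives at (winX, winY).
    #include <GL/glu.h>
    #include <cstdio>

    void check_point(double wx, double wy, double wz)
    {
        GLdouble model[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        GLdouble winX, winY, winZ;
        if (gluProject(wx, wy, wz, model, proj, viewport, &winX, &winY, &winZ))
            printf("window pos (%.1f, %.1f), expected depth %f\n", winX, winY, winZ);
    }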

No… it seems that mental ray ignores the near and far camera settings in Maya…
For example, I created a plane that intersects both the near and far planes, and rendered the image.

In my app, I load that z-buffer and some values are greater than the far plane.
For example:
camera near/far: 5 - 60
zt file range: 5.24 - 68.85
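
A quick way to verify this is to scan the loaded buffer for its actual range (a sketch):

    // Scan the loaded .zt buffer for its min/max to confirm the values are raw
    // eye-space distances rather than distances clipped to the camera range.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    void print_zt_range(const std::vector<float>& zt)
    {
        auto mm = std::minmax_element(zt.begin(), zt.end());
        printf("zt range: %f - %f\n", *mm.first, *mm.second); // here: 5.24 - 68.85
    }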

Never mind… I fixed this issue in a brute-force way… I exported the whole scene (~8M tris) from Maya, loaded it in my app, rendered one frame, grabbed the depth buffer, and saved it to a file… job done.
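
For anyone hitting the same wall, the grab-and-save step boils down to this (a sketch; the raw float dump is just an assumed file format):

    // Read back the OpenGL depth buffer and dump it as raw 32-bit floats.
    #include <GL/gl.h>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    void save_depth(const char* path, int width, int height)
    {
        std::vector<float> depth(static_cast<std::size_t>(width) * height);
        glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

        if (FILE* f = std::fopen(path, "wb")) {
            std::fwrite(depth.data(), sizeof(float), depth.size(), f);
            std::fclose(f);
        }
    }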