Depth Buffer vs gluUnProject

I realize this topic has been discussed to exhaustion. I searched each of the Developer forums for other threads on it, but each one seems to give different information, so I am looking for a conclusive answer. I am converting mouse coordinates to world coordinates with code like this:

    /* X, Y: window coordinates of the mouse; MV, PR: GLdouble[16]
       modelview and projection matrices; VP: GLint[4] viewport;
       WX, WY, WZ: GLdouble outputs from gluUnProject. */
    float Z;
    glReadPixels(X, Y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &Z);
    gluUnProject(X, Y, Z, MV, PR, VP, &WX, &WY, &WZ);

This works on some systems but not on others: the Z value read back differs between platforms by many orders of magnitude, which results in entirely different world coordinates. So here are the questions:

  1. Is the differing Z value caused by a driver bug, or is it intentionally left up to the GL vendor?

  2. Given the same application and mouse coordinates, can I expect comparable Z values to be read from the depth buffer? I realize they will not be “exactly” the same due to floating point arithmetic.

  3. Since it is inconsistent, what else can be done to get the correct depth value?

Any help is greatly appreciated.

Do a ray-trace against the world in your collision/physics system.

The Z values you get back through gluUnProject can be vastly different even on the same implementation. Just choosing a different depth buffer bit count in the pixel format (e.g. 16 bits instead of 24) will result in such differences.
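
If you are comparing platforms, it can help to check what depth precision the pixel format actually gave you. A minimal sketch, assuming a legacy GL 2.1-era context where GL_DEPTH_BITS is still a valid query (print_depth_bits is just an illustrative name):

    #include <stdio.h>
    #include <GL/gl.h>

    /* Sketch: print the depth buffer precision the pixel format actually
       provided. Call once a GL context is current; GL_DEPTH_BITS is a
       legacy query, fine for a GL 2.1-era context. */
    void print_depth_bits(void)
    {
        GLint depthBits = 0;
        glGetIntegerv(GL_DEPTH_BITS, &depthBits);
        printf("depth buffer bits: %d\n", (int)depthBits);
    }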

Yes, analytic intersection tests with rays in your app will lead to more consistent results across platforms.
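
For illustration, a minimal picking-ray sketch built on the same MV/PR/VP arrays as the original snippet. It assumes the cursor is already in GL window coordinates (origin bottom-left), and intersect_scene() is a hypothetical hook into your own collision/physics code:

    #include <GL/glu.h>

    /* Sketch: build a picking ray from the cursor without reading the
       depth buffer. MV and PR are GLdouble[16], VP is GLint[4]. */
    void pick(double x, double y,
              const GLdouble MV[16], const GLdouble PR[16], const GLint VP[4])
    {
        GLdouble nearPt[3], farPt[3];

        /* winZ = 0 unprojects onto the near plane, winZ = 1 onto the far plane. */
        gluUnProject(x, y, 0.0, MV, PR, VP, &nearPt[0], &nearPt[1], &nearPt[2]);
        gluUnProject(x, y, 1.0, MV, PR, VP, &farPt[0],  &farPt[1],  &farPt[2]);

        /* Ray origin = nearPt, direction = farPt - nearPt; hand it to your
           own intersection code instead of trusting the depth readback. */
        double dir[3] = { farPt[0] - nearPt[0],
                          farPt[1] - nearPt[1],
                          farPt[2] - nearPt[2] };

        /* intersect_scene(nearPt, dir);   hypothetical callback */
        (void)dir;
    }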

In this case the variance is in the result of glReadPixels itself. I know colors aren’t going to be an exact match between platforms, but they will at least be close. Is there any place in the spec that talks about how depth values will be generated? They should at least be similar. I get 0.5 on one platform, and 0.0000034 on another. If that kind of variance existed for colors, nobody would use GL.

“Is there any place in the spec that talks about how depth values will be generated?”
Yes, in the OpenGL 2.1 spec, chapter 2.11 “Coordinate Transformations”.
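
In short, that section defines the depth value via the perspective divide and the viewport transform; with glDepthRange(n, f), which defaults to n = 0 and f = 1, the value that ends up in the depth buffer is:

    z_d = \frac{z_c}{w_c}, \qquad
    z_w = \frac{f - n}{2}\, z_d + \frac{n + f}{2}

With the default depth range this reduces to z_w = z_d / 2 + 1/2, so readback values should land in [0, 1] and agree between conforming implementations up to depth buffer precision.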

“They should at least be similar. I get 0.5 on one platform, and 0.0000034 on another. If that kind of variance existed for colors, nobody would use GL.”
That could be a bug in the glReadPixels implementation. I have seen implementations that didn’t scale correctly when reading back depth as GL_FLOAT.
Try newer drivers, or contact the vendor with a bug report.
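
A workaround sketch for that case: if the GL_FLOAT readback is what misbehaves, you could read the depth as GL_UNSIGNED_INT and rescale it yourself. This assumes a fixed-point depth buffer, which is what the usual 16/24-bit pixel formats give you:

    #include <GL/gl.h>

    /* Sketch: sidestep broken GL_FLOAT depth readback by reading the raw
       fixed-point value and normalizing it manually. */
    double read_depth(int x, int y)
    {
        GLuint raw = 0;
        glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, &raw);
        /* GL scales depth to the full range of the requested integer type,
           so dividing by 2^32 - 1 brings it back into [0, 1]. */
        return (double)raw / 4294967295.0;
    }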