Different depth-values using the same settings

Hi,

I’m trying to build an out-of-core renderer on a Linux cluster, where render-nodes request objects from data-nodes.
The data-nodes use occlusion culling to determine whether the requested model is visible and, if so, send the data over the network.
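
(By occlusion culling I mean a standard hardware occlusion query against the depth buffer, roughly along these lines; heavily simplified, and drawBoundingBox() is just a placeholder, not real code from my renderer:)

GLuint query;
glGenQueries(1, &query);

// Depth-test the bounding geometry against the current depth buffer
// (depth test assumed enabled), but don't write color or depth.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

glBeginQuery(GL_SAMPLES_PASSED, query);
drawBoundingBox(model);                 // placeholder for the tested geometry
glEndQuery(GL_SAMPLES_PASSED);

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
bool visible = (samples > 0);           // only send the model data if visible
glDeleteQueries(1, &query);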

To perform the occlusion culling, the render-nodes send their depth buffer (read with glReadPixels()) over to the data-nodes every x frames. The data-nodes then write the buffer back via glDrawPixels().
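
(The transfer itself looks roughly like this; network code omitted, and W/H just stand for the common window size of both nodes:)

const int W = 1024, H = 768;            // example resolution; both nodes use the same
static GLfloat depth[W * H];            // depth values in [0,1]

// Render-node: read back the depth buffer.
glReadPixels(0, 0, W, H, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
// ... send 'depth' over the network ...

// Data-node: write the received values into its own depth buffer.
glWindowPos2i(0, 0);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // don't touch color
glDepthFunc(GL_ALWAYS);                                // overwrite the old depth
glDrawPixels(W, H, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
glDepthFunc(GL_LESS);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);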

For debugging purposes, I render the scaled depth-buffer values onto a full-screen quad.
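
(My debug view is basically just the depth values drawn as grey levels and stretched into a visible range. Not literally my quad code, but the same idea in its shortest form, using the 'depth' array and W/H from the sketch above; the scale/bias numbers are arbitrary:)

// Exaggerate differences near 1.0: maps 0.95 -> 0.0 and 1.0 -> 1.0.
glPixelTransferf(GL_RED_SCALE,   20.0f);  glPixelTransferf(GL_RED_BIAS,   -19.0f);
glPixelTransferf(GL_GREEN_SCALE, 20.0f);  glPixelTransferf(GL_GREEN_BIAS, -19.0f);
glPixelTransferf(GL_BLUE_SCALE,  20.0f);  glPixelTransferf(GL_BLUE_BIAS,  -19.0f);

glWindowPos2i(0, 0);
glDrawPixels(W, H, GL_LUMINANCE, GL_FLOAT, depth);
// (reset the scale/bias to 1/0 afterwards)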

Now I’ve noticed that the depth values from the render-nodes are far more intense than the depth values I get by rendering directly on the data-nodes. I haven’t determined an exact factor yet, but I’d guess the difference is at least half of the render-nodes’ intensity.
And I’m not referring to the values written back into the data-nodes; I mean values that come from directly rendering the same objects as on the render-nodes.

Because of this discrepancy, many occlusion tests fail.

Any ideas?

Right now I’m using a single computer to emulate the other nodes, so different hardware can be ruled out as the cause.
I’m running Kubuntu Linux (32-bit and 64-bit behave the same) with NVIDIA driver 180.44.

Answering my own post:
Found the solution myself.

Seems like I didn’t have equal settings on all nodes after all.
The near planes were different (0.1 vs. 0.01), which caused a major difference in the depth-buffer values (e.g. 0.997854 vs. 0.9997854).
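
For anyone running into the same thing: with a standard perspective projection, the window-space depth of a point at eye distance d is f*(d - n) / (d*(f - n)), so the values get squeezed very tightly against 1.0 and the near plane alone makes exactly this kind of difference. Quick sanity check (f = 1000 and d = 50 are just made-up example values, not from my scene):

#include <cstdio>

// Window-space depth for eye distance d, near plane n, far plane f
// (perspective projection, default depth range [0,1]).
double depthValue(double d, double n, double f)
{
    return f * (d - n) / (d * (f - n));
}

int main()
{
    const double d = 50.0, f = 1000.0;
    std::printf("near = 0.10 -> %f\n", depthValue(d, 0.10, f));   // ~0.998100
    std::printf("near = 0.01 -> %f\n", depthValue(d, 0.01, f));   // ~0.999810
    return 0;
}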

Cheers,
TheAvatar