Hits and Depth Question (JOGL/OpenGL)

Hi! I’m not only a new forum member, but also a new OpenGL programmer.

I’m writing a Java/JOGL application that has selection and picking functionality, and I’m finding that, when processing hits, all of the hit objects come back with zero minDepth and maxDepth.

When retrieving the minDepth and maxDepth from the select buffer, I’m getting depthInt = -2147483648, which converts to a float value of 0.0 when using this algorithm:

depthFloat = 1f + (float)((long)depthInt)/0x7fffffff

My understanding is that, if minDepth is 0.0, then the hit object is thought to be at the near clipping plane, and if minDepth is 1.0, then the object is thought to be at the far clipping plane.
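
For reference, my hit-processing code looks roughly like the sketch below (simplified for the post; the buffer size, the processHits name, and the comments are just for illustration, using the JOGL 1.x javax.media.opengl bindings):

    import java.nio.IntBuffer;

    // The select buffer was registered earlier with something like:
    //   gl.glSelectBuffer(selectBuffer.capacity(), selectBuffer);
    // hitCount is the value returned by gl.glRenderMode(GL.GL_RENDER)
    // after drawing the scene in GL_SELECT mode.
    void processHits(IntBuffer selectBuffer, int hitCount) {
        int offset = 0;
        for (int hit = 0; hit < hitCount; hit++) {
            int nameCount   = selectBuffer.get(offset++); // names on the stack for this hit
            int minDepthInt = selectBuffer.get(offset++); // raw depth values from the hit record
            int maxDepthInt = selectBuffer.get(offset++);

            // my current conversion attempt -- this is what keeps coming out as 0.0
            float minDepth = 1f + (float) ((long) minDepthInt) / 0x7fffffff;
            float maxDepth = 1f + (float) ((long) maxDepthInt) / 0x7fffffff;

            for (int n = 0; n < nameCount; n++) {
                int name = selectBuffer.get(offset++);
                // ... map 'name' back to one of my meshes ...
            }
        }
    }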

My first thought was that perhaps the camera’s frustum spans too much Z space, resulting in too little precision when OpenGL tries to determine the hit objects’ depths. This doesn’t seem to be a problem, though, because:

  • the meshes that I’m displaying have been scaled (via glScalef) to fit within a 4x4x4 cube
  • the meshes all reside within the camera’s frustum
  • the camera’s frustum has a near clipping plane distance of 1.5 and a far clipping plane distance of 20.0 (rough setup sketched just below this list).
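
For what it’s worth, the projection is set up roughly like this (sketch only; the 45-degree field of view and the width/height values are placeholders rather than my exact numbers, and gl is the usual GL object from the reshape callback):

    import javax.media.opengl.GL;
    import javax.media.opengl.glu.GLU;

    // Perspective projection with the near/far distances mentioned above.
    GLU glu = new GLU();
    gl.glMatrixMode(GL.GL_PROJECTION);
    gl.glLoadIdentity();
    glu.gluPerspective(45.0, (double) width / (double) height, 1.5, 20.0);
    gl.glMatrixMode(GL.GL_MODELVIEW);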

The problem seems to occur regardless of the number of meshes that I display.

Does anyone have any suggestions about what I might be doing incorrectly?

Thanks in advance!

I don’t use JOGL so this may be wrong. What code are you using to read from the depth buffer? Are you passing the correct type into glReadPixels? Note that you can read floats directly and don’t have to convert from int to float.
http://opengl.org/documentation/specs/man_pages/hardcopy/GL/html/gl/readpixels.html
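
For example, reading a depth value back directly as a float would look something like this (untested sketch, and since I don’t use JOGL the exact binding is an assumption; winX/winY are the window coordinates you care about):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import javax.media.opengl.GL;

    // Read a 1x1 region of the depth buffer straight into a float in [0, 1];
    // no integer-to-float conversion needed.
    FloatBuffer depth = ByteBuffer.allocateDirect(4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    gl.glReadPixels(winX, winY, 1, 1, GL.GL_DEPTH_COMPONENT, GL.GL_FLOAT, depth);
    float z = depth.get(0);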

@Budric: Zed G. is using selection mode … this delivers two depth values per hit.

Zed Gimbal: I do not quite understand what your problem is.
Two hints: the int values for min and max depth are UNSIGNED ints, so there are no negative values.
depthFloat = minOrMaxValue / (float) 0xffffffff
It is also important to force a float division … so cast the max value (0xffffffff) to float before dividing.
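
In Java those values come back out of the IntBuffer as signed ints, so something along these lines should give correct results (sketch; minDepthInt/maxDepthInt are the raw values read from the select buffer):

    // Hit-record depths are the window z values scaled by 2^32 - 1 and stored
    // as unsigned 32-bit ints. Java ints are signed, so widen to long with a
    // mask first, then divide by 2^32 - 1 to land back in [0, 1].
    long unsignedMin = minDepthInt & 0xFFFFFFFFL;
    long unsignedMax = maxDepthInt & 0xFFFFFFFFL;
    float minDepth = unsignedMin / (float) 0xFFFFFFFFL;
    float maxDepth = unsignedMax / (float) 0xFFFFFFFFL;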

I never resolved this problem, except that I discovered that my colleagues’ computers were able to discern Z buffer information. The problem seems to be isolated to my machine, which is relatively current, and has the latest video display drivers. Oh well. Time for the next crisis du jour! :stuck_out_tongue:

Do you ask for a 16-, 24-, or 32-bit depth buffer?

That’s a good question. I don’t believe I’m explicitly asking for any particular bit level…
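
If it turns out to matter, I suppose the way to ask for one explicitly would be something like this when the canvas is created (sketch; I’m assuming the JOGL 1.x GLCapabilities/GLCanvas API, and myRenderer stands in for my existing GLEventListener):

    import javax.media.opengl.GLCanvas;
    import javax.media.opengl.GLCapabilities;

    // Explicitly request a 24-bit depth buffer instead of taking the default.
    GLCapabilities caps = new GLCapabilities();
    caps.setDepthBits(24);
    GLCanvas canvas = new GLCanvas(caps);
    canvas.addGLEventListener(myRenderer);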