z-axis questions

Hi

A couple of questions:

  1. If the camera is at (0,0,0), the negative z-axis points into the screen, and the near and far planes are set to 1 and 1,000 with gluPerspective(), how can an object be viewed at all? We are looking down the -z axis, so the positive near/far values seem to put those planes behind the viewer.

  2. What is the relationship between the near and far z values set with glDepthRange() and those set with gluPerspective()? I’m using the method described by opengl.org to build a picking ray with gluUnProject(), passing z=0 for the near plane and z=1 for the far plane. Do I have to map the values returned by gluUnProject() to the near/far clip planes set by gluPerspective()?

    public Ray3D mouseRayWorldCoordinates()
    {
        GL2 gl2 = gl;

        // mouse coordinates
        int mouseX = xCoord;
        int mouseY = yCoord;

        int   viewport[]   = GLUtilities.viewportArray(gl2);
        float mvMatrix[]   = GLUtilities.modelViewMatrixAsFloatArray(gl2);
        float projMatrix[] = GLUtilities.projectionMatrixAsFloatArray(gl2);

        // flip to GL's bottom-left origin; viewport[3] is the window height in pixels
        int glY = viewport[3] - mouseY - 1;

        // near point - the gluUnProject winZ value is the window depth, which runs 0.0 -> 1.0
        System.out.println("Coordinates at cursor are (" + mouseX + ", " + mouseY + ")");
        float wcoordNear[] = new float[4];
        boolean ok = glu.gluUnProject((float) mouseX, (float) glY, 0.0f,
                                      mvMatrix, 0, projMatrix, 0, viewport, 0, wcoordNear, 0);
        System.out.println("World coords at z=0.0 are ("
                           + wcoordNear[0] + ", " + wcoordNear[1] + ", " + wcoordNear[2]
                           + "); ok: " + ok);

        // far point
        float wcoordFar[] = new float[4];
        ok = glu.gluUnProject((float) mouseX, (float) glY, 1.0f,
                              mvMatrix, 0, projMatrix, 0, viewport, 0, wcoordFar, 0);
        System.out.println("World coords at z=1.0 are ("
                           + wcoordFar[0] + ", " + wcoordFar[1] + ", " + wcoordFar[2]
                           + "); ok: " + ok);

        // direction vector is far point - near point
        Vector3D dirVector = new Vector3D(wcoordFar[0] - wcoordNear[0],
                                          wcoordFar[1] - wcoordNear[1],
                                          wcoordFar[2] - wcoordNear[2]);
        dirVector.normalise();
        Point3D viewerLocation = GLUtilities.cameraLocation(gl2);
        Ray3D mouseRay = new Ray3D(viewerLocation, dirVector);
        // alternative: start the ray at the unprojected near point instead of the camera
        //Ray3D mouseRay = new Ray3D(new Point3D(wcoordNear[0], wcoordNear[1], wcoordNear[2]), dirVector);

        return mouseRay;
    }
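
For what it’s worth, gluUnProject() itself is just matrix algebra: the window coordinates are mapped back to NDC and multiplied by the inverse of projection × modelview. Here is a standalone sketch of the same computation (column-major double[16] matrices as GL returns them, no GL context; an illustration, not JOGL’s actual source):

```java
// Minimal re-implementation of the gluUnProject math.
public class UnProject {

    // multiply two column-major 4x4 matrices: r = a * b
    static double[] mul(double[] a, double[] b) {
        double[] r = new double[16];
        for (int c = 0; c < 4; c++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
        return r;
    }

    // invert a 4x4 matrix by Gauss-Jordan elimination; returns null if singular
    static double[] invert(double[] m) {
        double[] a = m.clone();
        double[] inv = new double[16];
        for (int i = 0; i < 4; i++) inv[i * 4 + i] = 1.0;
        for (int col = 0; col < 4; col++) {
            int piv = col;                                  // partial pivoting
            for (int r = col + 1; r < 4; r++)
                if (Math.abs(a[col * 4 + r]) > Math.abs(a[col * 4 + piv])) piv = r;
            if (Math.abs(a[col * 4 + piv]) < 1e-12) return null;
            for (int c = 0; c < 4; c++) {                   // swap rows col and piv
                double t = a[c * 4 + col]; a[c * 4 + col] = a[c * 4 + piv]; a[c * 4 + piv] = t;
                t = inv[c * 4 + col]; inv[c * 4 + col] = inv[c * 4 + piv]; inv[c * 4 + piv] = t;
            }
            double d = a[col * 4 + col];                    // normalise pivot row
            for (int c = 0; c < 4; c++) { a[c * 4 + col] /= d; inv[c * 4 + col] /= d; }
            for (int r = 0; r < 4; r++) {                   // eliminate other rows
                if (r == col) continue;
                double f = a[col * 4 + r];
                for (int c = 0; c < 4; c++) {
                    a[c * 4 + r] -= f * a[c * 4 + col];
                    inv[c * 4 + r] -= f * inv[c * 4 + col];
                }
            }
        }
        return inv;
    }

    // winZ in [0,1] as gluUnProject expects; returns {x, y, z} in object/world space
    public static double[] unProject(double winX, double winY, double winZ,
                                     double[] modelview, double[] proj, int[] vp) {
        double[] inv = invert(mul(proj, modelview));
        double[] ndc = {                                    // window -> NDC, each in [-1,1]
            2.0 * (winX - vp[0]) / vp[2] - 1.0,
            2.0 * (winY - vp[1]) / vp[3] - 1.0,
            2.0 * winZ - 1.0,
            1.0
        };
        double[] out = new double[4];
        for (int row = 0; row < 4; row++)
            for (int c = 0; c < 4; c++)
                out[row] += inv[c * 4 + row] * ndc[c];
        return new double[] { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
    }
}
```

With identity modelview and projection matrices, unprojecting the centre of the viewport at winZ = 0.5 gives back the origin, which is a quick sanity check that the mapping is right.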

Thanks

Graham

The answer to your first question: zNear and zFar specify distances from the origin along the -z axis.
Thus when you say zFar = 1000, the far clipping plane is actually a plane whose normal is the z-axis and which passes through the point (0, 0, -1000).
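
You can verify this numerically by building the same matrix gluPerspective() builds and pushing eye-space points through it. A standalone sketch (no GL needed; the fovy/aspect values below are arbitrary):

```java
public class PerspectiveDemo {

    // column-major matrix equivalent to gluPerspective(fovyDeg, aspect, n, f)
    static double[] perspective(double fovyDeg, double aspect, double n, double f) {
        double t = 1.0 / Math.tan(Math.toRadians(fovyDeg) / 2.0);
        double[] m = new double[16];
        m[0]  = t / aspect;
        m[5]  = t;
        m[10] = (f + n) / (n - f);
        m[14] = 2.0 * f * n / (n - f);
        m[11] = -1.0;                 // w = -eyeZ: the perspective divide
        return m;
    }

    // NDC z of the eye-space point (0, 0, eyeZ)
    static double ndcZ(double[] m, double eyeZ) {
        double clipZ = m[10] * eyeZ + m[14]; // third row of m * (0, 0, eyeZ, 1)
        double clipW = m[11] * eyeZ;         // fourth row: -eyeZ
        return clipZ / clipW;
    }
}
```

With zNear = 1 and zFar = 1000, the point (0, 0, -1) lands exactly on NDC z = -1 and (0, 0, -1000) on NDC z = +1: both planes sit in front of the camera, on the -z side.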

glDepthRange specifies the mapping between world space zNear and zFar, and window space nearDepth and farDepth respectively. So you don’t need to do the mapping yourself, but adjusting the values can help with some visual glitches, depending on your scene…

The problem I see with your questions is that you are mixing concepts: the idea of camera space and the space created by the projection matrix (usually called clip space).

I believe the best thing you can do right now, assuming you are educating yourself on 3D rather than racing the clock on a project that needs to be done ASAP, is to go back to basics and understand how matrices work, why they are needed, and the math behind them.

Luckily, you don’t need to be a mathematical guru to grasp the basics, and doing so will make many things easier later.

Unlike what some 3D textbooks seem to suggest, I believe it’s important to understand how and why transformations work, and to stop thinking of matrices as black boxes that magically do what you want.

glDepthRange specifies the mapping between world space zNear and zFar, and window space nearDepth and farDepth respectively.

No, glDepthRange specifies the mapping between clip-space zNear and zFar and window-space zNear and zFar.

No, glDepthRange specifies the mapping between clip-space zNear and zFar and window-space zNear and zFar.

I don’t think so. Clip planes exist prior to clip space. So they are defined in world space first.

Hi

Thanks for your replies, but I’m still no nearer to developing a method that returns a picking ray.

My method mouseRayWorldCoordinates() uses the prescribed opengl.org approach: it calls gluUnProject() to determine the world coordinates of the mouse pointer, passing nearZ=0 and farZ=1 in window space, and then builds a picking ray emanating from the camera.

However hard I try, I cannot visualise this picking ray. I can draw and visualise other lines and rays, but the picking ray returned by mouseRayWorldCoordinates() is completely wrong.

Has anyone else solved this basic and common requirement for object selection/picking?

Graham

I don’t think so. Clip planes exist prior to clip space.

glDepthRange has nothing to do with clip planes; it defines how the z range of NDC space (I was wrong to say clip space) is mapped to window space. See section 2.13.1 of the GL 3.3 core spec.
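
The mapping in that section is simple enough to state directly. Assuming glDepthRange(n, f), NDC z in [-1, 1] maps linearly to window z in [n, f]:

```java
public class DepthRangeDemo {
    // the mapping glDepthRange(n, f) establishes:
    // NDC z in [-1, 1] -> window z in [n, f]
    static double windowZ(double ndcZ, double n, double f) {
        return n + (f - n) * (ndcZ + 1.0) / 2.0;
    }
}
```

With the default glDepthRange(0, 1), NDC z = -1 maps to window z = 0 and NDC z = +1 to window z = 1, which is exactly why gluUnProject() takes 0.0 and 1.0 as the winZ values for the near and far planes.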

Has anyone else solved this basic and common requirement for object selection/picking?

It’s not a requirement for picking and selection. The typical way to implement selection is to render a special version of the scene where the colors of the object are mapped to specific values. Then, you just read the particular pixel that the user clicked on; that color maps to the object that was selected.
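
A sketch of the ID-to-color bookkeeping that approach needs (the render pass and the glReadPixels() call are omitted; only the encoding is shown, and a 24-bit ID fitting one RGB8 pixel is an assumption):

```java
public class ColorPicking {
    // pack a 24-bit object id into an RGB triple, one byte per channel
    static int[] idToRgb(int id) {
        return new int[] { (id >> 16) & 0xFF, (id >> 8) & 0xFF, id & 0xFF };
    }

    // recover the id from the RGB values read back at the clicked pixel
    static int rgbToId(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }
}
```

You would draw each object flat-shaded in idToRgb(id), read back the pixel under the cursor, and call rgbToId() to find the selected object. Lighting, blending, dithering and multisampling must be disabled for that pass or the read-back color won’t round-trip.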

glDepthRange has nothing to do with clip planes

Right. But selecting proper values for depth range depends on how far the front and back clip planes are positioned. This is one reason we need to have control over depth range.

The typical way to implement selection is to render a special version of the scene where the colors of the object are mapped to specific values.

It works, but it’s not the recommended way to do selection. The retrieved color does not necessarily maintain the same precision/values when converted between different color formats, making it not portable.

I would rather use a ray-triangle intersection test and sort the hits by depth.
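
For reference, here is a compact sketch of that approach using the Moller-Trumbore intersection test (plain double[] vectors; nothing here depends on GL types):

```java
public class RayPick {
    static double[] sub(double[] a, double[] b) {
        return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    }
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1],
                              a[2]*b[0] - a[0]*b[2],
                              a[0]*b[1] - a[1]*b[0] };
    }
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Moller-Trumbore: distance t along the ray to the triangle, or -1 for a miss
    static double intersect(double[] orig, double[] dir,
                            double[] v0, double[] v1, double[] v2) {
        double[] e1 = sub(v1, v0), e2 = sub(v2, v0);
        double[] p = cross(dir, e2);
        double det = dot(e1, p);
        if (Math.abs(det) < 1e-9) return -1;   // ray parallel to the triangle
        double inv = 1.0 / det;
        double[] tv = sub(orig, v0);
        double u = dot(tv, p) * inv;           // first barycentric coordinate
        if (u < 0 || u > 1) return -1;
        double[] q = cross(tv, e1);
        double v = dot(dir, q) * inv;          // second barycentric coordinate
        if (v < 0 || u + v > 1) return -1;
        double t = dot(e2, q) * inv;
        return t >= 0 ? t : -1;                // hits behind the origin don't count
    }
}
```

Test every candidate triangle against the picking ray and keep the hit with the smallest positive t; that is the "sort by depth" step, and it gives you the frontmost object without any GPU read-back.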