gluPickMatrix Implementation

I am implementing the GLU matrices for my matrix library and there is a
gluPickMatrix(). Is this still useful in an OpenGL 4.0+ core context?
And what exactly does it do?

From what I can tell, it takes the picked viewport region and builds an x/y
scale and translation that is concatenated with the projection matrix, so
that when points are transformed the picked region is scaled up to fill the
viewport. You then render a full viewport of just the picked region.
Is that correct?

There is this comment in the SGI source as an explanation:

/* translate and scale the picked region to the entire window */

[QUOTE=chaikin;1259882]I am implementing the GLU matrices for my matrix library and there is a
gluPickMatrix(). Is this still useful in an OpenGL 4.0+ core context?[/QUOTE]
It is not. The entire legacy picking mechanism is deprecated.

This is a Mesa 3D implementation:


void gluPickMatrix(GLdouble x, GLdouble y, GLdouble deltax, GLdouble deltay,
                   GLint viewport[4])
{
   if (deltax <= 0 || deltay <= 0) {
      return;
   }

   /* Translate and scale the picked region to the entire window */
   glTranslatef((viewport[2] - 2 * (x - viewport[0])) / deltax,
                (viewport[3] - 2 * (y - viewport[1])) / deltay, 0);
   glScalef(viewport[2] / deltax, viewport[3] / deltay, 1.0);
}
 

As far as I can recall, the idea was to “zoom in” on the region around the mouse cursor (a smaller FOV, and a skewed view frustum if the point is not in the center of the screen), re-render the scene into an ID buffer, and then fetch the object ID from the ID buffer to determine the hit object.
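If you wanted to implement that ID-buffer idea in a core profile today, it could look roughly like this. This is only a sketch: it assumes an FBO whose color attachment 0 is a GL_R32UI texture, and draw_scene_with_ids() is a made-up placeholder for a pass whose fragment shader writes each object’s ID into that attachment.

#include <GL/glcorearb.h>          /* or your loader’s header (glad, GLEW, ...) */

void draw_scene_with_ids(void);    /* hypothetical: writes one ID per fragment */

GLuint pick_object_id(GLuint id_fbo, int mouse_x, int mouse_y, int win_h)
{
   const GLuint zero[4] = { 0, 0, 0, 0 };   /* 0 = “no object” */
   GLuint id = 0;

   glBindFramebuffer(GL_FRAMEBUFFER, id_fbo);
   glClearBufferuiv(GL_COLOR, 0, zero);
   glClear(GL_DEPTH_BUFFER_BIT);            /* assuming a depth attachment */

   draw_scene_with_ids();

   /* GL window coordinates have their origin at the bottom left */
   glReadPixels(mouse_x, win_h - 1 - mouse_y, 1, 1,
                GL_RED_INTEGER, GL_UNSIGNED_INT, &id);

   glBindFramebuffer(GL_FRAMEBUFFER, 0);
   return id;
}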

You probably have a hierarchical scene representation, so I would actually go the other way: use the mouse position from the window system to compute the direction of a ray starting at the camera position, and do a ray cast into the scene hierarchy to determine the picked object (performing hit tests against the bounding volumes). That avoids setting up an ID buffer, re-rendering the scene in a way that clips most triangles, and transferring data from the GPU to the CPU, which requires synchronization between the two.
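The ray setup itself is not much code. A rough sketch, assuming you already have the inverse of projection * view from your own matrix library (column-major, default [-1, 1] depth range):

#include <math.h>

typedef struct { float x, y, z; } vec3;

static void mul_mat4_vec4(const float m[16], const float v[4], float out[4])
{
   for (int r = 0; r < 4; ++r)
      out[r] = m[0*4 + r] * v[0] + m[1*4 + r] * v[1] +
               m[2*4 + r] * v[2] + m[3*4 + r] * v[3];
}

/* mouse_x/mouse_y in window pixels (origin top left), win_w/win_h window size */
void mouse_ray(const float inv_vp[16], float mouse_x, float mouse_y,
               float win_w, float win_h, vec3 *origin, vec3 *dir)
{
   /* window coordinates -> normalized device coordinates ([-1, 1], y flipped) */
   float ndc_x = 2.0f * mouse_x / win_w - 1.0f;
   float ndc_y = 1.0f - 2.0f * mouse_y / win_h;

   float near_p[4] = { ndc_x, ndc_y, -1.0f, 1.0f };   /* point on near plane */
   float far_p[4]  = { ndc_x, ndc_y,  1.0f, 1.0f };   /* point on far plane  */
   float a[4], b[4];

   mul_mat4_vec4(inv_vp, near_p, a);
   mul_mat4_vec4(inv_vp, far_p, b);

   /* perspective divide back into world space */
   for (int i = 0; i < 3; ++i) { a[i] /= a[3]; b[i] /= b[3]; }

   *origin = (vec3){ a[0], a[1], a[2] };

   float dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
   float len = sqrtf(dx*dx + dy*dy + dz*dz);
   *dir = (vec3){ dx / len, dy / len, dz / len };
}

The resulting origin/dir pair is then tested against the bounding volume hierarchy on the CPU, so nothing has to be read back from the GPU.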

[EDIT]: Apparently I was too slow at typing^^

Not exactly. The scene is rendered using a “new”, narrow frustum, and whenever something intersects that frustum the name stack (a stack of IDs) is read and copied into a selection buffer (allocated by the user). The stack is copied from bottom to top.
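For reference, the complete legacy path that gluPickMatrix was a part of looked roughly like this (deprecated, compatibility profile only). draw_scene_with_names() is a placeholder for drawing the scene with glLoadName()/glPushName() per object, and the 5x5 pick region and parameter names are just for illustration.

#include <GL/glu.h>

void draw_scene_with_names(void);   /* hypothetical: glLoadName() per object */

GLint pick_legacy(double mouse_x, double mouse_y,
                  double fovy, double aspect, double z_near, double z_far,
                  GLuint *select_buf, GLsizei buf_size)
{
   GLint viewport[4];
   glGetIntegerv(GL_VIEWPORT, viewport);

   glSelectBuffer(buf_size, select_buf);
   glRenderMode(GL_SELECT);

   glMatrixMode(GL_PROJECTION);
   glPushMatrix();
   glLoadIdentity();
   /* 5x5 pixel pick region around the cursor; note the y flip */
   gluPickMatrix(mouse_x, viewport[3] - mouse_y, 5.0, 5.0, viewport);
   gluPerspective(fovy, aspect, z_near, z_far);   /* the usual projection */

   glMatrixMode(GL_MODELVIEW);
   glInitNames();
   draw_scene_with_names();

   glMatrixMode(GL_PROJECTION);
   glPopMatrix();
   glMatrixMode(GL_MODELVIEW);

   /* returns the number of hit records; each record is
      {name count, min z, max z, names... (bottom to top)} */
   return glRenderMode(GL_RENDER);
}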

Yes, the ray cast is an alternative, but as we can see, it is a pretty complicated solution. :slight_smile:
That’s why I argued that legacy OpenGL had a more elegant (and, for programmers, easier) solution (leaving aside the story about HW acceleration).

Thanks for the help, guys. It seems not very useful in isolation; you need to add the other parts of the classic GL pipeline, the selection/name stack, etc.

@Aleksandar

It looks like Mesa lifted their code from the SGI code; the matrix in the SGI source is the same. And indeed it simply makes a frustum the size of the pick region and then scales it to the viewport, almost like the viewport matrix itself.

It’s a bit hard to grasp, though, because classic GL had so much going on under the hood, with all the stacks interacting with each other and such. But it seems like a simple viewport-type matrix.
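For my library, I think it boils down to something like this (column-major like GL, meant to be multiplied in front of the projection matrix: pick * projection). At least that is my reading of the SGI code, so treat it as a sketch:

void pick_matrix(double x, double y, double dx, double dy,
                 const int viewport[4], double m[16])
{
   double sx = viewport[2] / dx;
   double sy = viewport[3] / dy;
   double tx = (viewport[2] - 2.0 * (x - viewport[0])) / dx;
   double ty = (viewport[3] - 2.0 * (y - viewport[1])) / dy;

   /* column-major: scale on the diagonal, translation in elements 12/13 */
   for (int i = 0; i < 16; ++i)
      m[i] = 0.0;
   m[0]  = sx;
   m[5]  = sy;
   m[10] = 1.0;
   m[12] = tx;
   m[13] = ty;
   m[15] = 1.0;
}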

There are no multiple stacks involved in legacy OpenGL picking, just a single stack: a stack of objects’ IDs.
If you would like to mimic the way it works, you should be able to:

  1. draw a single object at a time
  2. after each draw, check whether the framebuffer has changed
  3. if it has changed, read the depth buffer and write into the selection buffer: the number of entries in the ID stack, the normalized Z-value, and all values in the stack (bottom to top)

Legacy OpenGL ignores depth when executing step 2. Using a pre-Z pass to avoid overdraw would actually speed up the process and eliminate background objects from the selection buffer. The challenge is just to perform step 2 efficiently.
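On modern hardware, one way to approximate step 2 is an occlusion query per object instead of diffing the framebuffer. With the depth test enabled this is effectively the pre-Z variant; disable depth testing to mimic the legacy behaviour. A rough sketch, where draw_object() is a placeholder and the query object comes from glGenQueries():

void draw_object(int i);   /* hypothetical per-object draw call */

int object_is_hit(GLuint query, int i)
{
   GLuint samples = 0;

   glBeginQuery(GL_SAMPLES_PASSED, query);
   draw_object(i);
   glEndQuery(GL_SAMPLES_PASSED);

   /* blocks until the result is available; batch the queries in real code */
   glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
   return samples > 0;
}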

Interesting technique, I will keep that in mind.