From mouse coordinates to world coordinates through gluUnProject

Hi!

I am trying to retrieve my mouse coordinates (in a QGLWidget through Qt) and estimate the corresponding 2D coordinates in the virtual world (all my vertices have z=0).

To do so, I wrote this:

modelViewMatrix = np.asarray(matView * matModel)
viewport = glGetIntegerv(GL_VIEWPORT)

z = 0

x, y, z = GLU.gluUnProject(float(self.mouseX), float(self.mouseY), float(z),
                           model=modelViewMatrix,
                           proj=np.asarray(matProj), view=viewport)

matModel is always the identity matrix (numpy.eye(4)), while matView and matProj are computed with the lookat(eye, target, up) and perspective(fovy, aspect, near, far) functions previously provided by GClements. matView is the only matrix that changes through mouse events in my app.
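For reference, lookat() and perspective() are commonly built as below; this is only a sketch of the usual formulas (GClements' actual implementations may differ in detail), using numpy matrices as elsewhere in this thread:

```python
import numpy as np

def lookat(eye, target, up):
    """Build a view matrix looking from eye towards target (gluLookAt-style)."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                  # forward direction
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                  # right direction
    u = np.cross(s, f)                      # corrected up direction
    m = np.asmatrix(np.eye(4))
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] * np.asmatrix(eye).T   # translate eye to origin
    return m

def perspective(fovy, aspect, n, f):
    """Build a perspective projection matrix (fovy in degrees, gluPerspective-style)."""
    t = 1.0 / np.tan(np.radians(fovy) / 2.0)
    m = np.asmatrix(np.zeros((4, 4)))
    m[0, 0] = t / aspect
    m[1, 1] = t
    m[2, 2] = (f + n) / (n - f)
    m[2, 3] = 2.0 * f * n / (n - f)
    m[3, 2] = -1.0                          # puts -z_eye into clip W
    return m
```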

I also provide my vertex shader:

VERTEX_SHADER = """
#version 440 core

uniform float scale;
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;

in vec2 position;   
in vec4 color;     

out vec4 v_color;


void main()
{
    gl_Position =  Projection*View*Model*vec4(scale*position, 0.0, 1.0);
    v_color = color;
    
}
 """

To test my code snippet, I draw a square with vertices (-1,-1), (-1,1), (1,-1) and (1,1). But when I move my mouse over a corner, I do not get the expected ±1 coordinates.

So, I guess there is something wrong in my code…

Hope someone could help!

neon29

You don’t appear to be taking “scale” into consideration. Also, bear in mind that gluUnProject() expects the matrices in column-major order, whereas numpy typically uses row-major order.

Aside from that: there’s no reason to use gluUnProject() if you’re using numpy. You can obtain the inverse of a numpy matrix via the “I” attribute (e.g. np.asmatrix(m).I), then just use that to transform the window coordinates expressed as a column vector. I typically generate a matrix for the viewport transformation (that’s one of the functions in the code I posted previously), then use e.g.


# invert the combined window transform and apply it to the window coordinates
p = (viewport * projection * modelview).I * np.matrix([[x, y, z, 1]]).T
p = p.A.T[0]   # back to a flat numpy array
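The viewport matrix mentioned above maps NDC [-1, 1] to window coordinates; a minimal sketch of such a function (the version GClements actually posted may differ, e.g. in how it handles the y direction or the depth range):

```python
import numpy as np

def viewport(x, y, w, h, near=0.0, far=1.0):
    """4x4 matrix mapping NDC [-1,1] to window coords [x,x+w] x [y,y+h],
    and NDC z to the [near,far] depth range (glViewport/glDepthRange-style)."""
    m = np.asmatrix(np.eye(4))
    m[0, 0] = w / 2.0; m[0, 3] = x + w / 2.0
    m[1, 1] = h / 2.0; m[1, 3] = y + h / 2.0
    m[2, 2] = (far - near) / 2.0; m[2, 3] = (far + near) / 2.0
    return m
```

Inverting this matrix together with projection and modelview, as in the snippet above, gives the window-to-world mapping.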

BTW, keeping a copy of the viewport parameters around is preferable to reading them back via glGet().

I should remove this uniform as it always equals 1.

I will test your code :).

EDIT:

When I try this:

viewport = viewport(0, 0, self.width(), self.height())  # your viewport function
modelview = matModel * matView  # np.asmatrix(np.eye(4)) * lookat()
p = (viewport * projection * modelview).I * np.matrix([[self.mouseX, self.mouseY, 0, 1]]).T
p = p.A.T[0]

I get for my square:

upper left corner: (-0.33, -0.33)
upper right corner: (0.33, -0.33)
bottom left corner: (-0.33, 0.33)
bottom right corner: (0.33, 0.33)

But when I move my camera through lookat, these values change, whereas they should stay constant, shouldn't they?

Where are these numbers coming from? I think you need to show more code.

Also: if you want (2D or 3D) Euclidean coordinates, don’t forget to divide by W.


p = p[:3] / p[3]

But if you were having the same issue with gluUnProject(), it isn’t just that.
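Putting the inverse transform and the perspective divide together, a window-to-world helper might look like this (a sketch only, assuming the 4x4 numpy matrix conventions used elsewhere in this thread):

```python
import numpy as np

def unproject(win_x, win_y, win_z, modelview, projection, viewport_m):
    """Map window coordinates back to world space.
    All matrix arguments are 4x4 np.matrix; win_z is the window depth."""
    inv = (viewport_m * projection * modelview).I
    p = inv * np.matrix([[win_x, win_y, win_z, 1.0]]).T
    p = p.A.ravel()
    return p[:3] / p[3]   # perspective divide: homogeneous -> Euclidean
```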

These values are p[0] and p[1] for the four vertices.

Here is additional code:

The method called on the wheel event:

def wheelEvent(self, e):
    zStep = -e.delta() / 100
    self.camEye[2] += zStep
    self.updateCamera()

The method which updates the view matrix:


def updateCamera(self):
    print('Camera update!')
    self.matView = fn.lookat(self.camEye, self.camTarget, self.camUp)
    glUseProgram(self.shaderProgram)
    loc = glGetUniformLocation(self.shaderProgram, 'View')
    glUniformMatrix4fv(loc, 1, False, np.ascontiguousarray(self.matView.T))
    glUseProgram(0)
    #print('Model', self.matModel)
    #print('Camera', self.matView)
    #print('Projection', self.matProj)
    self.updateGL()

The initializeGL function:

def initializeGL(self):
    print('initializeGL')

    self.viewport = fn.viewport(0, 0, self.width(), self.height())

    # compile shaders and program
    vertexShader = shaders.compileShader(VERTEX_SHADER, GL_VERTEX_SHADER)
    fragmentShader = shaders.compileShader(FRAGMENT_SHADER, GL_FRAGMENT_SHADER)
    self.shaderProgram = shaders.compileProgram(vertexShader, fragmentShader)
    print(self.shaderProgram)

    # Init uniforms
    glUseProgram(self.shaderProgram)

    # Model matrix
    self.matModel = np.asmatrix(np.eye(4))
    print(self.matModel.dtype)

    loc = glGetUniformLocation(self.shaderProgram, 'Model')
    glUniformMatrix4fv(loc, 1, False, np.ascontiguousarray(self.matModel.T))

    # View matrix
    self.matView = fn.lookat(np.array([0, 0, 0]), np.array([0, 0, 10]), np.array([0, 1, 0]))
    loc = glGetUniformLocation(self.shaderProgram, 'View')
    glUniformMatrix4fv(loc, 1, False, np.ascontiguousarray(self.matView.T))

    # Projection matrix
    self.matProj = fn.perspective(fovy=45, aspect=1.0, n=1.0, f=100000.0)
    loc = glGetUniformLocation(self.shaderProgram, 'Projection')
    glUniformMatrix4fv(loc, 1, False, np.ascontiguousarray(self.matProj.T))

    glUseProgram(0)

EDIT:
I managed to solve the direct (forward) problem: computing, for a given vertex (for example (1,1)), the location where it should appear on screen. I compared those coordinates with my mouse coordinates when the cursor is over the corresponding vertex ((1,1) = upper right) and it works. In effect, I reproduced what OpenGL does, as explained here: http://www.songho.ca/opengl/gl_transform.html
However, I have not managed to reverse the process to recover the vertex.

My code:

vertex = np.matrix([-1, -1, 0, 1]).T
eyeCoord = self.matView * self.matModel * vertex
clipCoord = self.matProj * eyeCoord
ndcCoord = clipCoord / clipCoord[3]
ndcCoord = ndcCoord[0:3, 0]

screenCoordX = self.width() * ndcCoord[0, 0] / 2 + self.width() / 2
screenCoordY = -self.height() * ndcCoord[1, 0] / 2 + self.height() / 2

I solved it using glReadPixels to read the depth buffer. Here is the working code:


# read the depth under the mouse (window y is flipped for glReadPixels)
depth = glReadPixels(self.mouseX, self.height() - self.mouseY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT)
z = 2 * depth - 1   # window depth [0,1] -> NDC depth [-1,1]

mouseWorld = (self.viewport * self.matProj * self.matView * self.matModel).I \
             * np.matrix([[self.mouseX, self.mouseY, z, 1]]).T

With these parameters, both the viewpoint and the vertex have Z=0, so you’ll end up with clip W=0 and ndcCoord[…] == Infinity.

If you’re trying to find the coordinates on a plane perpendicular to the view direction, that isn’t necessary. You can transform any point in the plane by the forward transformation to get the NDC Z value, then use that value for the reverse transformation to find a point on the plane with given X,Y window coordinates.
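The forward-then-reverse approach just described can be sketched as follows (assuming the same combined viewport * projection * modelview matrix used earlier in the thread; the function name is illustrative):

```python
import numpy as np

def unproject_onto_plane(win_x, win_y, plane_point, mvp_viewport):
    """Find the world point on a view-perpendicular plane under the mouse.
    mvp_viewport = viewport * projection * modelview (4x4 np.matrix);
    plane_point is any world-space point known to lie on the plane."""
    # Forward-transform a known point on the plane to get its window depth.
    q = mvp_viewport * np.matrix([list(plane_point) + [1.0]]).T
    win_z = q[2, 0] / q[3, 0]
    # Reverse-transform the mouse position at that depth.
    p = mvp_viewport.I * np.matrix([[win_x, win_y, win_z, 1.0]]).T
    p = p.A.ravel()
    return p[:3] / p[3]
```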

If you’re trying to pick a point on a rendered scene, you need to read the depth buffer to get the Z value. By themselves, 2D coordinates define a line in 3D space, not a point. If you want the line (e.g. to raycast against geometry), you can inverse-transform both [x,y,-1,1] and [x,y,1,1] to obtain the points where the line intersects the near and far planes.
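The ray construction just mentioned, sketched under the same matrix assumptions (here the near and far planes are taken as window depths 0 and 1; adjust if your viewport matrix maps NDC z differently):

```python
import numpy as np

def window_ray(win_x, win_y, mvp_viewport):
    """Return (near_point, far_point) of the 3D line under a window position,
    by inverse-transforming the near-plane and far-plane window depths.
    mvp_viewport = viewport * projection * modelview (4x4 np.matrix)."""
    inv = mvp_viewport.I
    points = []
    for win_z in (0.0, 1.0):   # near plane, then far plane
        p = inv * np.matrix([[win_x, win_y, win_z, 1.0]]).T
        p = p.A.ravel()
        points.append(p[:3] / p[3])
    return points
```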

[QUOTE=GClements;1281245]With these parameters, both the viewpoint and the vertex have Z=0, so you’ll end up with clip W=0 and ndcCoord[…] == Infinity.
[/QUOTE]

Yes, it fails for this set of parameters but as soon as I move the camera, it works.

Yes, indeed, I did not think about that.

Since [x,y,-1,1] and [x,y,1,1] define a line in 3D space, suppose I want to retrieve all the objects that such a line crosses/intersects; it is then just pure math (intersection between line and plane), right?

Line-polygon, line-sphere, line-cube, whatever suits the task.
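For the line-plane case, the math is indeed straightforward; a minimal sketch (function and parameter names are illustrative):

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect the line origin + t*direction with a plane.
    Returns the hit point, or None if the line is parallel to the plane."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-12:
        return None   # parallel: no (unique) intersection
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction
```

For the z=0 plane of this thread, plane_point=(0, 0, 0) and plane_normal=(0, 0, 1), with the ray taken from the two unprojected points.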

For intersecting against a set of objects, it’s often simpler to transform the objects to NDC with transform-feedback mode and test against a 2D point. Testing a line against world/model-space coordinates may be preferable if you have some kind of spatial index (octree, bounding-box hierarchy, etc).