Part of the Khronos Group
OpenGL.org


Thread: Mouse coords to vertex?

  1. #1
    Junior Member Newbie
    Join Date
    Oct 2017
    Location
    SE Pennsylvania on 6 acre wooded hillside
    Posts
    10

    Mouse coords to vertex?

Using Xcode 9 on macOS 10.12

    I know how to find mouse position on mouse_down event in window coordinates, but can anyone think of a way to backtrack this to an object or, even better, a particular vertex or triangle that is being pointed to?

    I think it might be possible if I poll the depth buffer but haven't got my mind wrapped around an approach yet.

  2. #2
    Senior Member OpenGL Guru
    Join Date
    Jun 2013
    Posts
    2,525
    Quote Originally Posted by Goofus43
    I know how to find mouse position on mouse_down event in window coordinates, but can anyone think of a way to backtrack this to an object or, even better, a particular vertex or triangle that is being pointed to?

    I think it might be possible if I poll the depth buffer but haven't got my mind wrapped around an approach yet.
    If you want to translate a 2D window position to a 3D position in eye space or object space, you need to read the corresponding pixel from the depth buffer, then apply the inverse projection transformation to get eye space, then (optionally) the inverse model-view transformation to get object space.
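    The depth-buffer route above can be sketched in plain C++ (no GL calls; the depth value would come from glReadPixels with GL_DEPTH_COMPONENT). This assumes a symmetric gluPerspective-style projection and the default glDepthRange(0, 1); the struct and parameter names are my own:

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Assumed symmetric perspective projection (as set up by gluPerspective):
    // fovY in radians, aspect = width/height, near/far clip distances.
    struct Persp { float fovY, aspect, zNear, zFar; };

    // Map a window position (sx, sy in pixels, origin bottom-left, matching
    // glReadPixels conventions) and a depth value d in [0,1] back to eye space.
    Vec3 unproject(const Persp& p, float sx, float sy, float d,
                   float winW, float winH)
    {
        // Window -> normalized device coordinates in [-1, 1].
        float ndcX = 2.0f * sx / winW - 1.0f;
        float ndcY = 2.0f * sy / winH - 1.0f;
        float ndcZ = 2.0f * d - 1.0f;            // default glDepthRange(0, 1)

        // Invert the projection's depth mapping: ndcZ = (A*z + B) / (-z).
        float A = (p.zFar + p.zNear) / (p.zNear - p.zFar);
        float B = 2.0f * p.zFar * p.zNear / (p.zNear - p.zFar);
        float zEye = -B / (A + ndcZ);            // negative: in front of the eye

        // Invert the x/y scaling: ndcX = (f/aspect)*x / (-z), ndcY = f*y / (-z).
        float f = 1.0f / std::tan(p.fovY * 0.5f);
        float xEye = ndcX * -zEye * p.aspect / f;
        float yEye = ndcY * -zEye / f;
        return { xEye, yEye, zEye };
    }

    int main()
    {
        Persp p { 1.0f, 4.0f / 3.0f, 0.5f, 100.0f };

        // Round-trip check: project a known eye-space point by hand, then
        // unproject it and verify we get the same point back.
        Vec3 eye { 1.0f, -2.0f, -10.0f };
        float f = 1.0f / std::tan(p.fovY * 0.5f);
        float ndcX = (f / p.aspect * eye.x) / -eye.z;
        float ndcY = (f * eye.y) / -eye.z;
        float A = (p.zFar + p.zNear) / (p.zNear - p.zFar);
        float B = 2.0f * p.zFar * p.zNear / (p.zNear - p.zFar);
        float d = ((A * eye.z + B) / -eye.z) * 0.5f + 0.5f;

        float winW = 800.0f, winH = 600.0f;
        Vec3 r = unproject(p, (ndcX * 0.5f + 0.5f) * winW,
                              (ndcY * 0.5f + 0.5f) * winH, d, winW, winH);
        assert(std::fabs(r.x - eye.x) < 1e-3f);
        assert(std::fabs(r.y - eye.y) < 1e-3f);
        assert(std::fabs(r.z - eye.z) < 1e-3f);
        std::printf("eye-space point: (%g, %g, %g)\n", r.x, r.y, r.z);
        return 0;
    }
    ```

    Applying the inverse model-view matrix to the result then gives object-space coordinates, as described above.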

    If you want to identify a specific primitive (picking), there are a variety of approaches. In legacy OpenGL, there's selection mode (glRenderMode(GL_SELECT)). In modern OpenGL, you can render the scene using primitive IDs instead of colours (typically to a 1x1 framebuffer; you can cull anything outside of the pixel at the mouse position), or you can transform the 2D mouse position to a line then use triangle-line intersection calculations to identify the triangle (you can use the GPU to transform vertex positions to eye space using transform-feedback mode).
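    The last option (mouse position to a line, then triangle-line intersection) reduces to a standard ray/triangle test once the ray is built by unprojecting the mouse position at two depths. A self-contained CPU sketch using the Möller-Trumbore algorithm, with my own names and epsilon:

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <cstdio>

    struct V3 { float x, y, z; };

    static V3 sub(V3 a, V3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static V3 cross(V3 a, V3 b)  { return { a.y*b.z - a.z*b.y,
                                            a.z*b.x - a.x*b.z,
                                            a.x*b.y - a.y*b.x }; }
    static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Möller-Trumbore ray/triangle intersection. Returns true and the distance
    // t along the ray if the ray (orig + t*dir, t >= 0) hits triangle (a, b, c).
    bool rayTriangle(V3 orig, V3 dir, V3 a, V3 b, V3 c, float* tOut)
    {
        const float EPS = 1e-7f;
        V3 e1 = sub(b, a), e2 = sub(c, a);
        V3 p = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < EPS) return false;      // ray parallel to triangle
        float inv = 1.0f / det;
        V3 s = sub(orig, a);
        float u = dot(s, p) * inv;                   // first barycentric coord
        if (u < 0.0f || u > 1.0f) return false;
        V3 q = cross(s, e1);
        float v = dot(dir, q) * inv;                 // second barycentric coord
        if (v < 0.0f || u + v > 1.0f) return false;
        float t = dot(e2, q) * inv;
        if (t < 0.0f) return false;                  // hit is behind the origin
        *tOut = t;
        return true;
    }

    int main()
    {
        // Eye at the origin looking down -z; a triangle at z = -5 covering it.
        V3 orig { 0, 0, 0 }, dir { 0, 0, -1 };
        V3 a { -1, -1, -5 }, b { 1, -1, -5 }, c { 0, 1, -5 };
        float t;
        bool hit = rayTriangle(orig, dir, a, b, c, &t);
        assert(hit && std::fabs(t - 5.0f) < 1e-5f);
        std::printf("hit at t = %g\n", t);

        // A ray pointing the opposite way misses.
        assert(!rayTriangle(orig, V3{0, 0, 1}, a, b, c, &t));
        return 0;
    }
    ```

    Run the test against every candidate triangle (in eye space or world space, as long as the ray and the vertices are in the same space) and keep the hit with the smallest t; that is the triangle under the mouse.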

  3. #3
    Junior Member Newbie
    Join Date
    Oct 2017
    Location
    SE Pennsylvania on 6 acre wooded hillside
    Posts
    10
    Quote Originally Posted by GClements
    If you want to translate a 2D window position to a 3D position in eye space or object space, you need to read the corresponding pixel from the depth buffer, then apply the inverse projection transformation to get eye space, then (optionally) the inverse model-view transformation to get object space.

    If you want to identify a specific primitive (picking), there are a variety of approaches. In legacy OpenGL, there's selection mode (glRenderMode(GL_SELECT)). In modern OpenGL, you can render the scene using primitive IDs instead of colours (typically to a 1x1 framebuffer; you can cull anything outside of the pixel at the mouse position), or you can transform the 2D mouse position to a line then use triangle-line intersection calculations to identify the triangle (you can use the GPU to transform vertex positions to eye space using transform-feedback mode).
    Your bottom option is the one I'd like to use, and I think I understand your explanation as:

    (Since the eye is at the center of the projected image, the lateral displacement of the mouse gives a direction vector whose endpoint depends on the contact distance. By reading the depth buffer at that pixel I can determine the distance and thus recover the true world coordinates. From there I can probably find the nearest vertex, perhaps with help from the vertex shader.)

    I hope that's at least an approximately correct interpretation. It might take a bit of fiddling, but it looks like a good direction to start in.

    Thanks a lot.
