Mouse coords to vertex?

Using Xcode 9 on macOS 10.12

I know how to find the mouse position in a mouse_down event, in window coordinates, but can anyone think of a way to backtrack from this to an object or, even better, to the particular vertex or triangle being pointed at?

I think it might be possible if I read the depth buffer, but I haven't got my mind wrapped around an approach yet.

[QUOTE=Goofus43;1288900]
I know how to find the mouse position in a mouse_down event, in window coordinates, but can anyone think of a way to backtrack from this to an object or, even better, to the particular vertex or triangle being pointed at?

I think it might be possible if I read the depth buffer, but I haven't got my mind wrapped around an approach yet.[/QUOTE]

If you want to translate a 2D window position to a 3D position in eye space or object space, you need to read the corresponding pixel from the depth buffer, then apply the inverse projection transformation to get eye space, then (optionally) the inverse model-view transformation to get object space.
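In NumPy-flavoured pseudocode, that chain of inverse transforms might look something like this (the function name, parameters, and matrices here are illustrative, not taken from any particular codebase):

```python
import numpy as np

def unproject(win_x, win_y, depth, viewport, projection, modelview):
    """Undo the viewport, projection, and model-view transforms.

    win_x, win_y : mouse position in window coordinates
    depth        : value read from the depth buffer, in [0, 1]
    viewport     : (x, y, width, height)
    """
    vx, vy, vw, vh = viewport
    # Window coordinates -> normalized device coordinates in [-1, 1]
    ndc = np.array([
        2.0 * (win_x - vx) / vw - 1.0,
        2.0 * (win_y - vy) / vh - 1.0,
        2.0 * depth - 1.0,
        1.0,
    ])
    # Inverse projection -> eye space (followed by the perspective divide)
    eye = np.linalg.inv(projection) @ ndc
    eye /= eye[3]
    # (Optional) inverse model-view -> object/world space
    return (np.linalg.inv(modelview) @ eye)[:3]
```

One thing to watch: depending on whether the windowing system puts the origin at the top-left or bottom-left, win_y may need flipping (window height minus mouse y) before unprojecting.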

If you want to identify a specific primitive (picking), there are a variety of approaches. In legacy OpenGL, there’s selection mode (glRenderMode(GL_SELECT)). In modern OpenGL, you can render the scene using primitive IDs instead of colours (typically to a 1x1 framebuffer; you can cull anything outside of the pixel at the mouse position), or you can transform the 2D mouse position to a line then use triangle-line intersection calculations to identify the triangle (you can use the GPU to transform vertex positions to eye space using transform-feedback mode).
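For the triangle-line intersection variant, the core test per triangle is a ray/triangle intersection. Here's a minimal NumPy sketch of the standard Möller–Trumbore algorithm (the function name and parameters are illustrative):

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the distance t along the ray to the triangle, or None on a miss."""
    e1 = v1 - v0
    e2 = v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # ray is (nearly) parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det  # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t >= 0.0 else None
```

Running this against every triangle and keeping the smallest positive t gives the picked primitive; for large meshes you'd want a spatial acceleration structure rather than a brute-force loop.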

[QUOTE=GClements;1288905]If you want to translate a 2D window position to a 3D position in eye space or object space, you need to read the corresponding pixel from the depth buffer, then apply the inverse projection transformation to get eye space, then (optionally) the inverse model-view transformation to get object space.

If you want to identify a specific primitive (picking), there are a variety of approaches. In legacy OpenGL, there’s selection mode (glRenderMode(GL_SELECT)). In modern OpenGL, you can render the scene using primitive IDs instead of colours (typically to a 1x1 framebuffer; you can cull anything outside of the pixel at the mouse position), or you can transform the 2D mouse position to a line then use triangle-line intersection calculations to identify the triangle (you can use the GPU to transform vertex positions to eye space using transform-feedback mode).[/QUOTE]

Your last option is the one I'd like to use, and I think I understand your explanation as follows:

(Since the eye is at the centre of the projected image, the lateral displacement of the mouse gives a direction vector whose end point depends on the contact distance. By reading the depth buffer at that pixel I can work out the distance and thus recover the true world coordinates. From there I can probably find the nearest vertex, possibly with help from the vertex shader program.)

I hope this is at least an approximately correct interpretation. It might take a bit of fiddling, but it looks like a good direction to start in.
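For what it's worth, the last step of that plan (snapping the recovered world-space point to the closest mesh vertex) can be sketched as a brute-force search (names and data here are illustrative only):

```python
import numpy as np

def nearest_vertex(point, vertices):
    """Return (index, distance) of the vertex closest to `point`.

    point    : (3,) world-space position recovered from the depth buffer
    vertices : (N, 3) array of mesh vertex positions in the same space
    """
    d = np.linalg.norm(vertices - point, axis=1)
    i = int(np.argmin(d))
    return i, float(d[i])
```

For a small mesh this is fine as-is; for anything large, a k-d tree or grid over the vertices would avoid scanning them all per click.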

Thanks a lot.