View Full Version : How to convert screen coordinate to 3D world coordinate

happy_sweet_juice

06-07-2017, 09:35 AM

I wrote a little program with glm.

https://pastebin.com/mcz9b0Zy

First I convert a 3D world coordinate to screen coordinates, then convert it back. But the result is strange. Where is my mistake?

GClements

06-07-2017, 10:52 AM

Where is my mistake?

Here:

//Create the input point from the screen coordinates (x, y, z, 1); z is unknown, so it is set to 1

glm::vec4 screenInputPoint = glm::vec4(screenPoint[0], screenPoint[1], 1, 1);

You need the Z coordinate, which you may be able to get from the depth buffer. Don't forget to divide Z by W, as with X and Y.

Even then, reversing the process will only give you equivalent Euclidean coordinates; it can't recover the original object-space W coordinate.

happy_sweet_juice

06-07-2017, 11:16 AM

But as far as I know, that is only one method.

I want to use the ray-casting method, and I think that way I don't need a Z coordinate.

GClements

06-07-2017, 09:28 PM

I want to use the ray-casting method, and I think that way I don't need a Z coordinate.

For a ray cast, you need to choose two sets of screen coordinates, each with the same X and Y coordinates but with different Z coordinates, then transform them to eye/world/object space. The ray is the line which passes through those two points; any point on that line will have the same screen-space X and Y coordinates.

Essentially, you're transforming a ray in screen-space to a ray in world space, by transforming two of the points on that ray. Given any two distinct points on a line, every point on that line can be expressed as a linear combination of those two points.

happy_sweet_juice

06-08-2017, 12:37 AM

Is it important to choose different Z values before the transform? I think it is possible to convert just one point and create the ray in the camera's viewing direction.

GClements

06-08-2017, 09:49 AM

Is it important to choose different Z values before the transform? I think it is possible to convert just one point and create the ray in the camera's viewing direction.

Only one point needs to be transformed to eye space, as the eye-space point (0,0,0,W) always lies on the ray.

So you can transform e.g. (X,Y,0,1) to eye space, then transform both the result and (0,0,0,1) to world space (or object space) to get the ray. Note that transforming (0,0,0,1) just means taking the right-hand column from the inverse model-view matrix.

happy_sweet_juice

06-10-2017, 09:34 AM

Thank you for the help. I will use the depth buffer. It works, and I think it is not too costly.

Powered by vBulletin® Version 4.2.5 Copyright © 2019 vBulletin Solutions Inc. All rights reserved.