How to convert screen coordinates to 3D world coordinates

I wrote a little program with glm:
https://pastebin.com/mcz9b0Zy
First I convert a 3D world coordinate to a screen coordinate, then back again. But the results are strange. Where is my mistake?


You need the Z coordinate, which you may be able to get from the depth buffer. Don’t forget to divide Z by W, as with X and Y.

Even then, reversing the process will only give you equivalent Euclidean coordinates; it can’t recover the original object-space W coordinate.
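For illustration, a minimal sketch of the depth-buffer approach with glm (assuming the depth value has already been read back, e.g. via glReadPixels with GL_DEPTH_COMPONENT, and that the same view/projection matrices and viewport used for rendering are available; the function and variable names here are just placeholders):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Recover the world-space position of a pixel from its window X/Y
// (Y measured from the bottom, as in OpenGL window coordinates) and
// the depth-buffer value at that pixel (Z in [0,1]).
glm::vec3 screenToWorld(float winX, float winY, float depth,
                        const glm::mat4& view, const glm::mat4& proj,
                        const glm::vec4& viewport)
{
    // glm::unProject applies the inverse of proj * view and performs the
    // divide by W internally, so X, Y and Z are all handled consistently.
    return glm::unProject(glm::vec3(winX, winY, depth), view, proj, viewport);
}
```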

But as far as I know, that is only one method.
I want to use the ray-casting method, so I think I don't need a Z coordinate.

For a ray cast, you need to choose two sets of screen coordinates, each with the same X and Y coordinates but with different Z coordinates, then transform them to eye/world/object space. The ray is the line which passes through those two points; any point on that line will have the same screen-space X and Y coordinates.

Essentially, you’re transforming a ray in screen-space to a ray in world space, by transforming two of the points on that ray. Given any two distinct points on a line, every point on that line can be expressed as a linear combination of those two points.
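A possible sketch of that two-point approach with glm::unProject, taking window Z = 0 (near plane) and window Z = 1 (far plane) as the two depths; the Ray struct and the function name are hypothetical:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

struct Ray { glm::vec3 origin; glm::vec3 direction; };

// Build a world-space ray through the pixel (winX, winY) by unprojecting
// the same window X/Y at two different window-space depths.
Ray screenRay(float winX, float winY,
              const glm::mat4& view, const glm::mat4& proj,
              const glm::vec4& viewport)
{
    glm::vec3 nearPoint = glm::unProject(glm::vec3(winX, winY, 0.0f), view, proj, viewport);
    glm::vec3 farPoint  = glm::unProject(glm::vec3(winX, winY, 1.0f), view, proj, viewport);

    // Every point nearPoint + t * (farPoint - nearPoint) projects back to
    // the same window X and Y.
    return { nearPoint, glm::normalize(farPoint - nearPoint) };
}
```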

Is it important to choose different Z values before transforming? I think it's possible to convert just one point and create the ray in the camera's viewing direction.

Only one point needs to be transformed to eye space, as the eye-space point (0,0,0,W) always lies on the ray.

So you can transform e.g. (X,Y,0,1) to eye space, then transform both the result and (0,0,0,1) to world space (or object space) to get the ray. Note that transforming (0,0,0,1) just means taking the right-hand column from the inverse model-view matrix.
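A sketch of that single-point variant, assuming X and Y are already normalized device coordinates in [-1,1] (the Ray struct and names are again just placeholders):

```cpp
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin; glm::vec3 direction; };

Ray screenRayOnePoint(float ndcX, float ndcY,
                      const glm::mat4& view, const glm::mat4& proj)
{
    // Transform (X, Y, 0, 1) from clip space to eye space; don't forget
    // the divide by W afterwards.
    glm::vec4 eye = glm::inverse(proj) * glm::vec4(ndcX, ndcY, 0.0f, 1.0f);
    eye /= eye.w;

    glm::mat4 invView = glm::inverse(view);

    // The eye-space origin (0,0,0,1) is the camera position; in world space
    // it is simply the right-hand column of the inverse view matrix.
    glm::vec3 camWorld   = glm::vec3(invView[3]);
    glm::vec3 pointWorld = glm::vec3(invView * eye);

    return { camWorld, glm::normalize(pointWorld - camWorld) };
}
```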

Thank you for the help. I will use the depth buffer; it works and doesn't seem too costly.