But it doesn’t work correctly, and I don’t know why. I know the rest of the shader is valid, because if I output the xyz view-space position for each fragment, I get correct results.
Here’s the shader code I use to compute the projection and unprojection:
// get the view-space position of a point
// depth is linear, in [zNear, zFar] (so > 0)
vec3 getViewPosition(vec2 fragment, float depth, float tanFovY, float aspect)
{
    vec3 pos;
    pos.x = (fragment.x * 2.0 - 1.0) * tanFovY * aspect;
    pos.y = (fragment.y * 2.0 - 1.0) * tanFovY;
    pos.z = -1.0;
    pos = pos * depth;
    return pos;
}
// get the fragment position from a view-space position (the inverse of the above)
// view is a GL view-space position, so view.z < 0
vec2 getFragPosition(vec3 view, float tanFovY, float aspect)
{
    // extract positive linear depth
    float z = -view.z;
    vec2 frag;
    frag.x = 0.5 + (view.x * 0.5 / (z * tanFovY * aspect));
    frag.y = 0.5 + (view.y * 0.5 / (z * tanFovY));
    // done!
    return frag;
}
Getting rid of the matrices is exactly what is done in the link you were given. The derivation of the formula does use matrices, but once the formula has been worked out, no matrices are involved anymore.
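For reference, here is how the matrix terms collapse, assuming a standard symmetric perspective projection with vertical field of view $\theta$ and aspect ratio $a$ (only the matrix entries that survive are shown):

$$
P_{00} = \frac{1}{a\,\tan(\theta/2)}, \qquad
P_{11} = \frac{1}{\tan(\theta/2)}, \qquad
w_{\text{clip}} = -z_{\text{view}}
$$

$$
x_{\text{ndc}} = \frac{P_{00}\,x_{\text{view}}}{w_{\text{clip}}}
             = \frac{x_{\text{view}}}{-z_{\text{view}}\,a\,\tan(\theta/2)}
\;\Longrightarrow\;
x_{\text{frag}} = 0.5 + 0.5\,x_{\text{ndc}}
$$

which is exactly the `frag.x` line in `getFragPosition` (and likewise for `y` without the aspect factor), so no matrix multiply is needed at runtime.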