# Compute eye space from window space


This page explains how to recompute eye-space vertex positions given window-space vertex positions. This is shown for two cases. Case 1 uses `gl_FragCoord` in its entirety. Case 2 uses only `gl_FragCoord.xyz`. Both require access to the projection matrix.

## Definitions

Before we begin, we need to define some symbols:

| Symbol | Meaning |
| --- | --- |
| M | The projection matrix |
| P | The eye-space position, a 4D vector |
| C | The clip-space position, a 4D vector |
| N | The normalized device coordinate (NDC) space position, a 3D vector |
| W | The window-space position, a 3D vector |
| Vx, Vy | The X and Y values passed to glViewport |
| Vw, Vh | The width and height values passed to glViewport |
| Dn, Df | The near and far values passed to glDepthRange |

## From gl_FragCoord

`gl_FragCoord.xyz` is the window-space position W, a 3D vector quantity. `gl_FragCoord.w` contains the inverse of the clip-space W: $gl\_FragCoord_w = \tfrac{1}{C_w}$.

Given these values, we have a fairly simple system of equations:

\begin{align} \vec N & = \begin{bmatrix} \tfrac{(2 * W_x) - (2 * V_x)}{V_w} - 1\\ \tfrac{(2 * W_y) - (2 * V_y)}{V_h} - 1\\ \tfrac{(2 * W_z) - D_f - D_n}{D_f - D_n} \end{bmatrix}\\ \vec C_{xyz} & = \frac{\vec N}{gl\_FragCoord_w}\\ C_{w} & = \frac{1}{gl\_FragCoord_w}\\ \vec P &= M^{-1}\vec C \end{align}

In a GLSL fragment shader, the code would be as follows:

```glsl
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
    (gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;

vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
```


This assumes the presence of a uniform called `viewport`, a vec4 matching the parameters to glViewport, in the order passed to that function. It also assumes that `invPersMatrix` is the inverse of the perspective projection matrix (computing this inverse in the fragment shader is a really bad idea; compute it once and pass it as a uniform). Note that `gl_DepthRange` is a built-in variable available in the fragment shader.
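To sanity-check the recipe, here is a plain-Python sketch (not GLSL) that forward-projects an eye-space point, forms the values `gl_FragCoord` would hold, and then runs the reconstruction above. The matrix scale, viewport, and test point are arbitrary example values.

```python
# Plain-Python sketch of the reconstruction above. All concrete values
# (matrix scale, viewport, the test point) are arbitrary examples.

def mat_vec(m, v):
    # m is a row-major 4x4 matrix (list of rows); returns m * v.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def inverse4(m):
    # Gauss-Jordan elimination on the augmented matrix [m | I].
    a = [row[:] + [1.0 if r == c else 0.0 for c in range(4)]
         for r, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(4):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

near, far = 0.5, 100.0
M = [[1.5, 0.0, 0.0, 0.0],          # an example perspective projection
     [0.0, 1.5, 0.0, 0.0],
     [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
     [0.0, 0.0, -1.0, 0.0]]

vx, vy, vw, vh = 0.0, 0.0, 800.0, 600.0   # glViewport parameters
dn, df = 0.0, 1.0                         # glDepthRange parameters

P = [1.0, -2.0, -10.0, 1.0]               # eye-space position
C = mat_vec(M, P)                         # clip space
N = [C[i] / C[3] for i in range(3)]       # NDC

# The values gl_FragCoord would hold for this fragment:
frag = [(N[0] * 0.5 + 0.5) * vw + vx,     # window x
        (N[1] * 0.5 + 0.5) * vh + vy,     # window y
        N[2] * (df - dn) * 0.5 + (df + dn) * 0.5,  # window z
        1.0 / C[3]]                       # gl_FragCoord.w = 1 / C_w

# The reconstruction, mirroring the shader line by line:
ndc = [(2.0 * frag[0] - 2.0 * vx) / vw - 1.0,
       (2.0 * frag[1] - 2.0 * vy) / vh - 1.0,
       (2.0 * frag[2] - dn - df) / (df - dn),
       1.0]
clip = [x / frag[3] for x in ndc]
eye = mat_vec(inverse4(M), clip)          # recovers P
```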

## From XYZ of gl_FragCoord

This case is mostly useful for deferred rendering techniques. In deferred rendering, we render the material parameters of our objects to images. Then, we make several passes over these images, loading those material parameters and performing lighting computations on them.

In the lighting pass, we need to reconstruct the eye-space vertex position in order to do lighting. However, we no longer have the `gl_FragCoord` of the fragment that produced the material parameters. Instead, we have the window-space X and Y position, from the current `gl_FragCoord.xy`, and we have the window-space depth, sampled from the depth buffer that was also saved during the deferred pass.

What we are missing is the original window-space W coordinate.

Therefore, we must find a way to compute it from the window-space XYZ coordinate and the perspective projection matrix. This discussion will assume your perspective projection matrix is of the following form:

```
[ xx  xx  xx  xx ]
[ xx  xx  xx  xx ]
[ 0   0   T1  T2 ]
[ 0   0   E1  0  ]
```

The `xx` entries mean "anything"; they can be whatever values your projection uses. The 0s must be zeros in your projection matrix. `T1`, `T2`, and `E1` can be arbitrary terms, depending on how your projection matrix works.

If your projection matrix does not fit this form, then the following code will get a lot more complicated.
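As a concrete example, the standard glFrustum-style perspective matrix does fit this form. A quick sketch of what the three terms work out to, with arbitrary example near/far values (the sanity check uses the usual fact that NDC depth is clip z divided by clip w, with Pw = 1):

```python
# Example: the glFrustum-style perspective matrix fits the form above.
# near/far are arbitrary example values.
near, far = 0.5, 100.0
T1 = -(far + near) / (far - near)      # row 3, column 3
T2 = -2.0 * far * near / (far - near)  # row 3, column 4
E1 = -1.0                              # row 4, column 3

# Sanity check: eye-space z = -near should map to NDC z = -1, and
# eye-space z = -far to NDC z = +1 (NDC z = clip z / clip w, P_w = 1).
ndc_at_near = (T1 * -near + T2) / (E1 * -near)
ndc_at_far = (T1 * -far + T2) / (E1 * -far)
```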

### From window to NDC

We have the XYZ of window space:

$\vec W = \begin{bmatrix} gl\_FragCoord.x\\ gl\_FragCoord.y\\ fromDepthTexture \end{bmatrix}$

Computing the NDC space from window space is the same as the above:

$\vec N = \begin{bmatrix} \tfrac{(2 * W_x) - (2 * V_x)}{V_w} - 1\\ \tfrac{(2 * W_y) - (2 * V_y)}{V_h} - 1\\ \tfrac{(2 * W_z) - D_f - D_n}{D_f - D_n} \end{bmatrix}$

Just remember: the viewport and depth range parameters are, in this case, the parameters that were used to render the original scene. The viewport should not have changed of course, but the depth range certainly could (assuming you even have a depth range in the lighting pass of a deferred renderer).

### From NDC to clip

For the sake of simplicity, here are the equations for going from NDC space to clip space:

\begin{align} C_w & = \tfrac{T2}{N_z - \tfrac{T1}{E1}}\\ \vec C_{xyz} & = \vec N * C_w \end{align}

#### Derivation

Deriving those two equations is non-trivial; it's a pretty big stumbling block. Let's start with what we know.

We can convert from clip space to NDC space, so we can go back:

\begin{align} \vec N & = \tfrac{\vec C}{C_w}\\ \vec C & = \vec N * C_w \end{align}

The problem is that we don't have Cw. We were able to use `gl_FragCoord.w` to compute it before, but that value is not available when we are doing this after the fact in a deferred lighting pass.

So how do we compute it? Well, we know that the clip space position was originally computed like this:

$\vec C = M * \vec P$

Therefore, we know that Cw was computed by the dot-product of P with the fourth row of M. And given our above definition of the fourth row of M, we can conclude:

\begin{align} C_w & = E1 * P_z\\ \vec N & = \tfrac{\vec C}{E1 * P_z} \end{align}

Of course, this just trades one unknown for another. But we can still use it. Consider the Z component of N in particular:

$N_z = \tfrac{C_z}{E1 * P_z}$

It's interesting to look at where Cz comes from. As before, we know that it was computed by the dot-product of P with the third row of M. And again, given our above definition for M, we can conclude:

\begin{align} C_z & = T1 * P_z + T2 * P_w\\ N_z & = \tfrac{T1 * P_z + T2 * P_w}{E1 * P_z} \end{align}

We still have two unknown values here, Pz and Pw. However, we can assume that Pw is 1.0, as this is usually the case for eye space positions. Given that assumption, we only have one unknown, Pz, which we can solve for:

\begin{align} P_w & = 1.0\\ N_z & = \tfrac{T1 * P_z + T2}{E1 * P_z}\\ N_z & = \tfrac{T1}{E1} + \tfrac{T2}{E1 * P_z}\\ N_z - \tfrac{T1}{E1} & = \tfrac{T2}{E1 * P_z}\\ E1 * P_z & = \tfrac{T2}{N_z - \tfrac{T1}{E1}}\\ P_z & = \tfrac{T2}{E1 * (N_z - \tfrac{T1}{E1})}\\ P_z & = \tfrac{T2}{E1 * N_z - T1} \end{align}

Now armed with Pz, we can compute Cw:

\begin{align} C_w & = E1 * P_z\\ C_w & = \tfrac{T2}{N_z - \tfrac{T1}{E1}} \end{align}

And thus, we can compute the rest of C from this:

\begin{align} \vec C_{xyz} & = \vec N * C_w\\ \vec C_{xyz} & = \vec N * \left(\tfrac{T2}{N_z - \tfrac{T1}{E1}}\right) \end{align}
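A quick numeric spot-check of this closed form, in plain Python; `T1`, `T2`, `E1`, and the eye-space depth are arbitrary example values:

```python
# Spot-check of the closed form above; T1, T2, E1, and the eye-space
# depth P_z are arbitrary example values (P_w is assumed to be 1.0).
T1, T2, E1 = -1.002, -1.001, -1.0
P_z = -7.5

C_w = E1 * P_z                   # how C_w was originally produced
N_z = (T1 * P_z + T2) / C_w      # the NDC depth we would have sampled

# Recover C_w from N_z alone, using only the projection terms:
C_w_recovered = T2 / (N_z - T1 / E1)
```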

### From clip to eye

With the full 4D vector C computed, we can compute P just as before:

$\vec P = M^{-1}\vec C$

### GLSL example

Here is some GLSL sample code for what this would look like:

```glsl
uniform mat4 persMatrix;
uniform mat4 invPersMatrix;
uniform vec4 viewport;
uniform vec2 depthrange;

vec4 CalcEyeFromWindow(vec3 windowSpace)
{
    vec3 ndcPos;
    ndcPos.xy = ((2.0 * windowSpace.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
    ndcPos.z = (2.0 * windowSpace.z - depthrange.x - depthrange.y) /
        (depthrange.y - depthrange.x);

    vec4 clipPos;
    // GLSL matrices are indexed [column][row], zero-based, so
    // T2 = persMatrix[3][2], T1 = persMatrix[2][2], E1 = persMatrix[2][3].
    clipPos.w = persMatrix[3][2] / (ndcPos.z - (persMatrix[2][2] / persMatrix[2][3]));
    clipPos.xyz = ndcPos * clipPos.w;

    return invPersMatrix * clipPos;
}
```


`viewport` is a vec4 containing the parameters passed to glViewport, in the order passed to that function. `depthrange` is a vec2 containing the near and far values passed to glDepthRange.
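One subtlety worth spelling out: GLSL `mat4` indexing is `[column][row]` and zero-based, so with the matrix form above, `T1 = m[2][2]`, `T2 = m[3][2]`, and `E1 = m[2][3]`. This plain-Python sketch stores the matrix as a list of columns (GLSL-style) and mirrors the window-to-clip part of the function above, checking it against a forward projection; all concrete values are arbitrary examples.

```python
# Matrix stored column-major (a list of columns), matching GLSL's
# m[column][row] indexing. All concrete values are arbitrary examples.

def transform(cols, v):
    # m * v, with m stored as a list of columns.
    return [sum(cols[c][r] * v[c] for c in range(4)) for r in range(4)]

near, far = 0.5, 100.0
cols = [[1.5, 0.0, 0.0, 0.0],
        [0.0, 1.5, 0.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -1.0],   # T1 at [2][2], E1 at [2][3]
        [0.0, 0.0, -2.0 * far * near / (far - near), 0.0]]  # T2 at [3][2]

vx, vy, vw, vh = 0.0, 0.0, 800.0, 600.0   # viewport
dn, df = 0.0, 1.0                         # depthrange

# Forward-project an example eye-space point to window space:
P = [0.4, -0.3, -20.0, 1.0]
C = transform(cols, P)
N = [C[i] / C[3] for i in range(3)]
window = [(N[0] * 0.5 + 0.5) * vw + vx,
          (N[1] * 0.5 + 0.5) * vh + vy,
          N[2] * (df - dn) * 0.5 + (df + dn) * 0.5]

# The reconstruction, up through clip space:
ndc = [(2.0 * window[0] - 2.0 * vx) / vw - 1.0,
       (2.0 * window[1] - 2.0 * vy) / vh - 1.0,
       (2.0 * window[2] - dn - df) / (df - dn)]
clip_w = cols[3][2] / (ndc[2] - cols[2][2] / cols[2][3])
clip = [ndc[0] * clip_w, ndc[1] * clip_w, ndc[2] * clip_w, clip_w]
# Multiplying clip by the inverse projection would now recover P.
```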