Compute eye space from window space

This page will explain how to recompute eye-space vertex positions given window-space vertex positions. This will be shown for several cases.

Definitions

Before we begin, we need to define some symbols:

Symbol      Meaning
M           The projection matrix
P           The eye-space position, 4D vector
C           The clip-space position, 4D vector
N           The normalized device coordinate (NDC) space position, 3D vector
W           The window-space position, 3D vector
V_x, V_y    The X and Y values passed to glViewport
V_w, V_h    The width and height values passed to glViewport
D_n, D_f    The near and far values passed to glDepthRange

From gl_FragCoord

gl_FragCoord.xyz is the window-space position W, a 3D vector quantity. gl_FragCoord.w contains the inverse of the clip-space W:

    \mathtt{gl\_FragCoord.w} = \frac{1}{C_w}

Given these values, we have a fairly simple system of equations:

    N_{xy} = \frac{2 W_{xy} - 2 V_{xy}}{V_{wh}} - 1 \qquad N_z = \frac{2 W_z - D_n - D_f}{D_f - D_n}

    C = (N_{xyz}, 1) \cdot C_w = \frac{(N_{xyz}, 1)}{\mathtt{gl\_FragCoord.w}} \qquad P = M^{-1} C

In a GLSL fragment shader, the code would be as follows:

vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
    (gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;

// gl_FragCoord.w is 1/Cw, so dividing by it multiplies by Cw.
vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;

This assumes the presence of a uniform called viewport, which is a vec4, matching the parameters to glViewport, in the order passed to that function. Also, this assumes that invPersMatrix is the inverse of the perspective projection matrix (it is a really bad idea to compute this in the fragment shader). Note that gl_DepthRange is a built-in variable available to the fragment shader.
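
For reference, here is a minimal sketch of the declarations the snippet above assumes (how you fill them in is up to your application):

uniform vec4 viewport;       // (Vx, Vy, Vw, Vh), exactly as passed to glViewport
uniform mat4 invPersMatrix;  // inverse of the projection matrix, computed on the CPU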

From XYZ of gl_FragCoord

This case is mostly useful for deferred rendering techniques; the optimized method described later on this page is also very useful there. In deferred rendering, we render the material parameters of our objects to images. Then, we make several passes over these images, loading those material parameters and performing lighting computations on them.

In the light pass, we need to reconstruct the eye-space vertex position in order to do lighting. However, we do not actually have gl_FragCoord; not for the fragment that produced the material parameters. Instead, we have the window-space X and Y position, from gl_FragCoord.xy, and we have the window-space depth, sampled by accessing the depth buffer, which was also saved from the deferred pass.

What we are missing is the original gl_FragCoord.w, the reciprocal of the clip-space W.

Therefore, we must find a way to compute it from the window-space XYZ coordinate and the perspective projection matrix. This discussion will assume your perspective projection matrix is of the following form:

[ xx  xx  xx  xx ]
[ xx  xx  xx  xx ]
[ 0   0   T1  T2 ]
[ 0   0   E1   0 ]

The xx entries mean "anything"; they can be any values your projection uses. The 0's must actually be zeros in your projection matrix. T1, T2, and E1 can be any arbitrary terms, depending on how your projection matrix works.
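
For example, the standard perspective matrix produced by gluPerspective or glFrustum fits this form, with

    T_1 = -\frac{f + n}{f - n} \qquad T_2 = -\frac{2 f n}{f - n} \qquad E_1 = -1

where n and f are the near and far plane distances.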

If your projection matrix does not fit this form, then the following code will get a lot more complicated.

From window to NDC

We have the XYZ of window space: the 3D position W = (W_x, W_y, W_z).

Computing the NDC-space position from window space is the same as above:

    N_{xy} = \frac{2 W_{xy} - 2 V_{xy}}{V_{wh}} - 1 \qquad N_z = \frac{2 W_z - D_n - D_f}{D_f - D_n}

Just remember: the viewport and depth range parameters are, in this case, the parameters that were used to render the original scene. The viewport should not have changed of course, but the depth range certainly could (assuming you even have a depth range in the lighting pass of a deferred renderer).

From NDC to clip

For the sake of simplicity, here are the equations for going from NDC space to clip space:

    C_w = \frac{T_2}{N_z - \frac{T_1}{E_1}} \qquad C_{xyz} = N_{xyz} \cdot C_w

Derivation

Deriving those two equations is very non-trivial; it's a pretty big stumbling block. Let's start with what we know.

We can convert from clip space to NDC space, so we can go back the other way:

    N_{xyz} = \frac{C_{xyz}}{C_w} \implies C_{xyz} = N_{xyz} \cdot C_w

The problem is that we don't have Cw. We were able to use gl_FragCoord.w to compute it before, but that's not available when we're doing this after the fact in a deferred lighting pass.

So how do we compute it? Well, we know that the clip-space position was originally computed like this:

    C = M P

Therefore, we know that Cw was computed by the dot product of P with the fourth row of M. And given our above definition of the fourth row of M, we can conclude:

    C_w = E_1 P_z

Of course, this just trades one unknown for another. But we can use this. It turns out that Nz has something in common with this:

    N_z = \frac{C_z}{C_w}

It's interesting to look at where Cz comes from. As before, we know that it was computed by the dot product of P with the third row of M. And again, given our above definition for M, we can conclude:

    C_z = T_1 P_z + T_2 P_w

We still have two unknown values here, Pz and Pw. However, we can assume that Pw is 1.0, as this is usually the case for eye-space positions. Given that assumption, we only have one unknown, Pz, which we can solve for:

    N_z = \frac{C_z}{C_w} = \frac{T_1 P_z + T_2}{E_1 P_z} \implies P_z = \frac{T_2}{E_1 N_z - T_1}
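
As a quick sanity check: with the standard perspective terms given earlier (T_1 = -(f+n)/(f-n), T_2 = -2fn/(f-n), E_1 = -1), this formula gives P_z = -n at N_z = -1 and P_z = -f at N_z = +1, which are exactly the eye-space depths of the near and far planes.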

Now armed with Pz, we can compute Cw:

    C_w = E_1 P_z = \frac{T_2}{N_z - \frac{T_1}{E_1}}

And thus, we can compute the rest of C from this:

    C_{xyz} = N_{xyz} \cdot C_w

From clip to eye

With the full 4D vector C computed, we can compute P just as before:

    P = M^{-1} C

GLSL example

Here is some GLSL sample code for what this would look like:

uniform mat4 persMatrix;
uniform mat4 invPersMatrix;
uniform vec4 viewport;
uniform vec2 depthrange;

vec4 CalcEyeFromWindow(in vec3 windowSpace)
{
    // Window space to NDC space.
    vec3 ndcPos;
    ndcPos.xy = ((2.0 * windowSpace.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
    ndcPos.z = (2.0 * windowSpace.z - depthrange.x - depthrange.y) /
        (depthrange.y - depthrange.x);

    // NDC space to clip space. GLSL matrices are column-major, so
    // persMatrix[3][2] is T2, persMatrix[2][2] is T1, and persMatrix[2][3] is E1.
    vec4 clipPos;
    clipPos.w = persMatrix[3][2] / (ndcPos.z - (persMatrix[2][2] / persMatrix[2][3]));
    clipPos.xyz = ndcPos * clipPos.w;

    // Clip space to eye space.
    return invPersMatrix * clipPos;
}

viewport is a vector containing the viewport parameters. depthrange is a 2D vector containing the glDepthRange parameters. The windowSpace vector is the first two components of gl_FragCoord, with the third coordinate being the depth read from the depth buffer.
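
As a usage sketch (this caller is not part of the original example; depthTex and uv are assumed names for the saved depth buffer and its screen-space texture coordinate), a deferred lighting fragment shader might invoke the function like this, alongside the uniforms and function above:

uniform sampler2D depthTex;  // depth buffer saved from the geometry pass
in vec2 uv;                  // screen-space texture coordinate for this fragment

void main()
{
    // gl_FragCoord.xy supplies window-space X and Y; the saved depth supplies Z.
    float depth = texture(depthTex, uv).x;
    vec4 eyePos = CalcEyeFromWindow(vec3(gl_FragCoord.xy, depth));
    // ... lighting computations using eyePos ...
}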

Optimized method from XYZ of gl_FragCoord

The previous method is certainly useful, but it's a bit slow. We can significantly aid the computation of the eye-space position by using the vertex shader to provide assistance. This allows us to avoid the use of the inverse perspective matrix entirely.

This method is a two-step process. We first compute Pz, the eye-space Z coordinate, then use that to compute the full eye-space position.

The first part is actually quite easy. Most of the computations we used above were needed to get Cw, because we wanted a full clip-space position. This optimized method only needs Pz, which we can compute directly from Wz, the depth range, and the three projection matrix terms T1, T2, and E1:

    N_z = \frac{2 W_z - D_n - D_f}{D_f - D_n} \qquad P_z = \frac{T_2}{E_1 N_z - T_1}

Note that this also means that we don't need the viewport settings in the fragment shader. We only need the depth range and the perspective matrix terms.

The trick to this method is in how we go from Pz to the full eye-space position P. To understand how this works, here's a quick bit of geometry:

[Figure: SimilarTriangle.png — similar triangles from the eye E, along the view direction, to the eye-space position P]

The E in the diagram represents the eye position, which is the origin in eye space. P is the position we want, and Pz is what we have. So what do we need to get P from Pz? All we need is a direction vector that points towards P but has a z component of exactly -1.0 (remember: in eye space, the camera looks down the negative z axis, so Pz is negative for visible points). With that, we just multiply the vector by -Pz, the positive distance to the fragment's plane; by similar triangles, the result is necessarily P.

So how do we get this vector?

That's where the vertex shader comes in. In deferred rendering, the Vertex Shader is often a simple pass-through shader, performing no actual computation and passing no user-defined outputs. So we are free to use it for something.

In the vertex shader, we simply construct a vector from the origin (the eye) towards each vertex's corresponding position on a plane in front of the camera, setting the vector's z component to -1.0. This constructs a vector that points into the scene in front of the camera in eye space. The half-extents of that plane (the frustum cross-section at a distance of 1) can easily be calculated from fovy and the aspect ratio.

Linear interpolation of this value will make sure that every vector computed for a fragment has a z component of -1.0. And linear interpolation will also guarantee that the vector points directly towards the fragment being generated.

We could have computed this in the fragment shader, but why bother? That would require providing the viewport transform to the fragment shader (so that we could transform Wxy to eye space). And it's not like the VS is doing anything...

Once we have the value, we simply multiply it by -Pz (a positive number) to get our eye-space position P.

Here is some shader code.

// Vertex shader
// Half-size of the frustum cross-section at a distance of 1 from the eye:
// {tan(fovy/2.0) * aspect, tan(fovy/2.0)}
uniform vec2 halfSizeNearPlane;

layout (location = 0) in vec2 clipPos;
// UV for the depth buffer/screen access.
// (0, 0) in the bottom-left corner, (1, 1) in the top-right corner.
layout (location = 1) in vec2 texCoord;

out vec3 eyeDirection;
out vec2 uv;

void main()
{
  uv = texCoord;

  // Point on the z = -1 plane that this vertex's view ray passes through.
  eyeDirection = vec3((2.0 * halfSizeNearPlane * texCoord) - halfSizeNearPlane, -1.0);
  gl_Position = vec4(clipPos, 0.0, 1.0);
}

// Fragment shader
in vec3 eyeDirection;
in vec2 uv;

uniform mat4 persMatrix;
uniform vec2 depthrange;

uniform sampler2D depthTex;

vec4 CalcEyeFromWindow(in float windowZ, in vec3 eyeDirection)
{
  // Window-space depth to NDC depth.
  float ndcZ = (2.0 * windowZ - depthrange.x - depthrange.y) /
    (depthrange.y - depthrange.x);
  // Pz = T2 / (E1 * Nz - T1); GLSL matrices are column-major, so
  // persMatrix[3][2] is T2, persMatrix[2][3] is E1, and persMatrix[2][2] is T1.
  float eyeZ = persMatrix[3][2] / ((persMatrix[2][3] * ndcZ) - persMatrix[2][2]);
  // eyeZ (i.e. Pz) is negative in front of the camera, and eyeDirection has a
  // z component of -1.0, so scale by -eyeZ to land on the fragment's position.
  return vec4(eyeDirection * -eyeZ, 1.0);
}

void main()
{
  vec4 eyeSpace = CalcEyeFromWindow(texture(depthTex, uv).x, eyeDirection);
}
