I’m having some trouble converting the depth stored in a depth texture, together with the gl_FragCoord.xy values, to the eye-space position. I’m trying to derive the formula by inverting everything that happens to coordinates in eye space during rendering. However, I end up with a different formula than the ones I found elsewhere on this forum. Perhaps I made a mistake somewhere, or I just fail to see how the other formulas are equivalent to mine.
This is my approach:
When rendering the following happens:
[eye coordinates: (xe, ye, ze, we)]
-> apply projection matrix
[clip coordinates: (xc, yc, zc, wc)]
-> apply normalizing with clip coordinate w
[normalized device coordinates: (xd, yd, zd)]
-> apply viewport transformation
[window coordinates: (xw, yw, zw)]
These transformations look like the following:
projection matrix (assuming a symmetric frustum, i.e. -left == right and -bottom == top, thus origin centered):
A 0 0 0
0 B 0 0
0 0 C D
0 0 -1 0
with:
A = 2n / w
B = 2n / h
C = -(f + n) / (f - n)
D = -(2 * f * n) / (f - n)
w: frustum width, right - left (eye coordinates)
h: frustum height, top - bottom (eye coordinates)
n: near plane distance (positive, eye coordinates)
f: far plane distance (positive, eye coordinates)
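As a sanity check on this matrix (a quick throwaway sketch, not shader code; all frustum numbers are made up), a point on the near plane should end up at zd = -1 and a point on the far plane at zd = +1 after the perspective divide:

```python
# Sketch: numeric check of the projection matrix above.
# Hypothetical frustum values (w, h at the near plane; n, f positive).
w, h, n, f = 2.0, 2.0, 1.0, 100.0

A = 2 * n / w
B = 2 * n / h
C = -(f + n) / (f - n)
D = -(2 * f * n) / (f - n)

def project(xe, ye, ze, we=1.0):
    """Apply the projection matrix; returns clip coordinates."""
    return (A * xe, B * ye, C * ze + D * we, -ze)

xc, yc, zc, wc = project(0.0, 0.0, -n)   # point on the near plane
print(zc / wc)                           # ≈ -1.0
xc, yc, zc, wc = project(0.0, 0.0, -f)   # point on the far plane
print(zc / wc)                           # ≈ +1.0
```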
Normalizing to Normalized Device Coordinates (NDC)
(see OpenGL 3.3 spec, page 92):
xd = xc / wc
yd = yc / wc
zd = zc / wc
Viewport transformation (see OpenGL 3.3 spec, page 92):
xw = vw / 2 * xd + hvw
yw = vh / 2 * yd + hvh
zw = zd * (f - n) / 2 + (n + f) / 2
vw: viewport width (pixels)
vh: viewport height (pixels)
hvw: viewport width / 2, assuming (0,0) is origin (pixels)
hvh: viewport height / 2, assuming (0,0) is origin (pixels)
So… to go from the depth read from a depth texture plus gl_FragCoord.xy back to eye space, I figured I had to do the reverse. In that case the transformations look like this:
Inverse projection matrix (assuming -left == right):
1/A 0 0 0
0 1/B 0 0
0 0 0 -1
0 0 1/D C/D
(with A, B, C, D as before)
This seems to be correct (see Appendix F of the Red Book).
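To double-check that claim numerically (again a throwaway sketch with made-up frustum values), multiply the two matrices and compare against the identity:

```python
# Sketch: verify that the claimed inverse really inverts the
# projection matrix. Hypothetical frustum values.
w, h, n, f = 2.0, 2.0, 1.0, 100.0
A, B = 2 * n / w, 2 * n / h
C = -(f + n) / (f - n)
D = -(2 * f * n) / (f - n)

P = [[A, 0,  0, 0],
     [0, B,  0, 0],
     [0, 0,  C, D],
     [0, 0, -1, 0]]

Pinv = [[1 / A, 0,     0,     0],
        [0,     1 / B, 0,     0],
        [0,     0,     0,    -1],
        [0,     0,     1 / D, C / D]]

def matmul(M, N):
    """Plain 4x4 matrix product."""
    return [[sum(M[i][k] * N[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

I4 = matmul(P, Pinv)
ok = all(abs(I4[i][j] - (1.0 if i == j else 0.0)) < 1e-12
         for i in range(4) for j in range(4))
print(ok)   # True
```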
Inverse normalization:
xc = wc * xd
yc = wc * yd
zc = wc * zd
wc = ? (unknown at this point, but it will drop out below when all wc factors are removed)
Inverse viewport transformation:
xd = 2 * (xw - hvw) / vw
yd = 2 * (yw - hvh) / vh
zd = 2 * zw / (f - n) - (n + f) / (f - n)
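As a quick check that these formulas really undo the viewport transformation as written above (a sketch only; viewport size and n/f values are hypothetical):

```python
# Sketch: round-trip the viewport transform and its inverse, using the
# conventions above. Hypothetical viewport size and n/f values.
vw, vh = 800.0, 600.0
hvw, hvh = vw / 2, vh / 2
n, f = 1.0, 100.0

xd, yd, zd = 0.25, -0.5, 0.8          # some NDC point

# forward (viewport transformation)
xw = vw / 2 * xd + hvw
yw = vh / 2 * yd + hvh
zw = zd * (f - n) / 2 + (n + f) / 2

# backward (inverse viewport transformation)
xd2 = 2 * (xw - hvw) / vw
yd2 = 2 * (yw - hvh) / vh
zd2 = 2 * zw / (f - n) - (n + f) / (f - n)

print(xd2, yd2, zd2)   # recovers (0.25, -0.5, 0.8) up to rounding
```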
Now, I rewrite all these reverse transformations to try and obtain a (hopefully) simple formula that yields the eye coordinates from the texture depth plus gl_FragCoord.xy.
eye coord = Inv.Mat * clip coord
          = Inv.Mat * (wc * NDC coord)
          = Inv.Mat * (wc * Inv.Viewport(window coord))
xe = 1/A * wc * 2 * (xw - hvw) / vw
ye = 1/B * wc * 2 * (yw - hvh) / vh
ze = -wc
we = 1/D * wc * [2 * zw / (f - n) - (n + f) / (f - n)] + C/D * wc
Rewrite `we' (substituting 1/D and C/D):
1/D = -(f - n) / (2 * f * n)
C/D = (f + n) / (2 * f * n)
we = wc * [(-zw) / (f * n) + (n + f) / (2 * f * n)] + wc * [(f + n) / (2 * f * n)]
Because homogeneous coordinates are only defined up to scale (and we
will normalize to `w' = 1 at the end anyway), we can divide the whole
vector (xe, ye, ze, we) by wc; this removes all wc factors:
xe = 1/A * 2 * (xw - hvw) / vw
ye = 1/B * 2 * (yw - hvh) / vh
ze = -1
we = [(-zw) / (f * n) + (n + f) / (2 * f * n)] + [(f + n) / (2 * f * n)]
Rewrite `we' a bit more:
we = (-zw) / (f * n) + (2 * (f + n)) / (2 * f * n)
= (f + n - zw) / (f * n)
And now multiply (xe,ye,ze,we) by the inverse of we:
xe = (f * n) / (f + n - zw) * 1/A * 2 * (xw - hvw) / vw
ye = (f * n) / (f + n - zw) * 1/B * 2 * (yw - hvh) / vh
ze = -(f * n) / (f + n - zw)
we = 1
Let's have a look at 1/A and 1/B:
1/A = w / 2n
1/B = h / 2n
xe = (f * n) / (f + n - zw) * w/n * (xw - hvw) / vw
ye = (f * n) / (f + n - zw) * h/n * (yw - hvh) / vh
ze = -(f * n) / (f + n - zw)
we = 1
Which could be written as:
xe = -ze * w/n * (xw / vw - 0.5)
ye = -ze * h/n * (yw / vh - 0.5)
ze = -(f * n) / (f + n - zw)
we = 1
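The whole derivation can be sanity-checked numerically: push a made-up eye-space point forward through the transformations exactly as written above (including my zw mapping), then apply the three final formulas (a sketch only, all numbers hypothetical):

```python
# Sketch: forward pipeline followed by the derived inverse formulas,
# all under the conventions used in this post. Hypothetical values.
w, h, n, f = 2.0, 1.5, 1.0, 100.0     # frustum
vw, vh = 800.0, 600.0                 # viewport (pixels)
hvw, hvh = vw / 2, vh / 2

A, B = 2 * n / w, 2 * n / h
C = -(f + n) / (f - n)
D = -(2 * f * n) / (f - n)

# forward: eye -> clip -> NDC -> window
xe, ye, ze = 0.3, -0.2, -10.0
xc, yc, zc, wc = A * xe, B * ye, C * ze + D, -ze
xd, yd, zd = xc / wc, yc / wc, zc / wc
xw = vw / 2 * xd + hvw
yw = vh / 2 * yd + hvh
zw = zd * (f - n) / 2 + (n + f) / 2

# backward: the final formulas derived above
ze2 = -(f * n) / (f + n - zw)
xe2 = -ze2 * w / n * (xw / vw - 0.5)
ye2 = -ze2 * h / n * (yw / vh - 0.5)

print(xe2, ye2, ze2)   # recovers (0.3, -0.2, -10.0) up to rounding
```

So at least the algebra is self-consistent under my own conventions; the question remains how it relates to the other formulas.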
However… I found some formulas on these forums that are a bit different from mine, although there are also some similarities:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=239700#Post239700
mat4 m = gl_ProjectionMatrix;
float Z = m[3].z / (texture2DRect(G_Depth, gl_FragCoord.xy).x * -2.0 + 1.0 - m[2].z);
vec3 modelviewpos = vec3(pos.xy / pos.z * Z, Z);
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=239127#Post239127
float DepthToZPosition(in float depth) {
    return camerarange.x / (camerarange.y - depth * (camerarange.y - camerarange.x)) * camerarange.y;
}
float depth = texture2D(texture1, texCoord).x;
vec3 screencoord = vec3(
    ((gl_FragCoord.x / buffersize.x) - 0.5) * 2.0,
    ((-gl_FragCoord.y / buffersize.y) + 0.5) * 2.0 / (buffersize.x / buffersize.y),
    DepthToZPosition(depth));
screencoord.x *= screencoord.z;
screencoord.y *= -screencoord.z;
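To get a feel for what the second snippet computes, here is its depth helper ported to Python (the camerarange = (near, far) interpretation and the [0,1] depth range are assumptions on my part):

```python
# Sketch: the DepthToZPosition helper from the snippet above, ported
# to Python. Assumed: camerarange = (near, far), depth in [0, 1].
def depth_to_z(depth, near=1.0, far=100.0):
    return near / (far - depth * (far - near)) * far

print(depth_to_z(0.0))   # ≈ 1.0 (the near distance)
print(depth_to_z(1.0))   # ≈ 100.0 (the far distance)
```

Notably it returns a positive distance (near at depth 0, far at depth 1), while eye-space z is negative in OpenGL, which would explain at least the sign flips in that snippet.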
I fail to see how I could derive the above formulas. And because many different formulas can be found on the internet (perhaps they target different coordinate systems, or DirectX instead of OpenGL; often it is unclear exactly which transformation is being done), I thought I would derive the formula myself. However, it does not seem to match anything I found (except the more general examples that just say: use the inverse projection matrix).
Does anybody have a clue whether my general approach is correct, or whether I am making a mistake somewhere? Also, I’d like to know what the other formulas actually compute: do they transform to eye-space coordinates as well?