Eye-Position from Depth and NDC, Compute Shader

Hello Community,

I am going to extend my 3D engine with a motion blur effect, so I want to compute the per-pixel screen velocity in a compute shader. The current scene frame and the depth buffer are rendered into two texture samplers.
I did a lot of research on this topic and changed my code many times, but the motion blur effect still does not work as intended: the pixel velocity only accounts for camera translation, not camera rotation. I found that my transformation from NDC to eye space is not correct, but I don't know how to fix it.

I linearize the depth value from the depth buffer as follows:


lin_depth = 2 * near / (far + near - depth * (far - near))

I tested that linearization in my application and it seems to be correct.
Now I define the NDC as usual:


vec3 NDC = vec3(window.x * 2 - 1, window.y * 2 - 1, lin_depth);

I compute the clip-space w as described here: Compute eye space from window space - OpenGL Wiki


float Cw = projectionMatrix[2][3] / (NDC.z - projectionMatrix[2][2]);
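
If I understand that wiki page correctly, the formula follows from the last two rows of my projection matrix (shown below): the fourth row (0 0 1 0) gives clip.w = eye.z, and the third row gives

NDC.z = clip.z / clip.w = projectionMatrix[2][2] + projectionMatrix[2][3] / eye.z

which rearranges to the expression above,

clip.w = eye.z = projectionMatrix[2][3] / (NDC.z - projectionMatrix[2][2])

(with the indices as they end up after my transposed upload).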

Now I should be able to get the eye position with the inverse projection:


vec4 eyePos = (inverseProjectionMatrix) * vec4(NDC * Cw, Cw);

but when I test these computations in my application, the eye-space position I get is wrong.

My projection matrix is defined as follows:


1/(tanFOV*aspect)  0           0                           0
0                  1/tanFOV    0                           0
0                  0           (-zNear-zFar)/(zNear-zFar)  2*zFar*zNear/(zNear-zFar)
0                  0           1                           0

Remember that for my Cw calculation I send the matrix transposed to the GLSL shader.

Thanks in advance if somebody can help me!

greetings, Fynn Fluegge

I see two things here that seem… dubious:

I linearize the depth value from the depth buffer as follows:

I have no idea what this is doing, but I’m pretty sure it makes no sense for this computation.

There is no need to “linearize” the depth; that gets taken care of via the transformation to clip space. Depth “linearization” is usually for when you want a linear depth and don’t care what space it’s in. You very much care about what space it’s in. The next stage expects window coordinates (which are non-linear), and does not work if you feed it a “linearized” depth.

Remove it. Feed the actual depth values from the depth buffer into your next stage.
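
With the raw depth value, that stage would look something like this (just a sketch; it assumes the default glDepthRange(0, 1), so the stored depth is remapped to [-1, 1] the same way the xy window coordinates are, and it keeps your indexing as posted):

vec3 NDC = vec3(window.xy * 2.0 - 1.0, depth * 2.0 - 1.0);               // raw depth, remapped to NDC
float Cw = projectionMatrix[2][3] / (NDC.z - projectionMatrix[2][2]);    // clip-space w, indices as in your post
vec4 eyePos = inverseProjectionMatrix * vec4(NDC * Cw, Cw);              // back to eye space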

Remember that for my Cw calculation I send the matrix transposed to the GLSL shader.

We can’t remember something you haven’t told us. Also, when you say you transpose it, do you mean relative to OpenGL’s preferred ordering? Because you use projectionMatrix[2][3], which says “third column, fourth row”. Given the matrix you provide, that really ought to be projectionMatrix[3][2].

You shouldn't send row-major matrices to GLSL.

Either way, your projection matrix itself looks unusual. It seems like you negated the Z axis relative to the standard OpenGL projection matrix. That may be fine (depending on your depth range near/far values and your projection near/far values).
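
For reference, if that matrix were stored the way GLSL expects by default (column-major, one column per mat4 constructor argument), it would be written roughly like this, which is why the indexing matters:

// Sketch only: each vec4 below is one column, so the rows of the table in
// the first post become components spread across the columns.
mat4 P = mat4(
    vec4(1.0/(tanFOV*aspect), 0.0,        0.0,                          0.0),
    vec4(0.0,                 1.0/tanFOV, 0.0,                          0.0),
    vec4(0.0,                 0.0,        (-zNear-zFar)/(zNear-zFar),   1.0),
    vec4(0.0,                 0.0,        2.0*zFar*zNear/(zNear-zFar),  0.0));
// With this layout, P[3][2] is the 2*zFar*zNear/(zNear-zFar) term and
// P[2][3] is the 1 from the bottom row.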

Thanks for your reply.
I linearize the depth buffer value relative to my near and far planes; I thought this was necessary to get from NDC space to clip space. When I use the non-linearized depth value, the pixel velocity for camera movement now behaves as intended. However, when the camera rotates, the pixel velocity is zero. I think I don't reach the correct clip space.

However, when the camera rotates, the pixel velocity is zero. I think I don’t reach the correct clip space.

That’s an odd conclusion to reach. It’s not clear how you’re computing “pixel velocity”.

But if you want to verify your computations, you can always (temporarily) write the actual position out to an RGBA32F image, then read it back and see how different it is.
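
Something along these lines in the compute shader would do (a sketch; the image name and binding here are made up):

layout(binding = 1, rgba32f) writeonly uniform image2D debugPositions;   // hypothetical debug target

// ... after reconstructing eyePos (or worldPos) for this invocation ...
imageStore(debugPositions, ivec2(gl_GlobalInvocationID.xy), vec4(eyePos.xyz, 1.0));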

Up to now my motion blur effect works fine when the camera moves, but when I rotate the camera, no motion blur occurs. The pixel velocity is zero for all pixels if the view space rotates.
Here is the code for my coordinate space transformations:


vec3 NDC  = vec3(window.x * 2 - 1, window.y * 2 - 1, depth);              // NDC position of this pixel
float Clipw = projectionMatrix[2][3] / (NDC.z - projectionMatrix[2][2]);  // clip-space w
vec4 clipPos = vec4(NDC * Clipw, Clipw);                                  // clip-space position

vec4 viewPos  =  inverseProjectionMatrix * clipPos;    // back to view (eye) space
vec4 worldPos =  inverseViewMatrix * viewPos;          // back to world space
vec4 previousPos = previousViewMatrix * worldPos;      // re-project with the previous frame's view
previousPos  = projectionMatrix * previousPos;
previousPos /= previousPos.w;                          // perspective divide

I know I should multiply the view and projection matrices into one view-projection matrix in my application, but I do it separately here to play around.
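
Combined, that would just be a single matrix computed in the application (previousViewProjectionMatrix = projectionMatrix * previousViewMatrix) and used like this:

vec4 previousPos = previousViewProjectionMatrix * worldPos;   // one multiply instead of two
previousPos /= previousPos.w;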
Maybe it's a common error that I get the same world position for the previous frame when I rotate the view space. I really don't know how to fix it.

Thanks in advance!

Maybe it's a common error that I get the same world position for the previous frame when I rotate the view space.

If the only thing that has changed is the camera matrix, then clearly the world position will not have changed. After all, it’s not the objects in the world that have changed; it’s the view of the scene.

So if you’re doing some kind of camera-based motion blur, then it should be based on the positions of objects in camera space, not world space.
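
Concretely, the per-pixel velocity would typically be the difference between the pixel's current position and its re-projected previous-frame position, both taken after the perspective divide, for example (a sketch, not necessarily how your code computes it):

vec2 velocity = (NDC.xy - previousPos.xy) * 0.5;   // NDC delta, halved to get texture-coordinate units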

Alright, the world position never changes; I have static objects and only camera-based motion blur. Sure, the pixel position only changes in view space. But when I rotate the view space, the pixel position does not change in my calculation of the previous position. Only camera translation has an effect.

Ultimately, you have a bug somewhere in your code. Some matrix is not what you think it is. Some rotation or translation is not going where you think it is. Or some operation is not doing what you think it is.

At this point, the only way someone could help is if you give them your code and have them debug it for you.

I already suggested one way of narrowing down where the bug could be. Write the position to a texture, then use that value rather than what you calculate from the depth and see if there’s a difference.

Yes, that's what I'm going to do now, thanks for your effort.