Reconstructing Eye Coord from FragCoord?

Can this be done? SSAO is the goal. My code doesn’t work, but here it is; does anyone have thoughts? This is based on the recent Starcraft 2 talk. I take the screen pixel location, convert it to normalized device coords [-1,1], then divide by EyeZ.


	// window-space pixel position from gl_FragCoord
	EyePosition.xyz = gl_FragCoord.xyz;
	EyePosition.w = 1.0;
	// map x and y from [0, ScreenPixelSize] to normalized device coords [-1, 1]
	EyePosition.x = EyePosition.x / ScreenPixelSize.x * 2.0 - 1.0;
	EyePosition.y = EyePosition.y / ScreenPixelSize.y * 2.0 - 1.0;
	// scale by eye-space depth
	EyePosition.xyz = EyePosition.xyz * EyeZ;

Then I add some random offset vector and recompute the screen space coord, which should be the opposite procedure…


		// back from eye space toward normalized device coords
		SampleEyePosition.xyz = SampleEyePosition.xyz * OneOverEyeZ;
		// normalized device coords back to screen pixel coords
		SampleEyePosition.x = SampleEyePosition.x + 1.0 * 0.5 * ScreenPixelSize.x;
		SampleEyePosition.y = SampleEyePosition.y + 1.0 * 0.5 * ScreenPixelSize.y;

I am sure I’m too tired to think clearly and am doing something stupid. Any ideas?

If I am not mistaken, gl_FragCoord holds window-relative coordinates and 1/w as the fourth coordinate.

So I would do something like:


EyePosition.xyz = gl_FragCoord.xyz;
EyePosition.x = (EyePosition.x - windowCenter.x) * (2.0/ScreenPixelSize.x);
EyePosition.y = (EyePosition.y - windowCenter.y) * (2.0/ScreenPixelSize.y);

// now EyePosition is fragment normalized device coordinates
// then divide by EyeZ
EyePosition.xyz = EyePosition.xyz / EyeZ;

One more thing, you said “divide by EyeZ” but you do a multiplication in your code… is that correct?

Could you post the link to this talk?

Thanks in advance.

How do you draw the SSAO pass: a full-screen quad, or do you resend all geometry?

Full-screen quad with a depth buffer texture and, depending on the technique, also a normal buffer.

Babis has it right; it’s a full-screen quad that samples eyeZ from a depth map and eye-space surface normals to do the calculation.
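To make that concrete, a bare-bones fragment shader for such a pass might look like the sketch below. The uniform and varying names are only illustrative; they aren’t taken from any of the implementations mentioned in this thread.

	uniform sampler2D DepthTexture;    // linear eye-space depth written in the geometry pass
	uniform sampler2D NormalTexture;   // eye-space normals packed into [0,1]

	varying vec2 TexCoord;             // passed through from the full-screen quad

	void main()
	{
	    float eyeZ   = texture2D(DepthTexture, TexCoord).r;
	    vec3  normal = texture2D(NormalTexture, TexCoord).xyz * 2.0 - 1.0;  // unpack to [-1,1]

	    // ...reconstruct the eye-space position from TexCoord and eyeZ,
	    // take the occlusion samples, and write out the ambient term...
	    float occlusion = 1.0;  // placeholder

	    gl_FragColor = vec4(vec3(occlusion), 1.0);
	}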

Here is a link to that talk.

http://ati.amd.com/developer/SIGGRAPH08%5CChapter05-Filion-StarCraftII.pdf

"
Pixel shader 3.0 offers the VPOS semantic which provides the pixel shader with the x and y coordinates of the pixel being rendered on the screen. These coordinates can be normalized to the [‐1…1] range based on screen width and height to provide a normalized eye‐to‐pixel vector. Multiplying by the depth will provide us with the pixel’s view space position
"

So yeah, apparently multiply is what they meant.
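In GLSL terms, that reconstruction would look roughly like the sketch below, assuming a symmetric perspective projection and a depth texture holding positive linear eye-space depth. The uniform names here are placeholders, not the ones from my shader.

	uniform sampler2D DepthTexture;   // linear eye-space depth (positive distance)
	uniform vec2  ScreenPixelSize;    // viewport width and height in pixels
	uniform float TanHalfFovY;        // tan(vertical FOV / 2)
	uniform float AspectRatio;        // width / height

	vec3 reconstructEyePosition()
	{
	    vec2 uv  = gl_FragCoord.xy / ScreenPixelSize;   // [0,1] texture coords
	    vec2 ndc = uv * 2.0 - 1.0;                      // [-1,1] normalized device coords
	    float eyeZ = texture2D(DepthTexture, uv).r;

	    // Scale NDC by the frustum extents at unit depth, then multiply by the depth.
	    // In GL eye space the camera looks down -Z, hence the negated z.
	    vec3 eyePosition;
	    eyePosition.x = ndc.x * AspectRatio * TanHalfFovY * eyeZ;
	    eyePosition.y = ndc.y * TanHalfFovY * eyeZ;
	    eyePosition.z = -eyeZ;
	    return eyePosition;
	}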

Never mind then. Another possibility for a trivial bug: did you take into account the negative Z axis direction in eye space? GL uses a different convention than DX.

This has the code you need:
http://www.leadwerks.com/ccount/click.php?id=50

It could probably be optimized a little, but the way it is written was the most intuitive to me.

Well, after a lot of playing around, I have results. I was doing SSAO by sampling a disk in screen space, with the radius scaled by resolution. I am now sampling within a sphere in eye space, then reprojecting to screen space to get my depth and normal lookup.
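The reprojection step I mean is roughly the sketch below (the names here are placeholders, not the exact ones from my shader):

	uniform mat4 ProjectionMatrix;   // same projection used for the scene pass

	// Offset the reconstructed eye-space position and reproject it to get the
	// texture coordinate for the depth/normal lookup.
	vec2 reprojectSample(vec3 eyePosition, vec3 randomOffset, float sampleRadius)
	{
	    vec3 sampleEyePosition = eyePosition + randomOffset * sampleRadius;

	    vec4 clipPosition = ProjectionMatrix * vec4(sampleEyePosition, 1.0);
	    vec2 ndc = clipPosition.xy / clipPosition.w;   // perspective divide -> [-1,1]
	    return ndc * 0.5 + 0.5;                        // [-1,1] -> [0,1] texture coords
	}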

The difference is very small; the results are quite similar. I have some nearZ artifacts with the eye space math, and the sample radius can be constant rather than scaling with actual screen space size, which is convenient.

I guess that I’m surprised they are so similar.

I also tried to do this in world space, but something is wrong with the math. I cannot imagine what, though. I simply multiply my eye space point by the inverse view matrix, add my offset, then multiply by the view matrix. No idea how that is going awry.
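For reference, the round trip I mean is roughly the sketch below (the uniform names are placeholders for whatever the actual matrices are called):

	uniform mat4 ViewMatrix;          // world -> eye
	uniform mat4 InverseViewMatrix;   // eye -> world

	// Eye space -> world space, apply the offset, then back to eye space.
	vec3 offsetInWorldSpace(vec3 eyePosition, vec3 worldOffset)
	{
	    vec4 worldPosition = InverseViewMatrix * vec4(eyePosition, 1.0);
	    worldPosition.xyz += worldOffset;
	    return (ViewMatrix * worldPosition).xyz;
	}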