Can this be done? SSAO is the goal; my code doesn't work, but here it is. Anyone have thoughts? This is based on the recent Starcraft 2 talk: I take the screen pixel location, convert it to normalized device coordinates in [-1, 1], then divide by the EyeZ.
"
Pixel shader 3.0 offers the VPOS semantic which provides the pixel shader with the x and y coordinates of the pixel being rendered on the screen. These coordinates can be normalized to the [‐1…1] range based on screen width and height to provide a normalized eye‐to‐pixel vector. Multiplying by the depth will provide us with the pixel’s view space position
"
Never mind then. Another possibility for a trivial bug: did you take into account the negative Z-axis direction in eye space? GL uses a different convention than DX.
Well, after a lot of playing around, I have results. I was doing SSAO by sampling a disk in screen space, with the radius scaled by resolution. I am now sampling within a sphere in eye space, then reprojecting to screen space to get my depth and normal lookups.
The difference is very small; the results are quite similar. I have some near-Z artifacts with the eye-space math, but the sample radius can now be constant rather than scaling with screen-space size, which is convenient.
I guess that I’m surprised they are so similar.
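For anyone following along, the eye-space approach might be sketched like this. The projection matrix, texture-space y-flip, and rejection sampling are my assumptions; the post doesn't show its actual projection or sampling code:

```python
import math
import random

def perspective(fov_y_deg, aspect, near, far):
    """Standard right-handed GL-style perspective matrix, row-major 4x4.
    An assumption -- the post never says which projection is used."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) * 0.5)
    nf = 1.0 / (near - far)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) * nf, 2.0 * far * near * nf],
        [0, 0, -1, 0],
    ]

def project_to_uv(proj, p):
    """Project an eye-space point to [0, 1] screen UVs for the
    depth/normal texture lookup."""
    x, y, z = p
    clip = [sum(proj[r][c] * v for c, v in enumerate((x, y, z, 1.0)))
            for r in range(4)]
    w = clip[3]
    ndc_x, ndc_y = clip[0] / w, clip[1] / w
    return (ndc_x * 0.5 + 0.5, 0.5 - ndc_y * 0.5)  # flip y for texture space

def sphere_sample(p, radius, rng):
    """Offset p by a random point inside an eye-space sphere of
    constant radius, via rejection sampling."""
    while True:
        o = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        if sum(c * c for c in o) <= 1.0:
            return (p[0] + o[0] * radius,
                    p[1] + o[1] * radius,
                    p[2] + o[2] * radius)
```

The near-Z artifacts mentioned above make sense with this setup: as `w` approaches zero near the camera, the projected sample offsets blow up in screen space.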
I also tried to do this in world space, but something is wrong with the math, and I cannot imagine what. I simply multiply my eye-space point by the inverse view matrix, add my offset, then multiply by the view matrix. No idea how that is going awry.
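For what it's worth, with consistent conventions that round trip should reduce to adding the rotation-transformed offset directly in eye space. A toy sketch with my own matrices (the yaw-plus-translation view construction is an assumption, not the poster's code):

```python
import math

def mat_vec(m, v):
    """Multiply a row-major 4x4 matrix by a column vector (x, y, z, w)."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def mat_mul(a, b):
    """4x4 row-major matrix product."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def view_matrix(yaw_deg, eye):
    """Toy view matrix: translate by -eye, then rotate about Y."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    rot = [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]
    trans = [[1, 0, 0, -eye[0]], [0, 1, 0, -eye[1]],
             [0, 0, 1, -eye[2]], [0, 0, 0, 1]]
    return mat_mul(rot, trans)

def inverse_view(yaw_deg, eye):
    """Analytic inverse: transpose the rotation, undo the translation."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    rot_t = [[c, 0, -s, 0], [0, 1, 0, 0], [s, 0, c, 0], [0, 0, 0, 1]]
    trans = [[1, 0, 0, eye[0]], [0, 1, 0, eye[1]],
             [0, 0, 1, eye[2]], [0, 0, 0, 1]]
    return mat_mul(trans, rot_t)
```

If the eye-to-world-and-back trip through these matrices doesn't match adding the view-rotated offset in eye space, the usual suspects are a row-major versus column-major mixup, using the view matrix where the inverse is needed, or adding the offset with w = 1 so the translation gets applied to it twice.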