I am trying to implement the following presentation…
http://developer.nvidia.com/object/siggraph-2008-HBAO.html
(or, if you prefer, the paper is here… http://artis.inrialpes.fr/Membres/Olivier.Hoel/ssao/nVidiaHSAO/2317-abstract.pdf)
However, I got myself confused around slide 11 of the presentation. I am stuck with the following question…
How do you calculate the tangent vector T (see slide 11) in a fragment program? The paper says to “intersect a view ray with the tangent plane defined by P and the surface normal n”.
I don’t understand: when you intersect a ray with a plane you get a point, not a vector — and isn’t that point just P, which you already had?
So to put this into an example: in my fragment program, let’s assume we are currently sampling in the v = (1,0) direction with a radius of 10 pixels and a step of 2 pixels between samples (let’s ignore the random direction and jitter discussed later in the presentation for the moment).
So for this first sample, we would do the following…
- Get the depth value at P by a texture2D lookup into the depth map. Call this Pdepth.
- Get the depth value at this sample point (call it Sdepth) by a texture2D lookup using the current tex coords with an added offset of vec2(2/screenwidth, 0).
- We then determine the horizon angle h = atan((Sdepth - Pdepth) / length(vec2(2, 0)))? (Or should that denominator be 2/screenwidth?)
- We then get the normal at P by doing a lookup into the normal map and normalize it. Call this N.
- We then determine the tangent vector? I don’t know how this is done.
- We determine the tangent angle t = atan(T.z / length(vec2(T.x, T.y))).
- We then determine the AO for this sample as AO = sin(h) - sin(t).
- We move to the next sample point… and continue along, integrating the results?
(Note: I left out the optimizations for now; I want to get the basics working first.)