Refraction/Reflection

Refraction or reflection is usually done by rendering the scene to a texture and projecting that texture in screen space. There are several demos (Delphi3D: Pool, ATI: Chimp Demo) using this technique, and according to the GDC PDF, the water in HL2 is also done this way.
The texcoord of the refraction/reflection texture is often perturbed by the normal from the normal map/bump map:

texcoord.xy = texcoord.xy + normal.xy * scale
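
To make the context concrete, here is roughly how I set this up, as a minimal GLSL fragment shader sketch (the uniform and varying names are just mine, not taken from any of the demos):

// Scene rendered into refractionTex; projCoord is the clip-space
// position passed down from the vertex shader.
uniform sampler2D refractionTex;
uniform sampler2D normalMap;      // tangent-space normal map
uniform float scale;              // perturbation strength, e.g. 0.02

varying vec4 projCoord;           // clip-space position of the fragment
varying vec2 bumpCoord;           // texcoords for the normal map

void main()
{
    // Project into screen space: [-1,1] -> [0,1]
    vec2 texcoord = (projCoord.xy / projCoord.w) * 0.5 + 0.5;

    // Fetch and unpack the tangent-space normal
    vec3 normal = texture2D(normalMap, bumpCoord).xyz * 2.0 - 1.0;

    // The widely used perturbation: offset by normal.xy directly
    texcoord += normal.xy * scale;

    gl_FragColor = texture2D(refractionTex, texcoord);
}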

Is this correct, or is it better to transform the tangent and binormal vectors into screen space and use the following expression?

texcoord.xy = texcoord.xy + (normal.x * screen_tangent.xy + normal.y * screen_binormal.xy) * scale
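
That is, something like this, assuming the tangent and binormal have already been transformed into screen space (in the vertex shader, say) and passed down as varyings (again, the names are mine):

// Variant sketch: rotate the perturbation into the projected tangent frame.
uniform sampler2D refractionTex;
uniform sampler2D normalMap;
uniform float scale;

varying vec4 projCoord;
varying vec2 bumpCoord;
varying vec2 screen_tangent;      // tangent projected to screen space
varying vec2 screen_binormal;     // binormal projected to screen space

void main()
{
    vec2 texcoord = (projCoord.xy / projCoord.w) * 0.5 + 0.5;
    vec3 normal = texture2D(normalMap, bumpCoord).xyz * 2.0 - 1.0;

    // Perturb along the projected tangent frame instead of raw normal.xy
    texcoord += (normal.x * screen_tangent + normal.y * screen_binormal) * scale;

    gl_FragColor = texture2D(refractionTex, texcoord);
}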

I’ve seen strange artifacts when using the first expression, even though it is widely used. Or did I misunderstand something here?

Thanks.

The “widely used” equation is a cheap hack that can be made to run on lots of hardware, but it has artifacts.

This method (render-to-2D-texture) is used for planar reflectors, such as large bodies of water. If you want better reflections, you could calculate the reflection vector per pixel, using the eye-to-fragment and normal vectors, and use that as a look-up into the texture. It’s still not possible to get general reflections, as there will be some “look around corners” artifacts, but it’ll be incrementally better than the fast hack.
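
To illustrate, here is a minimal, untested sketch of the per-pixel reflection idea, assuming a horizontal water plane so that tangent space roughly lines up with world space (all names here are my own):

uniform sampler2D reflectionTex;  // scene mirrored about the water plane
uniform sampler2D normalMap;
uniform vec3 eyePos;              // camera position in world space
uniform float scale;

varying vec4 projCoord;           // clip-space position
varying vec3 worldPos;            // fragment position in world space
varying vec2 bumpCoord;

void main()
{
    vec2 texcoord = (projCoord.xy / projCoord.w) * 0.5 + 0.5;

    vec3 eyeDir = normalize(worldPos - eyePos);
    vec3 normal = normalize(texture2D(normalMap, bumpCoord).xyz * 2.0 - 1.0);

    // Reflection off the perturbed normal vs. off the flat plane (+z up)
    vec3 r  = reflect(eyeDir, normal);
    vec3 r0 = reflect(eyeDir, vec3(0.0, 0.0, 1.0));

    // Offset the planar look-up by how far the perturbed reflection
    // deviates from the flat mirror reflection
    texcoord += (r.xy - r0.xy) * scale;

    gl_FragColor = texture2D(reflectionTex, texcoord);
}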

I think your equation gets at this improvement, although it doesn’t formulate the actual reflection vector, so it looks like it would be halfway between the two solutions.

They’re both wrong, of course, because there is no single view point that produces the correct 3D rendering for a complex refracted scene, even for a mostly planar surface. I think you may be able to do something using depth readback and perturbation, but is it worth it? Yes, a dependent read would get you there, but the dependent read has to account for the rendered z value IMHO, and there would still be occlusion anomalies.
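
For what it’s worth, here is one untested way such a depth-aware dependent read could look; sceneDepthTex and the fallback rule are my own assumptions, not a proven recipe:

// Render the scene's depth alongside the color into sceneDepthTex, then
// reject the perturbed tap when it would come from geometry that is in
// front of the water surface (the classic "leaking" artifact).
uniform sampler2D refractionTex;
uniform sampler2D sceneDepthTex;  // depth of the refraction render
uniform sampler2D normalMap;
uniform float scale;

varying vec4 projCoord;
varying vec2 bumpCoord;

void main()
{
    vec2 base = (projCoord.xy / projCoord.w) * 0.5 + 0.5;
    vec3 normal = texture2D(normalMap, bumpCoord).xyz * 2.0 - 1.0;
    vec2 perturbed = base + normal.xy * scale;

    // Window-space depth of the water fragment and of the perturbed tap
    float waterZ = projCoord.z / projCoord.w * 0.5 + 0.5;
    float sceneZ = texture2D(sceneDepthTex, perturbed).r;

    // If the tap lands on geometry in front of the water, fall back to
    // the unperturbed coordinate; occlusion anomalies remain regardless.
    vec2 texcoord = (sceneZ < waterZ) ? base : perturbed;

    gl_FragColor = texture2D(refractionTex, texcoord);
}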

I think it is in general impossible to do a physically correct rendering of refraction and perturbed reflection with OpenGL in realtime on today’s hardware; the way to do this would be raytracing. The only solution is to find an approach that looks good and realistic enough to pass as water, and obviously there are some that succeed at this; if you look at the water in HL2 or Far Cry, it’s quite impressive.

Thanks for the explanations. You’re right: the most important point is that it looks real. HL2’s water is very impressive. It doesn’t have to be mathematically correct, because real-time computer graphics is always an approximation.