Implementing Bidirectional Ray Tracing?

Hey all,

I was wondering if anyone knew how to go about implementing bidirectional ray tracing. I understand that you shoot rays from both the eye and the light source and compute what will be rendered using data from both sets of rays.

Specifically, what do you actually calculate? When a ray is reflected off an object, it should retain some information about that object, but I've searched online and can't seem to find a formula describing what information is kept.

I also don't know how to merge the two sets of ray information, one from the light source and one from the eye. Any information would be greatly appreciated! And if anyone knows a good website where I can learn more, please let me know. Thanks in advance!
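To make the question concrete, here is a rough sketch of what I think the "merge" step might look like for joining one eye-path vertex to one light-path vertex. This is my own guess at the structure (the vertex tuple layout and helper names are made up, surfaces are assumed Lambertian, and visibility between the two points is simply assumed rather than tested with a shadow ray):

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalized(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def geometry_term(p_eye, n_eye, p_light, n_light):
    # G = cos(theta_eye) * cos(theta_light) / distance^2
    d = sub(p_light, p_eye)
    dist2 = dot(d, d)
    w = normalized(d)                   # direction: eye vertex -> light vertex
    cos_e = max(0.0, dot(n_eye, w))     # cosine at the eye-path vertex
    cos_l = max(0.0, -dot(n_light, w))  # cosine at the light-path vertex
    return cos_e * cos_l / dist2

def connect(eye_vertex, light_vertex):
    # Each vertex is (position, normal, throughput, brdf), where brdf is the
    # scalar Lambertian value albedo / pi. A real tracer would cast a shadow
    # ray between the two positions here; this sketch assumes no occluder.
    pe, ne, te, fe = eye_vertex
    pl, nl, tl, fl = light_vertex
    return te * fe * geometry_term(pe, ne, pl, nl) * fl * tl

# Two facing vertices two units apart, unit throughputs, white diffuse BRDFs:
ev = ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0, 1.0 / math.pi)
lv = ((0.0, 0.0, 2.0), (0.0, 0.0, -1.0), 1.0, 1.0 / math.pi)
contribution = connect(ev, lv)  # 0.25 / pi^2
```

Is it basically this, with the products taken over every pairing of eye and light vertices?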

What does this have to do with OpenGL, which is a rasterization API, not a ray-tracing API? Do you have an OpenGL-related question?

You're much more likely to get responses to this question on a ray-tracing list or forum, such as the PBRT discussion mailing list or the ompf forums. You'd also do well to pick up a copy of the book Physically Based Rendering, which describes ray-tracing algorithms and techniques in detail.

The second edition of PBRT is due this summer and, IIRC, will include material on bidirectional path tracing.

In the meantime you should read Veach's thesis, which explains it in great detail (it is, however, somewhat technical). See http://graphics.stanford.edu/papers/veach_thesis/
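As for the "what information does a ray retain" part of your question: very roughly, each path carries a throughput, the running product of BRDF * cos(theta) / pdf over the surfaces hit so far. A toy sketch of that bookkeeping (Lambertian surfaces and cosine-weighted sampling assumed, helper name my own):

```python
import math

def propagate_throughput(throughput, albedo, cos_theta, pdf):
    # One bounce off a Lambertian surface: multiply the running throughput
    # by brdf * cos(theta) / pdf, where brdf = albedo / pi.
    brdf = albedo / math.pi
    return throughput * brdf * cos_theta / pdf

# Two diffuse bounces with cosine-weighted sampling (pdf = cos(theta) / pi);
# the ratio simplifies to just the albedo at each bounce, so t -> 0.8 * 0.5.
t = 1.0
for albedo, cos_t in [(0.8, 0.6), (0.5, 0.9)]:
    t = propagate_throughput(t, albedo, cos_t, cos_t / math.pi)
```

Both the eye path and the light path accumulate a throughput like this; the connection step in Veach's thesis multiplies the two together with a geometry term between the endpoints.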