Postprocess-based stereoscopic rendering?

We know traditional stereo rendering requires rendering the scene twice, once for the left eye and once for the right. Recently I noticed a new method called postprocess-based stereoscopic rendering. It uses the depth buffer and color buffer to generate the left and right images.
Does anyone know the details, or can you point me to a technical paper?
Thank you!

I don’t do stereoscopic rendering, but have a look at the geometry shader with layered rendering. You could emit one set of vertices for the left eye to layer 0, and then emit another set to layer 1 for the right eye.
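Something along these lines (a minimal sketch, assuming a layered framebuffer is already bound and the vertex shader passes world-space positions; the uniform and varying names are made up):

```glsl
// Geometry shader: duplicate each triangle into layer 0 (left eye) and
// layer 1 (right eye) of a layered framebuffer. Requires GL 3.2+.
#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

in vec3 vWorldPos[];          // world-space position from the vertex shader

uniform mat4 uLeftViewProj;   // assumed: view-projection matrix, left eye
uniform mat4 uRightViewProj;  // assumed: view-projection matrix, right eye

void main()
{
    // Layer 0: left eye
    for (int i = 0; i < 3; ++i) {
        gl_Layer = 0;
        gl_Position = uLeftViewProj * vec4(vWorldPos[i], 1.0);
        EmitVertex();
    }
    EndPrimitive();

    // Layer 1: right eye
    for (int i = 0; i < 3; ++i) {
        gl_Layer = 1;
        gl_Position = uRightViewProj * vec4(vWorldPos[i], 1.0);
        EmitVertex();
    }
    EndPrimitive();
}
```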

Just render the scene twice. It’s the easiest way.

It depends on the hardware you’re targeting. From most widely supported to least:

  1. Rendering the scene twice works on all hardware.
  2. Rendering the scene to separate layers using a geometry shader (gl_Layer = 0 or 1); requires GL 3.2 for layered rendering.
  3. Rendering the scene side-by-side on the same layer using a geometry shader plus viewport/scissor arrays (gl_ViewportIndex = 0 or 1); requires GL 4.1 / ARB_viewport_array. See the sketch after this list.
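A rough sketch of option 3, assuming the two viewports were set up on the CPU with glViewportIndexedf to cover the left and right halves of the framebuffer (the uEyeViewProj uniform is my own naming, not anything standard):

```glsl
// Geometry shader: one pass, both eyes on the same layer, each triangle
// routed to a different viewport rectangle. Requires GL 4.1.
#version 410 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

in vec3 vWorldPos[];          // world-space position from the vertex shader

uniform mat4 uEyeViewProj[2]; // assumed: per-eye view-projection matrices

void main()
{
    for (int eye = 0; eye < 2; ++eye) {
        for (int i = 0; i < 3; ++i) {
            gl_ViewportIndex = eye; // viewport 0 = left half, 1 = right half
            gl_Position = uEyeViewProj[eye] * vec4(vWorldPos[i], 1.0);
            EmitVertex();
        }
        EndPrimitive();
    }
}
```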

There are also quad-buffer stereo cards (such as NVIDIA Quadro) that could be used.

Any post-processing stereo will only be an approximation, since you won’t be able to correctly generate content that was hidden from the initially rendered viewpoint but should be visible from the faked viewpoint(s).
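To make that concrete, here is a minimal sketch of the kind of post-process reprojection the question asks about: a fragment shader that shifts the mono color buffer horizontally by a depth-dependent parallax. The uniform names and the disparity model are my own assumptions, not from any particular paper, and the disocclusion handling is exactly the weak point described above:

```glsl
// Post-process stereo: fake one eye's view by warping the center-view
// color buffer using the depth buffer. Run once per eye with uEyeSign
// flipped. Disoccluded pixels just reuse the nearest sample, which
// stretches edges at depth discontinuities.
#version 150

uniform sampler2D uColor;    // rendered center-view color
uniform sampler2D uDepth;    // matching depth buffer (nonlinear, 0..1)
uniform float uNear;         // camera near plane
uniform float uFar;          // camera far plane
uniform float uMaxDisparity; // max horizontal shift in UV units (tune this)
uniform float uEyeSign;      // -1.0 for left eye, +1.0 for right eye

in vec2 vUV;
out vec4 fragColor;

// Convert the nonlinear depth-buffer value to linear eye-space depth.
float linearDepth(float d)
{
    float z = d * 2.0 - 1.0;
    return (2.0 * uNear * uFar) / (uFar + uNear - z * (uFar - uNear));
}

void main()
{
    // Parallax falls off with distance: near pixels shift more than far ones.
    float depth = linearDepth(texture(uDepth, vUV).r);
    float disparity = uMaxDisparity * (uNear / depth);

    // Gather from the shifted position. This is only an approximation of a
    // true reprojection; proper handling would need a search or hole filling.
    vec2 shiftedUV = vUV + vec2(uEyeSign * disparity, 0.0);
    fragColor = texture(uColor, shiftedUV);
}
```

Real implementations layer hole-filling or an iterative depth search on top of this, but the fundamental limitation stands: information hidden from the original viewpoint simply isn't in the buffers.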