
Thread: Postprocess based Stereoscopic ?

  1. #1
    Newbie Newbie
    Join Date
    Jan 2013

    Question Postprocess based Stereoscopic ?

We know traditional stereo rendering requires rendering the scene twice: once for the left eye and once for the right.
Recently I noticed a new method called post-process-based stereoscopy. It uses the depth buffer and
color buffer to generate the left and right images.
Does anyone know the details, or can you point me to a technical paper?
Thank you!

  2. #2
    Senior Member OpenGL Pro
    Join Date
    Jan 2012
I don't do stereoscopic rendering, but have a look at the geometry shader with layers. You could emit one set of vertices for the left eye to layer 0, and then emit another set to layer 1 for the right eye.
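A minimal sketch of that idea (assuming a layered framebuffer with a 2-layer texture array attached via glFramebufferTexture, and per-eye view-projection matrices in a uniform array; the uniform and varying names here are made up):

```glsl
// Geometry shader: duplicates each triangle into two layers, one per eye.
#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

uniform mat4 uEyeViewProj[2];   // [0] = left eye, [1] = right eye (assumed)

in vec3 vWorldPos[];            // world-space position from the vertex shader

void main()
{
    for (int eye = 0; eye < 2; ++eye) {
        for (int i = 0; i < 3; ++i) {
            gl_Layer    = eye;  // route this triangle to layer 0 or 1
            gl_Position = uEyeViewProj[eye] * vec4(vWorldPos[i], 1.0);
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

Note that this doubles the geometry work in the geometry shader stage, so it saves you draw calls and state changes rather than vertex processing.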

  3. #3
    Junior Member Regular Contributor
    Join Date
    Dec 2007
    Just render the scene twice. It's the easiest way.

  4. #4
    Member Regular Contributor
    Join Date
    Aug 2008
    It depends on the hardware you're targeting. From most widely supported to least:
1) Rendering the scene twice works on all hardware.
2) Rendering the scene to separate layers using a geometry shader (gl_Layer = 0 or 1).
3) Rendering the scene side-by-side on the same layer using a geometry shader plus viewport/scissor arrays (gl_ViewportIndex = 0 or 1).
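Option 3 might look like the sketch below, assuming viewports 0 and 1 have been set up with glViewportIndexedf to cover the left and right halves of the render target (requires GL 4.1 or ARB_viewport_array; uniform names are hypothetical):

```glsl
// Geometry shader: duplicates each triangle into two viewports,
// rendering the two eyes side by side on one layer.
#version 410 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

uniform mat4 uEyeViewProj[2];   // [0] = left eye, [1] = right eye (assumed)

in vec3 vWorldPos[];            // world-space position from the vertex shader

void main()
{
    for (int eye = 0; eye < 2; ++eye) {
        for (int i = 0; i < 3; ++i) {
            gl_ViewportIndex = eye;  // select the left or right viewport
            gl_Position = uEyeViewProj[eye] * vec4(vWorldPos[i], 1.0);
            EmitVertex();
        }
        EndPrimitive();
    }
}
```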

There are also quad-buffer stereo cards (such as NVIDIA Quadro) that could be used.

Any post-processing stereo will only be an approximation, since you won't be able to (correctly) generate content that was hidden from the initially rendered viewpoint but should be visible from the faked viewpoint(s).
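For reference, the post-process approach the original poster asked about usually boils down to a depth-based horizontal reprojection: each pixel is shifted sideways by a disparity derived from its depth, and the disoccluded gaps have to be filled heuristically. A rough fragment-shader sketch (all uniform names are made up; it uses simple backward mapping instead of proper forward warping, which is exactly where the artifacts described above come from):

```glsl
// Fragment shader: synthesize one eye's view from a single
// center-view color + depth pair by horizontal disparity shift.
#version 330 core

uniform sampler2D uColor;       // rendered center-view color (assumed)
uniform sampler2D uDepth;       // rendered center-view depth (assumed)
uniform float uEyeSign;         // -1.0 for left eye, +1.0 for right eye
uniform float uMaxDisparity;    // max horizontal shift, in texture coords

in  vec2 vUV;
out vec4 fragColor;

void main()
{
    // Backward mapping: approximate the disparity at the target pixel
    // by sampling depth at the same location. This is wrong at depth
    // edges, which is where the disocclusion artifacts show up.
    float depth     = texture(uDepth, vUV).r;
    float disparity = uMaxDisparity * (1.0 - depth);  // nearer = bigger shift
    vec2  srcUV     = vUV + vec2(uEyeSign * disparity, 0.0);
    fragColor       = texture(uColor, srcUV);
}
```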
    Last edited by Dan Bartlett; 01-28-2013 at 06:49 AM.
