Anaglyphs with shader

hi. i was just wondering whether it’s possible to create anaglyph vision with a shader.

for each frame that is finally displayed, two shots of the scene have to be taken: one with the camera moved slightly to the left, and one moved slightly to the right. each picture should be color-filtered so that only the red values (or, for the other eye, the cyan or green/blue values) are visible. the pictures should then be merged together (i used the accumulation buffer for this in opengl) and displayed.
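just to make the merge step concrete, here is a toy sketch in plain python (hand-rolled “images” as lists of (r, g, b) tuples with made-up pixel values — not real opengl, but the channel arithmetic is the same):

```python
# toy sketch: merge a left-eye and right-eye "render" into one anaglyph image.
# images are flat lists of (r, g, b) tuples; a real renderer would work on
# framebuffers, but the per-channel selection is identical.

def anaglyph_merge(left_img, right_img):
    """Keep only red from the left eye and green/blue (cyan) from the right."""
    merged = []
    for (lr, lg, lb), (rr, rg, rb) in zip(left_img, right_img):
        merged.append((lr, rg, rb))  # red <- left eye, green/blue <- right eye
    return merged

# two tiny 2-pixel "renders" of the same scene from slightly shifted viewpoints
left  = [(200, 10, 10), (50, 60, 70)]
right = [(190, 20, 20), (55, 65, 75)]

print(anaglyph_merge(left, right))  # [(200, 20, 20), (50, 65, 75)]
```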

i always thought this wouldn’t work with a shader, since the displacement caused by moving the camera decreases for distant objects, while i assumed the displacement produced by a shader always stays the same.

can you tell me if i’m wrong or right?
thx

I don’t think you could do this entirely with a shader. A fragment program could easily change your colors to red or blue, and a vertex program could shear your projection matrix to give a decent eyepoint offset.

However, you would still need to render everything twice, and that would have to happen in the application. And if you are already doing things at the application level, you may as well take care of your projection matrices there too.
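For what it’s worth, the depth dependence the original poster asked about falls out of the projection itself, not the shader. A rough sketch (plain Python, simple pinhole camera with made-up numbers, not actual GL matrices): shifting the eye point by ±half the eye separation and projecting gives a screen-space disparity proportional to 1/z, so distant objects converge. A constant screen-space shift, which is what a naive fragment-level approach would produce, is the thing that stays the same at every depth.

```python
# pinhole projection: camera at x = cam_x looking down -z, focal length f.
# projected screen x of a point (x, z) is f * (x - cam_x) / z
def project_x(x, z, cam_x, f=1.0):
    return f * (x - cam_x) / z

def disparity(x, z, eye_sep=0.06):
    """Horizontal offset between the left-eye and right-eye projections."""
    left  = project_x(x, z, cam_x=-eye_sep / 2)
    right = project_x(x, z, cam_x=+eye_sep / 2)
    return left - right  # works out to f * eye_sep / z

# disparity shrinks as 1/z: near objects separate strongly, far ones barely
print(disparity(0.0, 1.0))   # 0.06
print(disparity(0.0, 2.0))   # 0.03
print(disparity(0.0, 10.0))  # 0.006
```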

> each of the pictures should be color-filtered so that only the red, or in the other case, the cyan or green/blue values are visible. the pictures should be merged together (i used the accumulation buffer for this in opengl)
A much more efficient way is to simply use glColorMask; see the very bottom of this page:
http://astronomy.swin.edu.au/~pbourke/opengl/redblue/

It works because red and cyan (green/blue) occupy disjoint color channels, so the two passes never overwrite each other.
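The two-pass idea can be illustrated with a toy software “framebuffer” (plain Python; the mask logic mirrors what glColorMask does, but none of this is real GL code and the pixel values are invented):

```python
# simulate glColorMask: a write only touches channels whose mask flag is True
def write_pixel(framebuffer, index, color, mask):
    old = framebuffer[index]
    framebuffer[index] = tuple(c if m else o for c, o, m in zip(color, old, mask))

fb = [(0, 0, 0)] * 2  # tiny 2-pixel framebuffer, cleared to black

# pass 1: left eye, red writes only  -> like glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE)
for i, color in enumerate([(200, 10, 10), (50, 60, 70)]):
    write_pixel(fb, i, color, mask=(True, False, False))

# pass 2: right eye, green/blue only -> like glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE)
for i, color in enumerate([(190, 20, 20), (55, 65, 75)]):
    write_pixel(fb, i, color, mask=(False, True, True))

print(fb)  # [(200, 20, 20), (50, 65, 75)]
```

No accumulation buffer needed: each pass renders straight into the same framebuffer, and the mask keeps the eyes from clobbering each other.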

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.