I have an app that has an option to draw 3D fractals in stereo as red/cyan anaglyphs. With a red/cyan anaglyph, the viewer wears colored glasses that pass the red channel to one eye and the green/blue (cyan) channels to the other eye; the brain fuses the two views into a single 3D image.
Currently the app uses glColorMask to draw the left-eye view into only the red channel and the right-eye view into only the green and blue (cyan) channels. This works fairly well, but it suffers if the source image contains highly saturated reds, greens, blues, or cyans: one eye sees the saturated color, but it appears black in the other eye because that eye's color channels are zero. There are also ghosting problems depending on the colors used. As a result, the user has to adjust the image's colors to make it look good as an anaglyph.
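For context, the current channel-masked pass looks roughly like this (setLeftEyeProjection, setRightEyeProjection, and drawScene are hypothetical stand-ins for my app's own routines):

```c
/* Left eye: write only the red channel (alpha left enabled). */
glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);
setLeftEyeProjection();
drawScene();

/* Clear depth so the right-eye geometry isn't occluded by the left pass. */
glClear(GL_DEPTH_BUFFER_BIT);

/* Right eye: write only the green and blue (cyan) channels. */
glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);
setRightEyeProjection();
drawScene();

/* Restore the mask for normal rendering. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
```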
I found an article on the web that describes applying a different color-matrix transformation to each eye's view: it shifts the left eye's colors toward red and the right eye's colors toward cyan, which keeps saturated colors visible to both eyes and reduces ghosting.
I modified my app to use glMatrixMode(GL_COLOR) to apply a color matrix while rendering each eye's geometry. It had no effect.
I then found a post on these boards saying that the GL_COLOR matrix only applies to pixel operations, not to geometry.
I think the solution is to:

1. Draw the left-eye view into the color buffer in full color.
2. Set the matrix mode to GL_COLOR and load the left-eye color matrix.
3. Copy the color buffer into the accumulation buffer.
4. Draw the right-eye view into the color buffer.
5. Load the right-eye color matrix.
6. Add the color buffer into the accumulation buffer.
7. Copy the accumulation buffer back to the color buffer.
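In GL calls, the sequence I have in mind is roughly the following sketch (drawScene and the two 4x4 column-major matrices are hypothetical placeholders; whether the color matrix actually affects glAccum is exactly what I'm unsure about):

```c
/* Pass 1: left eye in full color. */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene(LEFT_EYE);
glMatrixMode(GL_COLOR);
glLoadMatrixf(leftEyeColorMatrix);   /* hypothetical 4x4 matrix */
glAccum(GL_LOAD, 1.0f);              /* color buffer -> accumulation buffer */

/* Pass 2: right eye in full color. */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene(RIGHT_EYE);
glLoadMatrixf(rightEyeColorMatrix);  /* hypothetical 4x4 matrix */
glAccum(GL_ACCUM, 1.0f);             /* add color buffer into accumulation */

/* Combine: accumulation buffer -> color buffer. */
glAccum(GL_RETURN, 1.0f);
glMatrixMode(GL_MODELVIEW);
```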
Am I right in assuming that glAccum applies the current GL_COLOR matrix when transferring values between the color buffer and the accumulation buffer?