Hello OpenGL gurus,
Suppose I need to render the following scene:
Two cubes, one yellow, another red.
The red cube needs to ‘glow’ with red light, the yellow one does not glow.
The cubes are rotating around the common center of gravity.
The camera is positioned in such a way that when the red, glowing cube is close to the camera, it partially obstructs the yellow cube, and when the yellow cube is close to the camera, it partially obstructs the red, glowing one.
If not for the glow, the scene would be trivial to render. With the glow, I can see at least 2 ways of rendering it:
####### WAY 1 ###########
- Render the yellow cube to the screen.
- Compute where the red cube will end up on screen (easy: we have the vertices plus the modelview matrix), then render it to an off-screen FBO just big enough (leaving margins for the glow); make sure to save the depth values to a texture.
- Post-process the FBO and make the glow.
- Now the hard part: merge the FBO with the screen. We need to take the depth values (which we have stored in a texture) into account, so it looks like we need to do the following:
a) render a quad textured with the FBO’s color attachment.
b) set up the ModelView matrix appropriately (we need to offset the texture by some vector because, for speed reasons, we intentionally rendered the red cube to an FBO smaller than the screen in step 2).
c) in the ‘merging’ fragment shader, write gl_FragDepth from the FBO’s depth attachment texture (and not from gl_FragCoord.z).
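Step c could look roughly like this in GLSL (just a sketch; the sampler names `glowColor`/`glowDepth` and the `uv` varying are my assumptions, not fixed names):

```glsl
#version 330 core
// Hypothetical 'merge' fragment shader for WAY 1, step c.
uniform sampler2D glowColor;  // the small FBO's color attachment (glowed red cube)
uniform sampler2D glowDepth;  // the small FBO's depth attachment
in vec2 uv;                   // quad texcoords, already offset to the FBO's screen region
out vec4 fragColor;

void main()
{
    fragColor = texture(glowColor, uv);
    // Replacing the quad's own depth with the stored cube depth, so the
    // depth test against the yellow cube still works. Writing gl_FragDepth
    // is what kills early-z, hence the speed problem with this approach.
    gl_FragDepth = texture(glowDepth, uv).r;
}
```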
####### WAY2 ###########
- Render both cubes to an off-screen FBO; set up the stencil so that the unobstructed part of the red cube is marked with 1’s.
- Post-process the FBO so that the marked area gets blurred, and blend this in to make the glow.
- Blit the FBO to the screen
#######################
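For clarity, the stencil marking in WAY 2 could be set up like this (a sketch in plain C; FBO binding and the draw helpers `drawYellowCube`/`drawRedCube` are hypothetical):

```c
/* While rendering into the off-screen FBO: */

/* 1. Draw the yellow cube normally, without touching the stencil buffer. */
glDisable(GL_STENCIL_TEST);
drawYellowCube();                           /* hypothetical helper */

/* 2. Draw the red cube; wherever it passes the depth test (i.e. is
 *    unobstructed by the yellow cube), write 1 into the stencil buffer. */
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  /* replace only on depth pass */
drawRedCube();                              /* hypothetical helper */
```

The post-processing pass can then test with glStencilFunc(GL_EQUAL, 1, 0xFF) to restrict its work to the marked pixels.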
WAY 1 works, but its major problem is speed, namely step c of the final merge: writing to gl_FragDepth in the fragment shader disables the early z-test.
WAY 2 also kind of works, and it looks like it should be much faster, but it does not give 100% correct results.
The problem is that when the red cube is partially obstructed by the yellow one, pixels of the red cube that are close to the yellow one get ‘yellowish’ when we blur them, i.e. the closer, yellow cube ‘creeps’ into the glow.
I guess I could partly remedy this by stopping the blur as soon as the sampled depth suddenly decreases (meaning we just jumped from a farther object to a closer one). But that would double the texture accesses during blurring (in addition to fetching the COLOR texture we would keep fetching the DEPTH texture), and add a conditional statement to the blurring fragment shader. I haven’t tried it, but I am not convinced it would be any faster than WAY 1, and even then the result wouldn’t be 100% correct: red pixels near the border with the yellow cube would be influenced only by the visible part of the red cube, rather than the whole (-blurRadius, +blurRadius) area, so the glow would not be quite the same there.
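The depth-aware blur I have in mind could be sketched like this (all sampler/uniform names are assumptions; this variant rejects samples that are closer than the center pixel rather than literally breaking out of the loop, which has the same doubled-fetch cost described above):

```glsl
#version 330 core
// Hypothetical depth-aware 1D blur pass (run once per axis).
uniform sampler2D colorTex;    // glow color source
uniform sampler2D depthTex;    // matching depth texture
uniform vec2 texelStep;        // one-texel offset along the blur axis
const int blurRadius = 4;      // assumed radius

in vec2 uv;
out vec4 fragColor;

void main()
{
    float centerDepth = texture(depthTex, uv).r;
    vec4  sum    = texture(colorTex, uv);
    float weight = 1.0;

    for (int i = 1; i <= blurRadius; ++i)
    {
        vec2 offs = float(i) * texelStep;
        // Skip samples whose depth jumps closer than the center pixel:
        // those belong to the obstructing (yellow) cube, not the glow source.
        if (texture(depthTex, uv + offs).r >= centerDepth) {
            sum += texture(colorTex, uv + offs);
            weight += 1.0;
        }
        if (texture(depthTex, uv - offs).r >= centerDepth) {
            sum += texture(colorTex, uv - offs);
            weight += 1.0;
        }
    }
    fragColor = sum / weight;
}
```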
Would anyone have suggestions on how best to implement this kind of ‘per-object post-processing’?