The scenario: I am rendering a lot of particles, some blended additively, some via alpha blending. Right now I have to change the blend function every time the particle type changes. My idea was to handle all of that in a shader and use gl_Color.a to decide between additive blending (a == 0.0) and alpha blending (a != 0.0).
So I created a shader that takes the current scene, a particle texture and the scene's depth buffer as inputs and renders back into the current scene. The somewhat odd discovery I made is that I can use an FBO's color buffer both as the render target and as a texture source in the shader at the same time. I wonder whether the standard allows this, but my current OpenGL driver doesn't seem to object.
The idea is that the shader accumulates the blended particles in the frame buffer as it works through the particle list, applying the desired blend function to each particle.
Here’s my fragment shader (the shader also does soft blending, but that’s secondary):
uniform sampler2D particleTex, sceneTex, depthTex;
uniform float dMax;
uniform vec2 windowScale;

// ZNEAR 1.0
// ZFAR  5000.0
// A  5001.0 // (ZNEAR + ZFAR)
// B  4999.0 // (ZFAR - ZNEAR)
// C 10000.0 // (2.0 * ZNEAR * ZFAR)
// D (NDC (z) * B)
#define NDC(z)  (2.0 * (z) - 1.0)
#define ZEYE(z) (10000.0 / (5001.0 - NDC (z) * 4999.0)) // (C / (A - D))

void main (void) {
    vec2 sceneCoord = gl_FragCoord.xy * windowScale;
    // soft-particle fade: eye-space depth difference, mapped to [0, 1]
    float dz = clamp (ZEYE (gl_FragCoord.z) - ZEYE (texture2D (depthTex, sceneCoord).r), 0.0, dMax);
    dz = (dMax - dz) / dMax;
    vec4 sceneColor = texture2D (sceneTex, sceneCoord);
    vec4 particleColor = texture2D (particleTex, gl_TexCoord [0].xy) * gl_Color;
    if (gl_Color.a == 0.0) // additive
        gl_FragColor = vec4 (sceneColor.rgb + particleColor.rgb * dz, 1.0);
    else // alpha
        gl_FragColor = vec4 (mix (sceneColor.rgb, particleColor.rgb, particleColor.a * dz), 1.0);
}
The particle data (vertices, texture coordinates, color) are passed in client arrays.
What the shader does is blend the scene and particle fragment colors and write the result to the current fragment. The blend function is set to replacement (GL_ONE, GL_ZERO), so the resulting color completely replaces the frame buffer contents at the current fragment position.
I had hoped that assigning a color to gl_FragColor would update the render target, so that when rendering the next particle in line I'd be reading from the already updated scene. Unfortunately, this doesn't seem to be the case: apparently the frame buffer is not updated after each particle, so reading from it for the next particle does not return the scene as modified by the previous one. Instead, the original scene + the current particle completely replaces the previous state. So I can indeed blend a single particle with the frame buffer and write the result back, but the contributions don't accumulate across particles.
Is there a cure for this, or another way to improve batching for particles that use the same texture but different blend functions?