best blending mode for particles?

I have particles with a black (0,0,0) background and the particles themselves are white, so I can colorize them with glColor4f(). However, I want the alpha value to make them more or less transparent depending on the particle's life (so when they are old they get faded out and erased), but I can't find a blending mode that lets me do this! All the modes I tried would look okay if alpha was 0.0, but when it was raised the black color appeared all around the particle!

I want to be able to make my particles more or less transparent without seeing the black background they have. Is this possible with glBlendFunc()?

GL_SRC_ALPHA, GL_ONE for additive blending
GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA for real transparency
GL_ONE, GL_ONE for additive blending without alpha affecting the result
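For example, a minimal sketch of switching between these (nothing here beyond standard GL calls):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);   /* additive */
/* or: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   real transparency */
/* or: glBlendFunc(GL_ONE, GL_ONE);                         additive, alpha ignored */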

Jan.

glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR);

Then do:

Ratio=(float)RemainingTimeForParticle/LifeSpanOfParticle;

glColor3f(Ratio, Ratio, Ratio);

Note that for real blending you should draw the objects from back to front (relative to the viewer). Disabling the depth test isn’t useful in a full-blown game.
-Ehsan-

Don’t disable the test; just disable writing to the depth buffer.
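In GL terms that is just (a minimal sketch; DrawParticles() is a made-up placeholder):

glDepthMask(GL_FALSE);   /* keep the depth test, but don't write particle depth */
DrawParticles();         /* hypothetical: render all the particles */
glDepthMask(GL_TRUE);    /* restore depth writes for the rest of the scene */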

Originally posted by Omaha:
[b]glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR);

Then do:

Ratio=(float)RemainingTimeForParticle/LifeSpanOfParticle;

glColor3f(Ratio, Ratio, Ratio);[/b]
The way I stated above is nicer, because you can still use the color.

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

Then do:

Ratio=(float)RemainingTimeForParticle/LifeSpanOfParticle;

glColor4f(Red, Green, Blue, Ratio);
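Put together, a rough per-particle sketch (the Particles array, its fields and DrawParticleQuad() are made-up names, and sorting is left out):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);                 /* keep the depth test, skip depth writes */
for (int i = 0; i < NumParticles; ++i)
{
    float Ratio = (float)Particles[i].RemainingTime / Particles[i].LifeSpan;
    glColor4f(Particles[i].R, Particles[i].G, Particles[i].B, Ratio);
    DrawParticleQuad(&Particles[i]);   /* hypothetical: draws one textured quad */
}
glDepthMask(GL_TRUE);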

But most particle systems use additive blending.

Jan.

The problem with having specific “additive” and “modulate” modes is that you cannot blend between them… e.g. an additive fire particle that turns into an opaque smoke particle…

If you use GLSL you can get around this by modifying the alpha value in a fragment shader.

First look at the 2 functions

Key:
cs = src color
as = src alpha
cd = dst color

Additive:
cd = cs * as + cd

Modulate:
cd = cs * as + cd * (1 - as)

note there is very little difference between these modes …

So in your program set this blendmode:
cd = cs + cd * (1 - as) … which in gl is:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

Then when you draw a particle you need to pass a blend ratio value to your shader (send it as an extra texcoord or whatever)
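For instance, in immediate mode you could smuggle it in on an unused texture unit (assuming unit 1 is free and the shader reads it from gl_TexCoord[1].x; if the fixed-function vertex stage doesn't forward it, a trivial pass-through vertex shader will):

/* per particle, before its glVertex*() calls */
glMultiTexCoord1f(GL_TEXTURE1, br);   /* br: 0 = modulate, 1 = additive */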

Now in your shader you have:
cs = incoming color
as = incoming alpha
br = incoming blend ratio … 0(modulate) -> 1(additive)

so your fragment program sets the color output to …
cs = cs * as;
as = lerp(as, 0, br) … i.e. as * (1 - br)

Remember the blend operation happens after the shader, so this output goes into the blendfunc as [cs] and [as].

So when we get to our blend func:

cd = cs + cd * (1 - as)

We have already handled the blending of cs by the original alpha value, and we have modified that alpha value (by the blend ratio) so that the contribution of cd is variable (ie can be modulate or additive)

Now you can control the blend mode without having to change state all the time, and you can have a smooth transition between additive and modulate modes.
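A rough GLSL sketch of that fragment program, written as a C string ready for glShaderSource() (assumptions: the particle texture is on unit 0, the blend ratio arrives in gl_TexCoord[1].x as above, names are made up, and compile/link code is omitted):

const char *particleFragSrc =
    "uniform sampler2D particleTex;\n"
    "void main()\n"
    "{\n"
    "    vec4  c  = gl_Color * texture2D(particleTex, gl_TexCoord[0].xy);\n"
    "    float br = gl_TexCoord[1].x;          /* 0 = modulate, 1 = additive */\n"
    "    gl_FragColor.rgb = c.rgb * c.a;       /* cs = cs * as */\n"
    "    gl_FragColor.a   = c.a * (1.0 - br);  /* as = lerp(as, 0, br) */\n"
    "}\n";

/* on the GL side, one blend mode for everything: cd = cs + cd * (1 - as) */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);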

Originally posted by Omaha:
Don’t disable the test just disable writing to the depth buffer.
Well, it is useful in most situations. But, as an example, when we use the following blending function:
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
we should draw the objects from far to near. I had seen this comment in the NeHe articles before, but I didn’t understand its meaning until I experimented with the results. I have seen some games that use alpha testing and then use blending, but when I rotate the camera I see different results. The reason is that they don’t use an algorithm to draw the objects from far to near.
I wanted to give an example, but I’m really tired and couldn’t. When I feel better, I’ll come back to the message board and make use of this excellent environment.
-Ehsan-

The usual function for alpha blending is

BlendFunc (SRC_ALPHA, ONE_MINUS_SRC_ALPHA)

But as Ehsan said, you’ll need to sort your particles. There are other solutions, like doing it with multiple passes, which might be better for particle systems, because sorting thousands of entities is a real bottleneck (even with a quicksort).
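If you do sort, a rough sketch (assuming a hypothetical Particle struct that caches its eye-space distance in a ViewDepth field) is just a qsort, farthest first:

#include <stdlib.h>

static int CompareParticles(const void *a, const void *b)
{
    const Particle *pa = (const Particle *)a;
    const Particle *pb = (const Particle *)b;
    /* farthest first, so particles get drawn back to front */
    if (pa->ViewDepth > pb->ViewDepth) return -1;
    if (pa->ViewDepth < pb->ViewDepth) return  1;
    return 0;
}

/* once per frame, before drawing: */
qsort(Particles, NumParticles, sizeof(Particle), CompareParticles);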

There were fairly recent posts about multipass if you want to try that, but I’m not able to tell you more…

Hope that helps.

Well, yes, let’s say x and y are fragments. x is the incoming fragment, y is the existing corresponding pixel in the framebuffer. Let r be our alpha value. (Why r? On paper I used it for “rate”)

Therefore your resultant color (or one component) would be:

rx+(1-r)y=rx-ry+y

However if you now say that x is the existing pixel and y is the new fragment (IOW they both represent the same colors as before, except now we are drawing them in the reverse order) the result is:

ry+(1-r)x=ry-rx+x

So, if we assume the results are the same, then:

rx-ry+y=ry-rx+x
…algebra…
2rx-2ry+y-x=0, i.e. (2r-1)(x-y)=0, which only holds when r=1/2 or x=y.

So yes, you will usually get different results depending on your rendering order (these calculations assume a GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blending function combination).
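For a concrete example, take r = 0.75, x = 1.0 and y = 0.0: drawing x over y gives 0.75*1.0 + 0.25*0.0 = 0.75, while drawing y over x gives 0.75*0.0 + 0.25*1.0 = 0.25.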

But will you get drastically different results that will break the user’s (or player’s, for a game) perception of what you are trying to simulate?