My program has to draw clouds with impostors (sprites), so the sprite texture has to be rendered first. The problem is that, for a correct result, the color components of the fragments need a different blending function than the alpha component. The cloud itself consists of several thousand particles, and for a correct visible result the particles have to be rendered with
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). But this blends the alpha component with the same function as well, making the final destination alpha of the texture fragments much too low (too transparent). To get the texture with the right color and alpha components, the blending func for the alpha has to be
glBlendFunc(GL_ONE, GL_ONE)
which simply adds the alpha values, which is exactly right, I think: that way, the more particles are rendered to a given fragment, the less transparent it gets.
The only way I found to achieve this is rendering the impostor texture in two passes: first the color components, and then in a second pass only the alpha component, with a changed blend func and glColorMask(…).
This looks fine but needs two rendering passes… Is there a way to do this in one pass? Am I missing something trivial? Or is this a “good” way to do it? Would a fragment program help?
Or use GL_SRC_ALPHA_SATURATE, GL_ONE and draw front to back. The result isn’t exactly the same as with the blending you suggest (the colors saturate a bit too quickly), but it was good enough for my clouds: http://www.hut.fi/~ikuusela/images/depthblur.jpg
Well there’s a demo on my computer, but it’s quite a mess, so I’m not sure if I want to release it. We’ll see, but it’ll take some work before I can release it.
I’m not using impostors or anything, just rendering all the clouds as one set of slices into a low resolution texture (128*128 in that image) and drawing that as a fullscreen quad. That’s why they’re a bit blurry, but to me they look surprisingly good even on very low resolutions.
Thanks… glBlendFuncSeparateEXT seems to be the right thing, but somehow it just makes the program run extremely slowly and then crash. It’s a Linux program, and I am using a GeForce4 Ti 4200, so it should work!? Am I right in assuming that the PFNGLBLENDFUNCSEPARATEEXTPROC thing is Windows-only, and that on Linux I can just call the function directly (as is possible with other extensions like NV_point_sprite and multitexturing)? Does this function have to be enabled (the spec does not say anything about this)?
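For what it's worth, the PFNGLBLENDFUNCSEPARATEEXTPROC typedef comes from glext.h and is not Windows-only; on Linux the portable route is to fetch the entry point through glXGetProcAddressARB after checking the extension string (calling a bogus pointer is one common cause of crashes). A sketch, not runnable on its own since it assumes a current GL context:

```c
/* Sketch only: assumes a live GLX context and <GL/glext.h> on the include path. */
#include <GL/gl.h>
#include <GL/glx.h>
#include <GL/glext.h>
#include <string.h>

void setup_separate_blend(void)
{
    /* Check the extension string before grabbing the function pointer. */
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    if (!exts || !strstr(exts, "GL_EXT_blend_func_separate"))
        return; /* not supported: fall back to the two-pass method */

    PFNGLBLENDFUNCSEPARATEEXTPROC blendFuncSeparate =
        (PFNGLBLENDFUNCSEPARATEEXTPROC)
        glXGetProcAddressARB((const GLubyte *)"glBlendFuncSeparateEXT");
    if (!blendFuncSeparate)
        return;

    /* Nothing extra to enable: it replaces glBlendFunc while GL_BLEND is on. */
    glEnable(GL_BLEND);
    blendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, /* RGB factors   */
                      GL_ONE, GL_ONE);                      /* alpha factors */
}
```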
I’m not using impostors or anything, just rendering all the clouds as one set of slices into a low resolution texture
That’s an impostor.
I ran into this problem of alpha and color needing separate blend functions as well. I used color mask and made two passes (one for color, one for alpha). I don’t think that SGI extension was supported when I was doing this. That was a couple of years ago, though.
I’m not using impostors or anything, just rendering all the clouds as one set of slices into a low resolution texture (128*128 in that image) and drawing that as a fullscreen quad. That’s why they’re a bit blurry, but to me they look surprisingly good even on very low resolutions.
Not sure I understand: why is rendering to a texture needed? Can’t you render your slices directly to the screen?
Yeah they look good, and volumetric (which is the reason why they look good IMO), but there’s some bad aliasing on their contours. I wonder how they’d look with higher-res textures (say, 512x512?).
Actually they look damn good on 256*256 already. The reason I use R2T is that I’m drawing 200 or so slices, so rendering them directly to screen would eat up way too much fillrate. On resolutions that low you can draw practically all the slices you want.
The clouds consist of two 2d textures, first of which is a thickness map, comparable to the usual heightmaps. This one includes simple shading in the color channel.
The other one is a kind of profile/detail map, which is mapped orthogonally to the first. Its alpha is greatest in the middle and goes to zero at the top and bottom. There’s also some noise in it. It’s a little darker at the bottom to make the bottoms of the clouds dark.
Then there’s some wicked combiner function to give them the correct appearance…
I believe that you’re rendering your clouds to a buffer and want to use that buffer as a texture later but your alpha channel is not calculated correctly.
If the problem is as described above, you can premultiply your color channels by the alpha channel and use the blending function GL_ONE, GL_ONE_MINUS_SRC_ALPHA. Remember to blend your layers back to front.
This will give you correct color and alpha channels in the destination buffer without using any extensions.
Yes, it really works… and drawing the sprites in only one pass gives about a 10% speedup. Now my program draws 100 clouds with 5000 particles each and still gets about 45 fps.
Originally posted by Niels Husted Kjaer: If the problem is as described above, you can premultiply your color channels by the alpha channel and use the blending function GL_ONE, GL_ONE_MINUS_SRC_ALPHA. Remember to blend your layers back to front.
Exactly. No extensions needed.
Impostors?
[This message has been edited by tie (edited 05-27-2003).]
In general it works with opacity-weighted colours, but sometimes the clouds appear to have “spots” or “measles”, namely when rather dark particles are hidden by rather light/white ones (the particles have different shades of grey, as a cloud has lighter and darker parts).
If I got it right, using opacity-weighted colours means multiplying the color components of each texel by the alpha component and blending with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), so mathematically, as far as the colours are concerned, it is EXACTLY the same as “standard” blending with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). So I do not understand why it looks faulty under certain circumstances. Can the problem come from the fact that the particles have different colors, specified by glColor4f(r, g, b, 1.0)?