Alpha Buffer / Alpha Bits

How can I get more alpha bits? glGetIntegerv(GL_ALPHA_BITS) gives 8… need more. (Using Linux and GLUT)

I’m working on a particle system. Given the nature of the project, I’m cheating by not making actual particles, but rather layering a bunch of textured quads on top of each other, where each quad is textured with an RGBA texture.

I have to use 128 layers of these images to achieve a dense enough particle field, and I need each particle to be nearly transparent. In this way, solid white is only seen where a lot of particles overlap.

The problem that arises is that I need to add up really, really small alpha values. With 8 bits, I must use an alpha value that is larger than 0.00196078431 (= 1/(2 * (2^8 - 1)) = 1/510), but I need to use smaller values than this, or I get rough edges in my particle field. A simple gradient ramp is really chunky right now. The following images should be fading from transparent to opaque.


…so, how do I set the alpha bits? How is that related to BPP? And how are these related to OpenGL/GLUT, and would changing them add platform dependency?

Thanks!

Try floating point color buffers, maybe? That’s the obvious solution. It is a platform-dependent one, though, because you have to explicitly ask for a float color buffer through WGL_ARB_pixel_format_float or GLX_ARB_fbconfig_float, and I think GLUT does not support float color buffers yet. Maybe if you render to a framebuffer object and copy to the window frame buffer you can achieve the same thing? I think you only need the extra precision during the blending operations, so this way you don’t need to specify a float window frame buffer and thus you can still use GLUT. Bear in mind that float blending is only supported on GeForce 6 GPUs and later (not sure about ATI).
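For reference, a rough (untested) sketch of what the GLX side of that ask looks like; it assumes your driver actually exposes GLX_ARB_fbconfig_float, and pick_float_config is just an illustrative helper:

#include <GL/glx.h>
#include <GL/glxext.h>  /* GLX_RGBA_FLOAT_BIT_ARB lives here */

/* Attribute list asking for a 16-bit-per-component floating point
 * color buffer instead of the usual normalized GLX_RGBA_BIT one. */
static const int float_attribs[] = {
    GLX_RENDER_TYPE,   GLX_RGBA_FLOAT_BIT_ARB,
    GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
    GLX_RED_SIZE,   16,
    GLX_GREEN_SIZE, 16,
    GLX_BLUE_SIZE,  16,
    GLX_ALPHA_SIZE, 16,
    GLX_DOUBLEBUFFER, True,
    None
};

GLXFBConfig *pick_float_config(Display *dpy)
{
    int count = 0;
    /* Returns NULL when no float-capable config exists on this machine. */
    return glXChooseFBConfig(dpy, DefaultScreen(dpy), float_attribs, &count);
}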

I apparently have been relying far too heavily on GLUT. Do you know of a good Linux OpenGL tutorial that does not use GLUT? Google wasn’t much help, ’cause I have no idea how to implement your suggestion.

You can use Qt or SDL on Linux to set up OpenGL, but if you do not want either of these in your project, there is an alternative: GLX
(not portable: it works only on Linux/X11 and is hard for beginners, but it gives far more control over the app than any other approach).

link:
http://sidvind.com/wiki/Xlib_and_GLX:_Part_1

You can also look at the NeHe tutorials (maybe a bit outdated, but some of them have versions written without GLUT).

It would probably be easier to just create an RGBA16 Framebuffer Object and use that. No need to play with the initialization code or anything.
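Something along these lines, roughly (an untested sketch: the 512x512 size is arbitrary, and in practice you’d check that EXT_framebuffer_object is supported first):

#include <GL/gl.h>
#include <GL/glext.h>

/* Creates a 512x512 RGBA16 render target; returns the FBO id (0 on failure)
 * and hands back the color texture through out_tex. */
GLuint create_rgba16_fbo(GLuint *out_tex)
{
    GLuint fbo = 0, tex = 0;

    /* 16 bits per component: alpha now has 65535 steps instead of 255. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT, NULL);

    /* Attach it to an FBO and render the particle layers into that. */
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0;  /* fall back to the plain window framebuffer */

    /* Draw the 128 layers while this FBO is bound, then bind FBO 0
     * again and draw tex as a fullscreen quad into the window. */
    *out_tex = tex;
    return fbo;
}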

So I tried switching to SDL, and saw no improvement. I tried 16 vs 32 bpp, and I tried SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, various_values), but I kept getting exactly the same output. SDL does not actually list SDL_GL_ALPHA_SIZE in their documentation; I just hoped it would work.
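For reference, this is roughly the setup I was testing (reconstructed sketch, not my exact code; window size and bpp are arbitrary):

#include <SDL/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);

    /* Attributes must be set BEFORE SDL_SetVideoMode, or they are ignored. */
    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);   /* I tried various values here */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);

    int alpha_bits = 0;
    SDL_GL_GetAttribute(SDL_GL_ALPHA_SIZE, &alpha_bits);  /* what we really got */

    SDL_Quit();
    return 0;
}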

I was trying to read up on and implement a framebuffer object, but the line glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo); kept crashing. I was reading http://www.gamedev.net/reference/articles/article2331.asp

However, I then came across accumulation buffers. It seems as if this is exactly what I need. I’m reading the following, and hope to implement it tomorrow.
http://www.opengl.org/resources/code/samples/sig99/advanced99/notes/node115.html
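If I’m reading those notes right, the plan would look something like this (untested sketch; draw_particle_layer stands in for my own drawing code):

#include <GL/glut.h>

/* At startup: ask GLUT for an accumulation buffer. */
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ACCUM);

/* Per frame: accumulate each of the 128 layers at 1/128 weight, so the
 * summing happens in the higher-precision accumulation buffer. */
glClear(GL_ACCUM_BUFFER_BIT);
for (int layer = 0; layer < 128; ++layer) {
    glClear(GL_COLOR_BUFFER_BIT);
    draw_particle_layer(layer);        /* placeholder */
    glAccum(GL_ACCUM, 1.0f / 128.0f);
}
glAccum(GL_RETURN, 1.0f);
glutSwapBuffers();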

Not 16 bits per pixel, but 16 bits per component. Anyway, accumulation buffers are slow and outdated, and you can do the same thing with framebuffer objects, in a system-independent way. Plus you can always ask for floating point buffers, which, depending on the situation, is a bit better for precision!
One line crashing is strange but predictable behavior. Did you generate your framebuffer before binding it? Did you initialize the function pointers properly? (This is a classic!)
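On Linux the EXT_framebuffer_object entry points are not guaranteed to be exported directly, so they usually have to be fetched at runtime (or just use GLEW and call glewInit() once). A rough sketch, with _p suffixes only to avoid clashing with the glext.h declarations:

#include <GL/glx.h>
#include <GL/glext.h>

static PFNGLGENFRAMEBUFFERSEXTPROC glGenFramebuffersEXT_p = NULL;
static PFNGLBINDFRAMEBUFFEREXTPROC glBindFramebufferEXT_p = NULL;

static void load_fbo_entry_points(void)
{
    /* Calling through a NULL pointer gives exactly the one-line crash
     * you describe, so do this once before any FBO call. */
    glGenFramebuffersEXT_p = (PFNGLGENFRAMEBUFFERSEXTPROC)
        glXGetProcAddressARB((const GLubyte *)"glGenFramebuffersEXT");
    glBindFramebufferEXT_p = (PFNGLBINDFRAMEBUFFEREXTPROC)
        glXGetProcAddressARB((const GLubyte *)"glBindFramebufferEXT");
}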
Actually, looking at your pictures again, I find it really strange that the images you posted come out the way they do. If the alpha value is too small then the result is clamped to zero, so completely black. If the alpha value is reasonably large then you can get at least 256 different values of white, in potentia. But in your first picture there are only 3 shades of color. Why?

So I had 128 layers of images, so the observed opacity really becomes alpha*128. To achieve an output of 0:
0/128 = 0, so I would set each image to 0.

The following value is the largest per-image value that still gets rounded down to 0:
1 / (2 * (2^8 - 1)) = 1/510 = 0.00196078431

Now multiply that by 128:
0.00196078431 * 128 = 0.250980392

So this means I cannot achieve the appearance of an alpha value where 0 < a <= 0.250980392.

I did manage to get it working using an accumulation buffer, and I did not notice a speed problem.

I’m not really sure what you meant about system dependency. I simply enabled the accumulation buffer through OpenGL and passed in values. Do you have reason to think that I should switch from an accumulation buffer to a framebuffer object?

It looks like an 8 bit gradient map, not 16 bit.

I think you are just layering/blending your quads wrong.

But let’s say you wanted to simulate 16-bit banding vs 8-bit banding: one of the most common tricks is to blend a slight grain over your gradient.
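As a rough fixed-function sketch of the idea (texture size and grain strength are arbitrary, and draw_fullscreen_textured_quad is a placeholder):

#include <stdlib.h>
#include <GL/gl.h>

/* Build a small tileable random luminance texture once. */
GLuint make_grain_texture(void)
{
    GLubyte noise[64 * 64];
    for (int i = 0; i < 64 * 64; ++i)
        noise[i] = (GLubyte)(rand() % 256);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 64, 64, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, noise);
    return tex;
}

/* After drawing the gradient, blend the grain over it very faintly,
 * so the 8-bit bands get broken up instead of forming hard steps. */
void overlay_grain(GLuint grain_tex)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0f, 1.0f, 1.0f, 0.03f);        /* grain strength */
    draw_fullscreen_textured_quad(grain_tex);  /* placeholder */
}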

Wow, just a minute there! I don’t know if I got this right… Actually, there are 256 values (0 included) in an 8-bit per component buffer, so in fact the smallest nonzero value that can be represented is 1/(2^8 - 1) = 0.0039…
But actually, I just now had a moment of glorious epiphany and thus understood the nature of your problem!
In order to get full white, you need 128 * (2 * 0.0039) = 1.0, right? (Strictly speaking it’s 1.0 + 0.0039, but anyway…)
Which means that when your texture has a value of (2 * 0.0039) you get full white. However, that leaves your texture with only 3 representable values without clamping intervening: 0, 0.0039 and 2 * 0.0039, since 0.0039 is the ‘quantum’ of value representable with an 8-bit precision texture. So, the 3 colours in your first picture are naturally explained!
Now, if you use a 16 bit buffer, you overcome the clamping that comes from blending but not the clamping that comes from the texture itself.
The solution? Use 16 bit per component framebuffers and textures!
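In other words, upload the particle texture with a 16-bit internal format as well, something like this (width, height and texels16 are placeholders; it assumes you have unsigned-short source data):

/* GL_RGBA16 asks for 16 bits per component of texture storage;
 * feeding GL_UNSIGNED_SHORT texels means no precision is lost on upload. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT, texels16);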

[Edit] If accumulation buffers work for you, use them. It seems hardware can accelerate them now; that wasn’t always the case though!

[Edit 2] I’m really curious whether you used 16-bit textures in your second picture, because I think the problem described above still persists in a 16-bit buffer without 16-bit textures. Did you use another method?