blending operation resolution

I am blending 256 CT slice images to generate a DRR image (a simulated X-ray image). The original CT slices are texture-mapped and blended into the frame buffer one by one. The following is the code I am using for blending:

glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);                        /* dest = src*srcFactor + dest*dstFactor */
glBlendFunc(GL_CONSTANT_ALPHA_EXT, GL_ONE);          /* srcFactor = constant alpha, dstFactor = 1 */
g_reg.glBlendColorEXT(1.f, 1.f, 1.f, 2.0f / 256.0f); /* constant alpha = 2/256 */

The code multiplies each slice texture by 2.0/256 and adds the result to the frame buffer, and it works to some degree. As far as I know, the frame buffer has four color channels and each channel is 8 bits. I don’t understand how the code works: if an 8-bit number is multiplied by 2/256, the result is either 0 or 1 (after being shifted right by 7 bits). Is the blending operation done in floating point? Or does the hardware have more than 8 bits of pixel depth? I am using an NVidia FX5200. How does the blending work internally?
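
To show what I mean, here is a quick standalone check of that arithmetic (just a simulation of a plain 8-bit multiply with truncation, not actual GL code):

/* Hypothetical: assumes the multiply really happens on plain 8-bit
 * integers with truncation, which is what I am asking about. */
#include <stdio.h>

int main(void)
{
    unsigned s;
    for (s = 0; s <= 255; s += 51) {
        unsigned truncated = (s * 2) / 256;     /* same as shifting right by 7 */
        double   exact     = s * (2.0 / 256.0); /* the value we actually want  */
        printf("src=%3u  truncated=%u  exact=%.4f\n", s, truncated, exact);
    }
    return 0;
}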

The OpenGL specification says that blending is “effectively” carried out in floating point, but most current hardware implements it using fixed point math. The number of bits used in the multipliers varies across different hardware.

http://www.opengl.org/documentation/spec…000000000000000
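
To make the fixed-point path concrete, here is a minimal sketch of how such a blend multiplier might evaluate src * constant_alpha. The bit widths and rounding mode are assumptions for illustration only; the real behaviour of the blend unit isn't documented:

/* Sketch: constant alpha stored as an 8-bit integer a in [0, 255], the
 * product src*a held exactly in a wider register, then rescaled back to
 * 8 bits with round-to-nearest. These details are assumptions, not
 * NVIDIA documentation. */
#include <stdio.h>

static unsigned blend_mul(unsigned src8, unsigned a8)
{
    return (src8 * a8 + 127u) / 255u;  /* (src * a) / 255, rounded */
}

int main(void)
{
    /* alpha = 2/256 quantized to n/255 */
    unsigned a = (unsigned)(2.0 / 256.0 * 255.0 + 0.5);
    printf("alpha stored as %u/255\n", a);
    printf("src=200 -> %u  (200*2/255 = 1.57, not just 0 or 1)\n", blend_mul(200, a));
    printf("src= 60 -> %u  ( 60*2/255 = 0.47, rounds to 0)\n", blend_mul(60, a));
    return 0;
}

With alpha stored as 2/255 and the product rounded rather than truncated, a bright pixel contributes 2 per slice and a dim one contributes 0, which is why the result "works to some degree" but is heavily quantized.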

In your case it sounds like the destination frame buffer is only 8 bits per channel, so I wouldn’t expect good results from adding together images that have been multiplied by 2.0 / 256.0.
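
As a rough illustration of why (assuming each blended slice is rounded to the nearest 8-bit value and saturated, which is a guess at the hardware's behaviour), compare accumulating 256 identical slices into an 8-bit destination against the same sum done in floating point:

/* Sketch only: models an 8-bit destination that is re-quantized after
 * every blended slice, versus an ideal floating-point accumulator. */
#include <stdio.h>

int main(void)
{
    const double alpha = 2.0 / 256.0;
    const unsigned slices[] = { 25, 50, 100 };
    size_t i;

    for (i = 0; i < sizeof slices / sizeof slices[0]; ++i) {
        unsigned src  = slices[i];
        unsigned dst8 = 0;    /* 8-bit destination, quantized per pass */
        double   dstf = 0.0;  /* ideal floating-point accumulation     */
        int pass;

        for (pass = 0; pass < 256; ++pass) {
            unsigned contrib = (unsigned)(src * alpha + 0.5); /* round per pass */
            unsigned sum = dst8 + contrib;
            dst8 = sum > 255 ? 255 : sum;                     /* GL saturates   */
            dstf += src * alpha;
        }
        printf("src=%3u  8-bit dest=%3u  float dest=%6.1f\n", src, dst8, dstf);
    }
    return 0;
}

With numbers like these, most of each slice's contribution is either dropped or rounded up on every pass, which is why the DRR comes out badly quantized in an 8-bit buffer.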

FYI, the GeForce 6 series does support actual floating-point blending to floating-point pbuffers.
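
If that hardware is an option, here is a rough sketch of one way to get a floating-point color target, using GL_EXT_framebuffer_object with an ARB_texture_float texture rather than a WGL pbuffer. It assumes the driver exposes both extensions and that the entry points are loaded (e.g. via GLEW), and it is untested, so treat it as an outline only:

/* Sketch only: create a 16-bit float color target to blend the slices into.
 * Assumes GL_EXT_framebuffer_object and GL_ARB_texture_float are available. */
#include <GL/glew.h>

static GLuint create_float_target(GLsizei width, GLsizei height)
{
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0;  /* fall back to the 8-bit path */

    /* Blend the 256 slices with the same glBlendFunc/glBlendColor setup;
     * the additions now keep their fractional precision. */
    return fbo;
}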