I am blending 256 CT slice images to generate a DRR (a simulated X-ray image). The original CT slices are texture-mapped and blended into the frame buffer one by one. This is the code I am using to set up the blending:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);                     // dst = src * srcFactor + dst * dstFactor
glBlendFunc(GL_CONSTANT_ALPHA_EXT, GL_ONE);       // srcFactor = constant alpha, dstFactor = 1
g_reg.glBlendColorEXT(1.f, 1.f, 1.f, 2.0/256);    // constant alpha = 2/256 (EXT_blend_color)
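
For context, the per-slice loop looks roughly like this (sliceTex[] and drawSlice() are my own helpers that bind one CT slice texture and render it as a textured quad):

/* render loop sketch: blend all 256 slices into the frame buffer */
for (int i = 0; i < 256; ++i) {
    glBindTexture(GL_TEXTURE_2D, sliceTex[i]);  /* one CT slice texture */
    drawSlice(i);                               /* draws one textured quad */
}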
So each slice texture is multiplied by 2/256 and the result is added to the frame buffer, and this works to some degree. As far as I know the frame buffer has four color channels at 8 bits each, and that is what I don't understand: if an 8-bit value is multiplied by 2/256 (i.e. shifted right by 7 bits), the per-slice contribution comes out as only 0 or 1. Is the blending done in floating point? Or does the hardware have more than 8 bits per channel? I am using an NVidia FX5200. How does the blending work internally?
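
To make my concern concrete, here is a quick stand-alone C simulation of what I imagine an 8-bit blend would do per pass (the slice value 60 and the round-to-nearest quantization are just my assumptions, not what I know the hardware does):

/* precision_test.c -- simulate an 8-bit per-pass blend versus a
   float accumulator for 256 slices at constant alpha = 2/256. */
#include <stdio.h>

int main(void)
{
    const float alpha = 2.0f / 256.0f;  /* the constant blend alpha */
    unsigned int fb8 = 0;               /* 8-bit frame buffer channel */
    float        fbf = 0.0f;            /* hypothetical float accumulator */

    for (int i = 0; i < 256; ++i) {
        int slice = 60;                 /* example slice intensity */

        /* 8-bit path: the source term is quantized before the add;
           60 * 2/256 = 0.46875, which rounds to 0 every pass */
        unsigned int term = (unsigned int)(slice * alpha + 0.5f);
        fb8 += term;
        if (fb8 > 255) fb8 = 255;       /* channel saturates at 255 */

        /* float path: no per-pass quantization */
        fbf += slice * alpha;
    }

    printf("8-bit accumulation: %u\n", fb8);    /* prints 0 */
    printf("float accumulation: %.2f\n", fbf);  /* prints 120.00 */
    return 0;
}

If this per-pass quantization really happened, a dim slice like this would contribute nothing at all to the final image, yet as I said the code works to some degree, which is why I suspect there is more internal precision than 8 bits.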