Internal data formats in OpenGL

Hi everybody!

I am doing volume rendering in OpenGL under Windows using 3D textures. Since I am rendering scientific data, I want as much detail as possible, which means using a lot of slices. As I increase the number of slices, I compute each slice's color so that its alpha value gets smaller and smaller, in order to keep the same global attenuation. The problem is that I eventually reach a point where the alpha values are so small that they no longer have any effect.
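
For reference, the scaling I apply is essentially the standard opacity-correction formula (a minimal sketch; the helper name and the reference slice count n_ref are just for illustration, not my actual code):

    #include <math.h>

    /* Opacity correction: with n_new slices instead of the reference
     * n_ref, each slice's alpha must shrink so that the accumulated
     * attenuation along a ray stays the same. */
    float corrected_alpha(float alpha_ref, int n_ref, int n_new)
    {
        /* alpha_new = 1 - (1 - alpha_ref)^(n_ref / n_new) */
        return 1.0f - powf(1.0f - alpha_ref, (float)n_ref / (float)n_new);
    }

With a large n_new, the corrected alpha quickly drops below 1/255, which is exactly where 8-bit storage rounds it away.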

The only solution I can see would be to do the computation with values stored on at least 2 bytes instead of only one, but I don't know how to do that. If I set the internal format of the texture to GL_ALPHA16 instead of GL_ALPHA8, glGetTexLevelParameteriv(GL_TEXTURE_3D_EXT, 0, GL_TEXTURE_ALPHA_SIZE, &val2) still gives me 8 as a result, while glGetTexLevelParameteriv(GL_TEXTURE_3D_EXT, 0, GL_TEXTURE_INTERNAL_FORMAT, &val1) returns the internal format I asked for. Moreover, even if the texture were stored on 16 bits instead of 8, I am not sure the data wouldn't be reduced to 1 byte during OpenGL's computations anyway. I have tried changing the pixel format descriptor and using glutInitDisplayString, hoping that this would define the format used for OpenGL's calculations, but it seems that an alpha size bigger than 8 bits is not supported.
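
This is roughly what I am doing (a sketch; width, height, depth and voxels are placeholders for my real data, and I assume a current GL context plus a 3D-texture entry point, i.e. glTexImage3D from GL 1.2 fetched via wglGetProcAddress on Windows, or glTexImage3DEXT from EXT_texture3D):

    GLint alphaBits = 0, internalFmt = 0;

    /* Request a 16-bit alpha texture... */
    glTexImage3D(GL_TEXTURE_3D, 0, GL_ALPHA16,
                 width, height, depth, 0,
                 GL_ALPHA, GL_UNSIGNED_SHORT, voxels);

    /* ...then ask the driver what it actually allocated. */
    glGetTexLevelParameteriv(GL_TEXTURE_3D, 0,
                             GL_TEXTURE_ALPHA_SIZE, &alphaBits);
    glGetTexLevelParameteriv(GL_TEXTURE_3D, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &internalFmt);

    /* internalFmt comes back as GL_ALPHA16, but alphaBits is still 8,
     * i.e. the driver seems to allocate only 8 bits despite the request. */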

In short, is the format used for OpenGL's computations fixed by the OS or the hardware, with no way to change it, or can it be changed? If it can, please tell me how.

Thanks a lot.

Read the following discussion thread; it applies to your problem too.
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/009958.html

Actually, I am using a GeForce FX 5900 Ultra, which has true 128-bit color precision (thanks to the CineFX 2.0 engine), according to http://www.nvidia.com/view.asp?PAGE=fx_5900

Instead of resorting to OpenGL tricks, I would rather use this 128-bit hardware feature if possible, but I still don't know how. The rendering is currently definitely not computed with 32 bits per color channel, so how can I “activate” it? I hope this feature is not reserved for floating-point buffers, which do not support blending.
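
The only thing I have found so far is to check whether the relevant extension is exposed at all (a minimal sketch; GL_NV_float_buffer is just the extension name I came across for this generation of hardware, I am not sure it is the right one for what I want):

    #include <string.h>
    #include <GL/gl.h>

    /* Crude substring check of the extension string; needs a current GL context. */
    static int has_extension(const char *name)
    {
        const char *exts = (const char *) glGetString(GL_EXTENSIONS);
        return exts != NULL && strstr(exts, name) != NULL;
    }

    /* e.g. has_extension("GL_NV_float_buffer") */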

Does anybody know how to solve my problem?