Precision loss using GL_RGBA32F texture.

Hello,

I am working on a two-pass algorithm in which I write into a GL_RGBA32F texture at the end of the first pass. I have noticed that when reading from that texture, both on the GPU during the second pass and during a test readback to the CPU, the 12 least significant bits of the mantissa are set to 0.
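
For context, this is roughly how I expect the render target for the first pass to be set up (a simplified sketch; the 1024x1024 size and the handle names are illustrative, since the actual allocation is done by our framework):

GLuint tex = 0, fbo = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Request full 32-bit float storage per channel.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1024, 1024, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);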

For example, if I output 0.509 (= 0.509000… = 0x3F024DD3) to all of the channels:

out vec4 fragColor;
void main() {
  fragColor = vec4(0.509);
}

and then read this back on the CPU:

float* bufferFloat = new float[1024*1024*4];  // 1024x1024 texels, 4 floats each
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, bufferFloat);

I get the value 0.50878906 (= 0x3F024000). The last 12 bits of the mantissa have been set to 0!
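
Just to double-check the numbers, clearing the 12 least significant mantissa bits of 0.509 on the CPU reproduces exactly the value I read back (a standalone sketch, not part of the actual code):

#include <cstdio>
#include <cstring>
#include <cstdint>

int main() {
  float f = 0.509f;                      // bit pattern 0x3F024DD3
  std::uint32_t bits;
  std::memcpy(&bits, &f, sizeof bits);
  bits &= ~0xFFFu;                       // zero the 12 least significant mantissa bits
  std::memcpy(&f, &bits, sizeof f);
  std::printf("%.8f (0x%08X)\n", f, (unsigned)bits);  // prints 0.50878906 (0x3F024000)
  return 0;
}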

This all happens using regular OpenGL (no specific profile) on Windows 7, on a system with two NVIDIA Quadro 5000 cards with driver version 326.19.

Is there some precision setting that I’ve forgotten to set, or is something else going wrong?

Kind regards,
Rick Appleton

While preparing a sample to showcase the problem I found the issue: the framework we are using silently created a GL_RGBA16F texture instead of a GL_RGBA32F texture. Half-precision floats have only 10 explicit mantissa bits, which explains the zeroed low bits I was seeing.
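
For anyone hitting the same symptom: asking the driver which internal format was actually allocated makes this easy to spot (a small sketch; tex stands for the texture handle created by the framework):

GLint internalFormat = 0;
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
// A proper float target reports GL_RGBA32F (0x8814); a downgraded one reports GL_RGBA16F (0x881A).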