Problem with precision of frag shader calculation

Hi,
what is the expected value - on screen - of this fragment shader code? (It colors a quad; I have deleted the irrelevant lines.)
"#version 150
"
“out vec4 color;”
“void main() {”
“float colval=16380.0/16383.0;”
“color = vec4(colval,colval,colval,alpha);”
“}”;

When I use glReadPixels to read back the rendered value I get 1.000000. I need to go down to something like 16200.0/16383.0 before the output is slightly less than 1 (0.996 or so).
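The readback is essentially this (simplified; x and y are just the pixel I am checking):

GLfloat pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, pixel);
printf("%f\n", pixel[0]); /* prints 1.000000 */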
So I figure I have some sort of precision problem in the shader? Where should I be looking to identify the problem?
My framebuffer?

Thanks in advance,
Soren

Maybe this is because your render target is 8 bits per component. The float value is then converted to a byte (mapping [0, 1] to [0, 255]) before being written into the RT, which explains the precision loss: 16380.0/16383.0 ≈ 0.99982 rounds to byte 255 and reads back as exactly 1.0, while the largest value you can read back below 1.0 is 254/255 ≈ 0.9961.
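You can reproduce the round trip on the CPU. A minimal C sketch of the float-to-byte-to-float conversion (round to nearest, which is what the GL does for unsigned normalized formats):

#include <math.h>
#include <stdio.h>

/* Simulate the 8-bit round trip: float -> byte -> float */
static float roundtrip8(float v)
{
    return roundf(v * 255.0f) / 255.0f;
}

int main(void)
{
    printf("%f\n", roundtrip8(16380.0f / 16383.0f)); /* 1.000000 (byte 255) */
    printf("%f\n", roundtrip8(16200.0f / 16383.0f)); /* 0.988235 (byte 252) */
    return 0;
}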

Yes you are right - thank you.
I queried the channel and it is 8 bits, which I guess I should have known, since 24-bit color seems to be the standard.
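For reference, querying the channel size of the default framebuffer looks something like this (GL 3.x core, where the old GL_RED_BITS query is deprecated):

GLint redBits = 0;
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0); /* default framebuffer */
glGetFramebufferAttachmentParameteriv(GL_READ_FRAMEBUFFER, GL_BACK_LEFT,
                                      GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE,
                                      &redBits);
/* redBits comes back as 8 here */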
The problem arose because I rescaled a uint to a float and then converted back to integers; I will just render to my own FBO directly as integers instead.
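Something along these lines (untested sketch; width and height are placeholders):

GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

/* the fragment shader then writes an unsigned integer:
       out uvec4 color;  ...  color = uvec4(16380u, 0u, 0u, 0u);
   and readback keeps the exact integer: */
GLuint value;
glReadPixels(0, 0, 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &value); /* value == 16380 */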

Thanks again!