weird thing with glClearColor

Hello again,

second post today!

I think I am missing something. Suppose you specify a clear color of (0.6f, 1.f, 1.f) with glClearColor and, after rendering (even though I am not doing any rendering, just clearing the buffer), pull out the pixels with glReadPixels. If you use unsigned byte as the type, shouldn't a background pixel come back with r = 0.6 * 255 = 153?

I get 156 returned.

My question is: is there some feature of OpenGL that might alter the value in the color buffer in this way?
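Roughly the sequence I mean (a sketch only; context and window setup are omitted, and the helper name is made up):

#include <GL/gl.h>

/* Clear the buffer and read back one pixel as unsigned bytes. */
void clear_and_read_one_pixel(void)
{
    GLubyte pixel[4];

    glClearColor(0.6f, 1.0f, 1.0f, 1.0f);    /* set the clear color state */
    glClear(GL_COLOR_BUFFER_BIT);             /* write it into the color buffer */

    /* read the bottom-left pixel back as GL_UNSIGNED_BYTE */
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);

    /* pixel[0] should be 0.6 * 255 = 153, but 156 comes back */
}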

If you run in 16-bit color mode (i.e. windowed mode on a 16-bit desktop) you cannot express all 256 steps, and OpenGL will try to choose the nearest available color.

hmmm…

2^8 = 256, which is different from 255…
then 0.6 * 256 = 153.6.

You may also have some GL processing and approximations that move the result a bit away from the exact value.

Mazy: switching to 32-bit worked… thanks…

But I don't really understand fully what is going on.

The spec says that when you get a color back, it converts the internal floating-point representation using a function that depends on the type you are asking for…

For unsigned byte, it is

c = (2^8 - 1) * f

where f is the floating-point value… that is why I multiplied by 255 and not 256.
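As a quick sanity check of that formula (the rounding to nearest is my own assumption):

#include <stdio.h>

int main(void)
{
    float f = 0.6f;
    int c = (int)(255.0f * f + 0.5f);   /* (2^8 - 1) * f, rounded to nearest */
    printf("%d\n", c);                  /* prints 153 */
    return 0;
}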

So I am wondering: at what point does the value get assigned to the nearest available color, and how can I account for this?

The framebuffer has the color depth you gave it when you created the window. This depth cannot be higher than that of your desktop. So in your case, with your 16-bit color desktop, you got a 16-bit framebuffer (I guess you are using cut-and-pasted code or some utility library for window setup).

When the clear is performed (not glClearColor itself, which only sets state), bits are actually written into the framebuffer. The GL must convert the floating-point values from the current clear-color state (0.6, 1.0, 1.0 in your case) so that they can be written into the framebuffer (which holds integer values).

When you read back a color from the framebuffer, you get the converted value. It is an integer value, not a float.

Your quote from the spec deals with getting the integer conversion of some floating point value from the current state.

If you’re using 16-bit color depth, you generally have 5 bits per channel, plus maybe an extra bit for green (RGB565) or an alpha bit (RGBA5551). But since your 0.6 in glClearColor is for the red channel, both pixel formats have 5 bits for red.

OK, with 5 bits for red, you have 32 different levels of red. 32 * 0.6 = 19.2, but since we’re dealing with integers, 19 (using nearest rounding) is what will be written to the frame buffer. When reading back the result, OpenGL reads 19, and the maximum value is 31, since we have 32 levels starting with 0. So the corresponding floating-point value would be 19/31 = approx 0.613. Converting this value to unsigned char, with maximum value 255, gives us the value returned by glReadPixels: 255 * 0.613 = approx 156.3, which is rounded to 156. This is where your 156 comes from.

This is the effect of limited precision in the frame buffer. You provide a number with high precision, but the limited precision in the frame buffer will round this number to the precision available.
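Here is a little sketch of that round trip (I am assuming nearest rounding at both steps, and scaling by 2^n - 1 as the spec’s formula does, so 31 * 0.6 = 18.6 also rounds to 19):

#include <stdio.h>

int main(void)
{
    float f = 0.6f;

    /* write: quantize to 5 bits of red (levels 0..31) */
    int stored = (int)(f * 31.0f + 0.5f);            /* 18.6 -> 19 */

    /* read back: expand to float, then convert to unsigned byte (max 255) */
    float readback = (float)stored / 31.0f;          /* approx 0.613 */
    int byte = (int)(readback * 255.0f + 0.5f);      /* approx 156.3 -> 156 */

    printf("%d %f %d\n", stored, readback, byte);    /* 19 0.612903 156 */
    return 0;
}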


Thanks for the quick responses…very helpful.

So as I understand it, one way to get the correct value would be to convert the value I sent to glClearColor to the nearest representable color, and use that color to compare with what is read back from the framebuffer.
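Something like this is what I have in mind, a sketch only, assuming the driver simply rounds to the nearest level (the function name is made up):

#include <GL/gl.h>

/* Predict what glReadPixels should return for the red channel,
   given the clear color and the framebuffer's red bit depth. */
GLubyte expected_red(float clear_red)
{
    GLint red_bits;
    glGetIntegerv(GL_RED_BITS, &red_bits);          /* e.g. 5 on a 16-bit buffer */

    float max_val = (float)((1 << red_bits) - 1);   /* 31 for 5 bits */
    float stored = (float)(int)(clear_red * max_val + 0.5f);
    float readback = stored / max_val;

    return (GLubyte)(readback * 255.0f + 0.5f);     /* 156 for 0.6 with 5 bits */
}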

I will try this out.

thanks again.