glReadPixels: more precision than GL_RED_BITS

Hi,

I drew some pixels with the color
glColor3ui(67108864, 0, 0)

GL_RED_BITS is 8.

When I call ::glReadPixels(MouseX, MouseY, 1, 1, GL_RGB, GL_UNSIGNED_INT, &Rgb) on that same pixel, I get:

(67372036, 0, 0)

Notes:

  1. 67108864 == 1 << 26, which I believe should be perfectly representable with my 8 bits of red precision.
  2. glReadPixels is reading the same pixel I drew – this is not a Y = ScreenHeight - MouseY issue
  3. I disable these when I’m drawing: GL_BLEND, GL_DITHER, GL_FOG, GL_LIGHTING, GL_TEXTURE_1D, GL_TEXTURE_2D, and I call glShadeModel(GL_FLAT)
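
For reference, here is a stripped-down sketch of the draw/read path. The single GL_POINTS draw and the function name are simplifications standing in for my real drawing code; the state changes and the read-back are exactly as described above:

    #include <GL/gl.h>

    static void DrawAndReadPixel(int MouseX, int MouseY)
    {
        GLuint Rgb[3]; /* one RGB pixel at 32 bits per channel */

        /* Everything that could alter the color is disabled. */
        glDisable(GL_BLEND);
        glDisable(GL_DITHER);
        glDisable(GL_FOG);
        glDisable(GL_LIGHTING);
        glDisable(GL_TEXTURE_1D);
        glDisable(GL_TEXTURE_2D);
        glShadeModel(GL_FLAT);

        glColor3ui(67108864, 0, 0); /* 1 << 26 */
        glBegin(GL_POINTS);
        glVertex2i(MouseX, MouseY);
        glEnd();

        /* Read the same pixel back; Rgb[0] comes back as 67372036. */
        glReadPixels(MouseX, MouseY, 1, 1, GL_RGB, GL_UNSIGNED_INT, Rgb);
    }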

Questions:

  1. Why isn’t glReadPixels returning the same color I’m drawing?
  2. How is it that glReadPixels reports a 27-bit red color (67372036) when it supposedly supports only 8 bits of red color?

Thank you,

Chris

Let’s convert 16,777,216, which is a 32-bit value, to an 8-bit value:

16,777,216 / (2^32 - 1) * 255 ≈ 0.996, so you will get 0 or 1 depending on how the GPU rounds.

Convert that back to a 32-bit value.
If 0, you get 0.

If 1, you get
1 / 255 * (2^32 - 1) = 16,843,009
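
If it helps, here is that round trip as plain C arithmetic. The round() is my guess at what the hardware does; an implementation that truncates would give you 0 instead of 1:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int in = 16777216u; /* the color handed to glColor3ui */

        /* 32-bit normalized down to 8-bit normalized */
        unsigned int stored = (unsigned int)round(in / 4294967295.0 * 255.0);

        /* 8-bit normalized back up to 32-bit normalized; note that
           (2^32 - 1) / (2^8 - 1) = 16843009 exactly */
        unsigned int out = (unsigned int)round(stored / 255.0 * 4294967295.0);

        printf("%u -> %u -> %u\n", in, stored, out); /* 16777216 -> 1 -> 16843009 */
        return 0;
    }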

  1. Why isn’t glReadPixels returning the same color I’m drawing?
    Because you aren’t reading from the right place.
    Maybe you need
    glReadPixels(MouseX, WindowHeight - MouseY - 1, 1, 1, GL_RGB, GL_UNSIGNED_INT, &Rgb)

You reported getting 138,547,254, which isn’t even close to either of those.

  2. How is it that ReadPixels reports a 30 bit red color (138547254) when it supposedly supports only 8 bits of red color?

Your 32-bit color was converted down to 8 bits when it was written to the framebuffer, and glReadPixels converts those 8 bits back up to 32 bits on the CPU when you read them.

First, I edited my post with some different values which I thought better reflected the problem. I’m sorry about that.

Second, I don’t believe you are correct about this:

16,777,216 / (2^32 - 1) * 255 ≈ 0.996, so you will get 0 or 1 depending on how the GPU rounds.

glReadPixels(GL_UNSIGNED_INT) does not place the color value in the 8 LSBs as you propose; it places it in the 8 MSBs.

If you draw the color 16,777,216 (unsigned int), glReadPixels(GL_UNSIGNED_INT) should return 16,777,216. This is because 16,777,216 can be represented perfectly using the 8 MSBs of a 32-bit unsigned int.

16,777,217 (unsigned int), on the other hand, cannot be represented using only the 8 MSBs, because its low bits are set. When I glReadPixels(GL_UNSIGNED_INT) this value, I would not expect to get back 16,777,217 – rather, I would get something truncated so that the 24 LSBs are zero, which in this case would be 16,777,216.
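
To make the disagreement concrete, this is the conversion I am assuming glReadPixels(GL_UNSIGNED_INT) performs. It is my mental model, not something from the spec:

    /* My assumed model: the 8-bit framebuffer value lives in the 8 MSBs. */
    unsigned int To8Bits (unsigned int c) { return c >> 24; }
    unsigned int To32Bits(unsigned int c) { return c << 24; }

    /* To32Bits(To8Bits(16777216)) == 16777216: 2^24 survives the round trip. */
    /* To32Bits(To8Bits(16777217)) == 16777216: the 24 LSBs are truncated.    */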

Because you aren’t reading from the right place.

I can understand why you would think this is the case, especially if you believe that glReadPixels(GL_UNSIGNED_INT) returns numbers that use the 8 LSBs of the unsigned int, but I assure you that I am reading the correct pixel. The large absolute difference between the two numbers is because the color lives in the 8 MSBs – a small change there makes a large difference in the final value.

Thank you for your help in solving this!

Chris

glReadPixels(GL_UNSIGNED_INT) does not place the color value in the 8 LSBs as you propose; it places it in the 8 MSBs.

It does neither.

I’ll assume that you are using the default framebuffer (you never say). Therefore, your framebuffer has no more than 8 bits of red, and is an unsigned normalized format.

When you send an unsigned integer color to OpenGL via glColor*, OpenGL interprets this as unsigned normalized. That is, you want to set the color to a value between 0 and 1, and you are using integer values to represent that. The maximum integer value (2^32 - 1 for an unsigned int) represents the value 1, and the integer 0 represents the value 0.

Therefore, the first thing OpenGL does is convert the integer you give it to a floating-point value. That value will be:

(67108864/(2^32-1)) ~= 0.015625.

Now, when OpenGL goes to write this color to the framebuffer (after whatever rasterization operations have happened), OpenGL realizes that the framebuffer’s values are stored as unsigned normalized 8-bit integers. Thus, it must first clamp and convert the floating-point value to an unsigned normalized 8-bit value. This is done simply by multiplying the float by 2^8 - 1 and rounding to the nearest integer:

round(0.015625 * (2^8 - 1)) = round(3.984) = 4.

When you go to read that value, first OpenGL must convert the 8-bit unsigned normalized value back into a floating-point value. This is done as above:

(4 / (2^8 - 1)) ~= 0.01568627.

After that, it must be converted into your destination format. Since your destination format was a normalized unsigned 32-bit integer, it must convert this float into a 32-bit unsigned integer:

(4 / (2^8 - 1)) * (2^32 - 1) = 4 * 16,843,009 = 67372036
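
If you want to check the arithmetic, the whole chain fits in a few lines of C. Round-to-nearest is assumed here; an implementation is free to round slightly differently:

    #include <math.h>

    /* Round-trip one 32-bit unsigned normalized channel through an
       8-bit framebuffer, exactly as described above. */
    unsigned int RoundTrip(unsigned int c)
    {
        unsigned int fb = (unsigned int)round(c / 4294967295.0 * 255.0); /* 67108864 -> 4 */
        return (unsigned int)round(fb / 255.0 * 4294967295.0);           /* 4 -> 67372036 */
    }

    /* RoundTrip(67108864) == 67372036: exactly what glReadPixels gave you. */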

It’s a simple matter of lost precision. Just because you send a 32-bit number doesn’t mean it stays a 32-bit number, especially when your framebuffer tells you that you have only 8 bits of precision.

If you are serious about getting accurate integer values in and out of images, you must create integer image formats, with the proper precision, and use FBOs to render to them. You could use GL_R32UI with a renderbuffer, for example.
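
Here is a sketch of that setup, assuming a GL 3.0+ context; width, height, x, and y stand in for your own values, and error checking is omitted. Note that you can’t write to an integer color buffer through glColor; you need a fragment shader whose output is a uint:

    /* A 32-bit unsigned-integer color buffer: no normalization, no precision loss. */
    GLuint fbo, rbo;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &rbo);

    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, rbo);

    /* ... render with a fragment shader that writes a uint ... */

    /* Integer formats are read with GL_RED_INTEGER, not GL_RED;
       the value comes back exactly as written. */
    GLuint value;
    glReadPixels(x, y, 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &value);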

Alfonse:

Thank you for your explanation, it makes perfect sense.

Chris