glReadPixels type

I would expect the following to give me same result

float pixelf[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, pixelf);

GLuint pixelui[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_INT, pixelui);

unsigned char pixelub[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixelub);

cout << "r: " << pixelf[0] << " g: " << pixelf[1] << " b: " << pixelf[2] << endl;

cout << "r: " << pixelui[0] << " g: " << pixelui[1] << " b: " << pixelui[2] << endl;

cout << "r: " << pixelub[0] << " g: " << pixelub[1] << " b: " << pixelub[2] << endl;

however, my result is this:

r: 0 g: 1 b: 0
r: 0 g: 4294967295 b: 0
r: g: � b:

The pixel is green, so the float result is correct (0, 1, 0). Why are the other results so crazy!?

Thanks,
Dave

They are not crazy. Floating-point values are normalized to the range [0, 1]. For unsigned int the range is [0, 2^32 - 1], and 2^32 - 1 = 4294967295. Finally, for unsigned byte it is [0, 255]; I suspect the glyph � corresponds to 2^8 - 1 = 255.

Edit: After testing, the character printed does indeed correspond to the value 255.

Colors are typically normalized: 0 to 1 for floats and doubles,
and 0 to the maximum representable value for integer formats.
Read the OpenGL specification for details.
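
As a rough illustration of that scaling (a sketch only, ignoring the exact rounding rules the spec defines), the same fully saturated component maps onto each type's full range:

// Illustrative only: one fully saturated color component expressed in each type.
float   green   = 1.0f;
GLuint  asUint  = (GLuint)(green * 4294967295.0);  // 2^32 - 1 = 4294967295
GLubyte asUbyte = (GLubyte)(green * 255.0);        // 2^8  - 1 = 255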

So the results are perfectly logical:
1.0f is the largest normalized value for a float.
4294967295 is the largest value for a 32-bit unsigned integer.
For unsigned bytes, you are seeing the values printed as ASCII characters instead of numbers :wink:
Try the good old:
printf("r: %u g: %u b: %u\n", pixelub[0], pixelub[1], pixelub[2]);

you should see 255.
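
If you prefer to stick with cout, casting the bytes to a wider integer type also works (a quick sketch, using the same pixelub array as above):

cout << "r: " << (unsigned)pixelub[0]
     << " g: " << (unsigned)pixelub[1]
     << " b: " << (unsigned)pixelub[2] << endl;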

Ah, got it. I thought cout would have converted the number to something readable. Also, is a char the same as a byte? I.e. I declared an unsigned char array and told OpenGL to give me back unsigned bytes - is that OK?

Thanks for the help!

Dave

In C/C++ on an x86 CPU, a char is 8 bits, i.e. one byte. At the start of the gl.h header I have this:


typedef unsigned int GLenum;
typedef unsigned char GLboolean;
typedef unsigned int GLbitfield;
typedef signed char GLbyte;
typedef short GLshort;
typedef int GLint;
typedef int GLsizei;
typedef unsigned char GLubyte;
typedef unsigned short GLushort;
typedef unsigned int GLuint;
typedef float GLfloat;
typedef float GLclampf;
typedef double GLdouble;
typedef double GLclampd;
typedef void GLvoid;

When you pass data to or get data from OpenGL functions, it is better to use the predefined typedefs (GLubyte, GLint, etc.). A different processor architecture may define those typedefs differently, so using them makes the application easier to port.

Also, is a char the same as a byte? I.e. I declared an unsigned char array and told OpenGL to give me back unsigned bytes - is that OK?

So to answer your question: on an x86 CPU, yes; on another architecture, maybe not.
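
If you want to be certain on a given platform, a compile-time check is cheap (C++11 static_assert; just a sketch, assuming gl.h is included for GLubyte):

#include <climits>

// Sanity checks: a char is 8 bits here, and GLubyte has the same size as unsigned char.
static_assert(CHAR_BIT == 8, "char is not 8 bits on this platform");
static_assert(sizeof(GLubyte) == sizeof(unsigned char), "GLubyte does not match unsigned char");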

float pixelf[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, pixelf);

GLuint pixelui[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_INT, pixelui);

unsigned char pixelub[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixelub);

That's why you should use the GL types 24/7. OpenGL didn't typedef them just for fun.
float - GLfloat
GLuint - ok
unsigned char - GLubyte
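
For example, the three reads from the original post, rewritten with the GL types (a minimal sketch; x and y are assumed to be valid window coordinates as before):

GLfloat pixelf[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, pixelf);

GLuint pixelui[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_INT, pixelui);

GLubyte pixelub[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixelub);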

got it - thanks everyone!