Alright, I’m using the back buffer method to select objects (assigning each object a color and using glReadPixels to get the color, and thus the ID, of the object). To get around the problem of “non-exact” colors (1.0009 instead of 1.0, etc.) I am using unsigned bytes. This code works perfectly when the color depth is set to 32 bit. When I set it to 16 bit, however, I have problems. Here is the code… then I’ll explain:
//Set up scene
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glDisable(GL_TEXTURE_2D);
glDisable(GL_CULL_FACE);
//"1" is the ID of the button
glColor3ub(1, 0, 0);
//Draw the button
glCallList(buttonList);
//Get the red color, thus finding the ID
GLubyte tempColor;
glReadPixels(X, Y, 1, 1, GL_RED, GL_UNSIGNED_BYTE, &tempColor);
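In case it clarifies the approach, here’s a rough sketch of the ID-to-color mapping I mean (SetPickColor and ReadPickID are just made-up names for illustration, and it assumes an ID fits in 24 bits – this isn’t my actual code):

//Rough sketch only
void SetPickColor(GLuint objectID)
{
    //Spread the ID across the three channels, 8 bits each
    glColor3ub((GLubyte)(objectID & 0xFF),
               (GLubyte)((objectID >> 8) & 0xFF),
               (GLubyte)((objectID >> 16) & 0xFF));
}

GLuint ReadPickID(int x, int y)
{
    GLubyte pixel[3];
    //Read back the single pixel under the cursor
    glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
    //Reassemble the ID from the three channels
    return (GLuint)pixel[0] | ((GLuint)pixel[1] << 8) | ((GLuint)pixel[2] << 16);
}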
Right now, I’m printing tempColor to the screen for debugging purposes. In 32 bit depth it prints the correct value, anywhere from 0 to 255… exactly what it’s supposed to do.
In 16 bit depth, however, it will only print the value if it is a multiple of 8. I.e. if I use glColor3ub(16, 0, 0)… tempColor will be 16. If I use any other number, it doesn’t work.
The funny thing is, I think this was working at one point, but I can’t say for sure… it’s been a while since I’ve messed with this part of the code. Is there something special I should be doing for 16 bit depth? Is there a setting somewhere that may be affecting this? I’m at my wit’s end here. Any help would be much appreciated.
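For what it’s worth, I figure I can at least check how many bits per channel the framebuffer is really giving me with something like this (just a quick sketch using glGetIntegerv; I haven’t actually wired it into my code yet):

GLint redBits, greenBits, blueBits;
glGetIntegerv(GL_RED_BITS, &redBits);
glGetIntegerv(GL_GREEN_BITS, &greenBits);
glGetIntegerv(GL_BLUE_BITS, &blueBits);
printf("bits per channel: R=%d G=%d B=%d\n", redBits, greenBits, blueBits);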