Reading pixels from the back buffer

I am attempting to convert 32-bit pointers into RGBA colors, draw objects to the back buffer using the converted pointer as the color, retrieve the RGBA value back from the back buffer, and convert it back to the pointer.

I am converting the 32-bit pointer into an unsigned-byte RGBA array, and I have been inspecting the values as I set the color. For example, I have a pointer that converts to: r = 88, g = 255, b = 201, a = 16. I set this as the color and draw some objects. When I retrieve an RGBA value from the back buffer, I get back correct results for the r, g and b values, but the alpha value keeps coming back as 255. Does anyone know what causes this?
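
Roughly, the packing and unpacking I have in mind looks like the following sketch (the byte order is my own choice, and it assumes pointers fit in 32 bits):

    #include <windows.h>
    #include <GL/gl.h>

    /* Sketch: pack a 32-bit pointer value into an unsigned-byte RGBA array and back. */
    static void PointerToRGBA(const void *p, GLubyte rgba[4])
    {
        unsigned long v = (unsigned long)p;      /* assumes 32-bit pointers */
        rgba[0] = (GLubyte)( v        & 0xFF);   /* red   */
        rgba[1] = (GLubyte)((v >>  8) & 0xFF);   /* green */
        rgba[2] = (GLubyte)((v >> 16) & 0xFF);   /* blue  */
        rgba[3] = (GLubyte)((v >> 24) & 0xFF);   /* alpha */
    }

    static void *RGBAToPointer(const GLubyte rgba[4])
    {
        unsigned long v =  (unsigned long)rgba[0]
                        | ((unsigned long)rgba[1] <<  8)
                        | ((unsigned long)rgba[2] << 16)
                        | ((unsigned long)rgba[3] << 24);
        return (void *)v;
    }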

Note that the above was done without GL_BLEND enabled. I did another experiment where I enabled GL_BLEND and set the blend function as follows:
glBlendFunc(GL_ONE, GL_ZERO)
in order to guarantee that my source color is what gets drawn to the back buffer. This, too, results in correct r, g and b values and 255 for the alpha. Any ideas?
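
In other words, the state for that second experiment was roughly the following (using the example color from above):

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ZERO);   /* destination = source color, destination ignored */
    glColor4ub(88, 255, 201, 16);   /* the encoded-pointer color */
    /* ... draw the objects ... */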

Thanks!

Sounds like your framebuffer is only 24-bit (i.e. contains R, G, B but not A), so OpenGL is giving you back a default alpha. Have you checked this?
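
You can query the context directly for how many alpha bitplanes the color buffer actually has, something like:

    GLint alphaBits = 0;
    glGetIntegerv(GL_ALPHA_BITS, &alphaBits);
    /* 0 means no destination alpha planes; reads will then return 255 (1.0) for alpha */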

I checked that. The pixel format descriptor did have a 24-bit color depth. I changed this to 32-bit color depth (and checked the pfd that I set using SetPixelFormat) and am still getting 255 back as the alpha. Here’s the pfd that I’m using:

    PIXELFORMATDESCRIPTOR pfd = {
        sizeof(PIXELFORMATDESCRIPTOR),  // size of this pfd
        1,                              // version number
        PFD_DRAW_TO_WINDOW |            // support window
        PFD_SUPPORT_OPENGL |            // support OpenGL
        PFD_DOUBLEBUFFER,               // double buffered
        PFD_TYPE_RGBA,                  // RGBA type
        32,                             // 32-bit color depth
        0, 0, 0, 0, 0, 0,               // color bits ignored
        0,                              // no alpha buffer
        0,                              // shift bit ignored
        0,                              // no accumulation buffer
        0, 0, 0, 0,                     // accum bits ignored
        32,                             // 32-bit z-buffer
        0,                              // no stencil buffer
        0,                              // no auxiliary buffer
        PFD_MAIN_PLANE,                 // main layer
        0,                              // reserved
        0, 0, 0                         // layer masks ignored
    };

Is there something wrong with the pfd?

I took a look at the documentation on the PFD on Windows. It says that the color-buffer depth should be set to the RGB depth, not including the alpha bitplanes. This leads me to believe that 24 is the correct value for the pfd.cColorBits member.

This is a little confusing, because the pixel type specified by my pfd.iPixelType member is PFD_TYPE_RGBA. I had another application where I used a pfd with 24-bit color depth, and that application used alpha blending successfully, which means the alpha values must be stored somewhere. I’m not sure where to go with this. If you have any info, I would appreciate it.

Originally posted by ben:
0, // no alpha buffer

Set this to 8, not 0.

Originally posted by Mike F:
Set this to 8, not 0.

Actually, I have already tried this. The pixel format that I request does have the alpha bits set to 8, however the pixel format that I get from the index returned by ChoosePixelFormat has the alpha buffer bits set to zero. I believe this means that my hardware does not support an alpha band.
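
For reference, the check looks roughly like this (sketch; hdc is my window’s device context and pfd is the descriptor above with cAlphaBits changed to 8):

    PIXELFORMATDESCRIPTOR actual;
    int iFormat = ChoosePixelFormat(hdc, &pfd);
    DescribePixelFormat(hdc, iFormat, sizeof(actual), &actual);
    /* actual.cAlphaBits comes back as 0 here, i.e. no destination alpha planes */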

I put some debug code in my app to check whether I am using the software-implemented OpenGL or if I am getting hardware acceleration. As it turns out, I am in software-implementation-mode (my graphics card has no acceleration). This means that I am stuck with the Windows implementation, which (as the MSDN says) does not support alpha bits.
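
(The debug check is just the implementation strings, along these lines:)

    const GLubyte *vendor   = glGetString(GL_VENDOR);    /* e.g. "Microsoft Corporation" */
    const GLubyte *renderer = glGetString(GL_RENDERER);  /* e.g. "GDI Generic" = software path */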

This is confusing because I have another app that I run (with the same PFD) in which alpha-blending works fine. This leads me to believe that the software implementation is handling the alpha blending, but I’m not sure where it keeps the alpha info. Does anyone know what is happening?

Thanks.

I am curious, sir: what are you converting pointers to RGBA and back for?
Is this a debug aid?

Dolo//\ightY

I’m working on a couple of methods of object selection. One method recommended by the OpenGL Red Book is to encode a pointer to each object as an RGB or RGBA color and use that color to draw the object into the back buffer, without swapping it to the front buffer. Then, when the user clicks the mouse on a point, you can read a rectangular array of pixels from the back buffer (which now contains the encoded pointers), decode the RGB (or RGBA) values back to pointers, and you have a handle to the selected object.

Currently my encoding converts pointers to RGB values, but I would like to use RGBA values in order to decrease the likelihood that I run out of unique identifiers.
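
For anyone curious, the selection pass I’m describing is roughly the sketch below (it reuses the PointerToRGBA/RGBAToPointer helpers from my first post; objects[], DrawObject(), and the mouse/viewport variables are placeholders):

    /* Selection pass: lighting and dithering must be off so exact colors are written. */
    glDisable(GL_LIGHTING);
    glDisable(GL_DITHER);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < numObjects; ++i) {
        GLubyte rgba[4];
        PointerToRGBA(objects[i], rgba);   /* encode the object pointer as a color */
        glColor4ubv(rgba);
        DrawObject(objects[i]);            /* placeholder draw call */
    }
    /* No SwapBuffers() here; the encoded image stays in the back buffer. */

    /* On a mouse click at window coordinates (mouseX, mouseY): */
    GLubyte pick[4];
    glReadBuffer(GL_BACK);
    glReadPixels(mouseX, viewportHeight - 1 - mouseY, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, pick);
    void *selected = RGBAToPointer(pick);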

I’ve implemented the Red Book method; be warned that I couldn’t get it to work outside of True Color mode on my machine.

I suspect you might not be reading the correct type back in the glReadPixels call. Make sure you pass GL_RGBA to glReadPixels, and make sure you are using a large enough storage buffer: 4 bytes per pixel instead of 3 in the case of RGBA.
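
Something along these lines (x and y being the pixel you want to read):

    GLubyte pixel[4];                      /* 4 bytes per pixel for GL_RGBA */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* avoid row-padding surprises on wider reads */
    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);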

If you have any problems email me directly (openglmodeller@yahoo.com)

fs http://fshana.tripod.com