Why does glReadPixels always read the alpha value as 255?

When I do
glReadPixels(0, 0, nPicWidth, nPicHeight, GL_RGBA, GL_UNSIGNED_BYTE, pData);

The alpha components returned in pData are always 255. Why?

I even clear m_pData to all zeros and call
glDrawPixels(nPicWidth, nPicHeight, GL_RGBA, GL_UNSIGNED_BYTE, m_pData);
before glReadPixels, but the result is still 255.

And I checked my PIXELFORMATDESCRIPTOR: PFD_TYPE_RGBA, 32.
I think it is correct. I even tried glClearColor and glEnable(GL_ALPHA_TEST), but the result still drives me crazy.

Originally posted by linghuye:
[b]The alpha components returned in pData are always 255, why? [...][/b]
Check your cAlphaBits; the cColorBits value tells you nothing about alpha:

typedef struct tagPIXELFORMATDESCRIPTOR { // pfd
    WORD  nSize;
    WORD  nVersion;
    DWORD dwFlags;
    BYTE  iPixelType;
    BYTE  cColorBits;
    BYTE  cRedBits;
    BYTE  cRedShift;
    BYTE  cGreenBits;
    BYTE  cGreenShift;
    BYTE  cBlueBits;
    BYTE  cBlueShift;
    BYTE  cAlphaBits;
    BYTE  cAlphaShift;
    BYTE  cAccumBits;
    BYTE  cAccumRedBits;
    BYTE  cAccumGreenBits;
    BYTE  cAccumBlueBits;
    BYTE  cAccumAlphaBits;
    BYTE  cDepthBits;
    BYTE  cStencilBits;
    BYTE  cAuxBuffers;
    BYTE  iLayerType;
    BYTE  bReserved;
    DWORD dwLayerMask;
    DWORD dwVisibleMask;
    DWORD dwDamageMask;
} PIXELFORMATDESCRIPTOR;
cColorBits
Specifies the number of color bitplanes in each color buffer. For RGBA pixel types, it is the size of the color buffer, excluding the alpha bitplanes. For color-index pixels, it is the size of the color-index buffer.
cAlphaBits
Specifies the number of alpha bitplanes in each RGBA color buffer. Alpha bitplanes are not supported.
cAlphaShift
Specifies the shift count for alpha bitplanes in each RGBA color buffer. Alpha bitplanes are not supported.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/opengl/ntopnglr_73jm.asp

And never mind the "alpha bitplanes are not supported" part; it refers to the software implementation, which doesn't have retained alpha.
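
For reference, requesting destination alpha looks roughly like this (untested sketch; hDC is your window DC):

PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd, sizeof(pfd));
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cAlphaBits = 8;     /* the important part: ask for alpha bitplanes */
pfd.cDepthBits = 24;

int format = ChoosePixelFormat(hDC, &pfd);
DescribePixelFormat(hDC, format, sizeof(pfd), &pfd);
/* if pfd.cAlphaBits is still 0 here, the driver gave you no destination alpha */
SetPixelFormat(hDC, format, &pfd);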

I'm probably just beating a dead horse here, but alpha testing has everything to do with writes TO the framebuffer and nothing to do with reads FROM the framebuffer.

To evanGLizr:
Thank you very much for your help, I set cAlphaBits to 8, and it works well now.

To T101:
Thank you for your tip also.

Hi All,

Is it a problem if we have cColorBits = 32 and cAlphaBits = 8? We asked for 24 and 8, but ChoosePixelFormat always returns 32 and 8.

We can't get the correct alpha values either.

Thanks,

Alberto

There’s nothing wrong with setting cColorBits = 32.

Set

pfd.cRedBits   = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits  = 8;
pfd.cAlphaBits = 8;

and you should have a BGRA8 backbuffer.

Try glDrawPixels and then glReadPixels.
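
Something like this makes a quick test (rough sketch; assumes identity modelview/projection and a viewport starting at (0,0)):

GLubyte src[4] = { 10, 20, 30, 40 };  /* one RGBA pixel, alpha = 40 */
GLubyte dst[4] = { 0, 0, 0, 0 };

glDisable(GL_BLEND);
glDisable(GL_ALPHA_TEST);
glRasterPos2f(-1.0f, -1.0f);          /* lower-left corner of the viewport = window (0,0) */
glDrawPixels(1, 1, GL_RGBA, GL_UNSIGNED_BYTE, src);
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, dst);
/* with 8 alpha bits dst[3] should come back as 40; without alpha planes it reads 255 */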

Hi V-man,

We are currently using:

pfd.dwFlags = (int)(pixelFormat.DRAW_TO_WINDOW | pixelFormat.SUPPORT_OPENGL | pixelFormat.DOUBLEBUFFER);
pfd.PixelType = pixelTypes.TYPE_RGBA;

pfd.ColorBits = 24;
pfd.DepthBits = 24;
pfd.StencilBits = 8;
pfd.AlphaBits = 8;

The chosen pixelFormat is the following:

+----+--------+--------+--------+--------+--------+--------+
| ID | Color  | Alpha  | Accum  | Depth  | Stencil| AuxBuff|
+----+--------+--------+--------+--------+--------+--------+
|  2 |   32   |   8    |   0    |   24   |   8    |   0    |

And we get a gray background after using glClearColor(0,0,0,0).

We tried also:

pfd.RedBits = 8;
pfd.GreenBits = 8;
pfd.BlueBits = 8;
pfd.AlphaBits = 8;

and the result is the same…

Any other idea? We are only trying to discover whether the problem is in the chosen PixelFormat or on the glReadPixels side.

Thanks,

Alberto

You wrote “pfd.ColorBits = 24;”.

Did you try it with "pfd.ColorBits = 32;", too?

Hi Hampel,

We already get 32 bits for ColorBits, so why should we ask for it explicitly?

Thanks,

Alberto

And we get a gray background after using the glClearColor(0,0,0,0)
That already says something is wrong. It depends on your GL skill level: either you know you are doing things right and get good results on some machines, but have this problem on one machine.
Or you are messing up and you should post a 100 line GLUT program here.

The pixelformat and your Windows code are not that important here, since you say you have chosen a pixelformat with 8 bits of alpha.

Hi V-man,

My original question was:

To get correct alpha values from glReadPixels, is it mandatory to have cColorBits = 24 and cAlphaBits = 8, or does it also work with cColorBits = 32 and cAlphaBits = 8?

Nobody answered this question clearly…

Thanks,

Alberto

If you are talking about the call to ChoosePixelFormat, then it doesn’t matter since that function picks the closest match.
Call DescribePixelFormat to see if you get an alpha.
If you have alpha, DescribePixelFormat always reports colorBits = 32.

You can even call glGetIntegerv(GL_ALPHA_BITS, …) to see what it returns.
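
Roughly like this (sketch; hDC is your window DC and a GL context must be current for the glGet):

PIXELFORMATDESCRIPTOR pfd;
int format = GetPixelFormat(hDC);
DescribePixelFormat(hDC, format, sizeof(pfd), &pfd);
printf("cColorBits = %d, cAlphaBits = %d\n", pfd.cColorBits, pfd.cAlphaBits);

GLint alphaBits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits);
printf("GL_ALPHA_BITS = %d\n", alphaBits);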

Wonderful,

So the chosen pixel format is fine. Now we need to check why glReadPixels does not read the alpha values correctly.

What about the background: if the clear color is (0,0,0,0), how can the BlendFunc multiply it?

Something times zero always equals zero.

Should we maybe change the blending function?

Thanks again,

Alberto

Is that an advanced OpenGL question?

How should someone answer that if you do not describe exactly what you are doing?

Read the glBlendFunc manual for how writing to destination alpha depends on the different blending modes.
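
For example, with the usual glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), the framebuffer alpha becomes As*As + (1-As)*Ad, which is probably not the coverage value you expect. If your driver offers GL 1.4 or EXT_blend_func_separate, you can choose a separate factor pair for alpha (just a sketch of one option, not necessarily what you want):

glEnable(GL_BLEND);
/* blend the colors as usual, but accumulate alpha as As + (1 - As) * Ad;
   on Windows this entry point must be fetched with wglGetProcAddress */
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE, GL_ONE_MINUS_SRC_ALPHA);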

Just do as V-man said:
“Or you are messing up and you should post a 100 line GLUT program here.”

Recommended read: http://catb.org/~esr/faqs/smart-questions.html

Relic, V-man,

you are right, I apologize.

Everything started from a small question that became a complex issue.

Here is the code (I omitted lighting to keep the code clearer)

gl.ClearColor(0, 0, 0, 0); 

gl.Viewport(0, 0, width, height);      
gl.Enable(gl.DEPTH_TEST);
gl.Clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

gl.MatrixMode(gl.PROJECTION);
gl.LoadIdentity();

glu.Perspective(fieldOfViewAngle, viewportAspect, zNearPerspective, zFarPerspective);

gl.MatrixMode(gl.MODELVIEW);
gl.LoadIdentity();

gl.Scaled(scaleToOne, scaleToOne, scaleToOne);

gl.Rotated(Utility.RadToDeg(rotAngle), rotAxis.x, rotAxis.y, rotAxis.z);

gl.Rotated(-90, 1.0f, 0.0f, 0.0f);

DrawModel();

gl.Flush();

// renderingContext.SwapBuffers();

BitmapData bitmapData = bitmap.LockBits(rectangle, ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

gl.ReadBuffer(gl.BACK);

gl.ReadPixels(0, 0, Width, Height, gl.BGRA, gl.UNSIGNED_BYTE, bitmapData.Scan0);

The result of this code is a perfect-looking bitmap of the OpenGL scene, but with a gray background instead of a transparent one.

What do you think?

Thanks so much again,

Alberto

gl.ReadBuffer(gl.BACK);
would throw an invalid-value error on a single-buffered pixelformat and be ignored. It doesn't matter here.

gl.Viewport(0, 0, width, height);
gl.ReadPixels(0, 0, Width, Height, gl.BGRA, gl.UNSIGNED_BYTE, bitmapData.Scan0);
Is width == Width and height == Height?

Otherwise looks ok.

The result of this code is a perfect looking bitmap of the OpenGL scene with a gray background instead of a transparent one.
How do you look at the bitmap image to say it returns a grey background?
In a Microsoft viewer application?

Some of these don't handle 32-bit images the way OpenGL would. Transparency actually means something to them, and you'd look through the transparent pixels, whatever that means for the viewer, onto the Windows default workspace color, which might be your grey.

Does that final viewer image change if you invert the alpha in the clear: glClearColor(0,0,0,1)?

That would then indicate a mismatch in the meaning of the alpha data between OpenGL and your viewer.

Hi Relic,

I use transparent 32-bit images daily and open them with a pro app like Photoshop. The probability that I am simply not seeing the transparency is low.

The result I get using glClearColor(0,0,0,1) instead is interesting: the background is now black, and the transparent shadow of my model has a different color (a dark gray). With glClearColor(0,0,0,0) it was not visible at all.

A silly question: shouldn't we enable blending before clearing the color buffer with a color whose alpha is zero?

Regarding gl.ReadBuffer(gl.BACK) I did update our PixelFormat setting in my previous post.

Thanks so much again,

Alberto

No, blending doesn't affect a buffer clear.

You never need to call gl.ReadBuffer(gl.BACK) because when you create a double buffered window, the GL state is already GL_BACK.

You don’t even need to call glFlush before doing glReadPixels.

Do some debugging.
What happens if you just clear the buffer and read the pixels? Is the alpha of those pixels 0 or 255?
How are you checking whether it is 0 or 255?
Do you make a bitmap, or do you use a debugger?
Have you tried tracing through the code?

Why are you making a bitmap and using Photoshop? Have you considered that this may be the problem?
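
For example, something like this right after clearing, so Photoshop and the clipboard stay out of the loop (sketch):

GLubyte px[4] = { 1, 2, 3, 4 };

glClearColor(0.0f, 0.0f, 0.0f, 0.25f);
glClear(GL_COLOR_BUFFER_BIT);
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);

/* with 8 destination alpha bits px[3] should be about 64 (0.25 * 255);
   without alpha bitplanes it reads back 255 */
printf("r=%d g=%d b=%d a=%d\n", px[0], px[1], px[2], px[3]);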

Hey All,

I discovered the reason why alpha was not present!

I was copying the image to the Windows clipboard, pasting it into Photoshop, and seeing the black/gray background.

BUT the standard Windows clipboard bitmap format does not support an alpha channel!

Saving the bitmap to disk instead, you get a perfect-looking image!

Now I know that I need to use the CF_DIBV5 Windows clipboard format to keep the alpha channel through copy & paste. Does anybody know how to use it?

Thanks to you all,

Alberto

Come on, search for CF_DIBV5 on http://www.msdn.com
The top two hits explain clipboard formats and operations.
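
But roughly, an untested sketch of putting 32-bit BGRA pixels on the clipboard as CF_DIBV5 (hwnd, width, height and the pixel pointer are yours; error handling omitted):

BITMAPV5HEADER hdr;
ZeroMemory(&hdr, sizeof(hdr));
hdr.bV5Size        = sizeof(BITMAPV5HEADER);
hdr.bV5Width       = width;
hdr.bV5Height      = height;           /* positive height = bottom-up, like glReadPixels */
hdr.bV5Planes      = 1;
hdr.bV5BitCount    = 32;
hdr.bV5Compression = BI_BITFIELDS;
hdr.bV5SizeImage   = width * height * 4;
hdr.bV5RedMask     = 0x00FF0000;
hdr.bV5GreenMask   = 0x0000FF00;
hdr.bV5BlueMask    = 0x000000FF;
hdr.bV5AlphaMask   = 0xFF000000;

HGLOBAL hMem = GlobalAlloc(GMEM_MOVEABLE, sizeof(hdr) + hdr.bV5SizeImage);
BYTE* dst = (BYTE*)GlobalLock(hMem);
memcpy(dst, &hdr, sizeof(hdr));
memcpy(dst + sizeof(hdr), pixels, hdr.bV5SizeImage);  /* pixels = your BGRA readback */
GlobalUnlock(hMem);

OpenClipboard(hwnd);
EmptyClipboard();
SetClipboardData(CF_DIBV5, hMem);      /* the clipboard owns hMem once this succeeds */
CloseClipboard();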