When I do
glReadPixels(0, 0, nPicWidth, nPicHeight, GL_RGBA, GL_UNSIGNED_BYTE, pData);
the alpha components returned in pData are always 255. Why?
I even cleared m_pData to zeros and called
glDrawPixels(nPicWidth, nPicHeight, GL_RGBA, GL_UNSIGNED_BYTE, m_pData);
before glReadPixels, but the result is still 255.
I also checked my PIXELFORMATDESCRIPTOR: PFD_TYPE_RGBA, 32.
I think it is correct. I even tried glClearColor and glEnable(GL_ALPHA_TEST); the result always drives me crazy.
Originally posted by linghuye:
[b]The alpha components returned in pData are always 255. Why? … I think it is correct. I even tried glClearColor and glEnable(GL_ALPHA_TEST); the result always drives me crazy.[/b]
Check your alpha bits; the color bits tell you nothing about alpha:
typedef struct tagPIXELFORMATDESCRIPTOR { // pfd
WORD nSize;
WORD nVersion;
DWORD dwFlags;
BYTE iPixelType;
BYTE cColorBits;
BYTE cRedBits;
BYTE cRedShift;
BYTE cGreenBits;
BYTE cGreenShift;
BYTE cBlueBits;
BYTE cBlueShift;
BYTE cAlphaBits;
BYTE cAlphaShift;
BYTE cAccumBits;
BYTE cAccumRedBits;
BYTE cAccumGreenBits;
BYTE cAccumBlueBits;
BYTE cAccumAlphaBits;
BYTE cDepthBits;
BYTE cStencilBits;
BYTE cAuxBuffers;
BYTE iLayerType;
BYTE bReserved;
DWORD dwLayerMask;
DWORD dwVisibleMask;
DWORD dwDamageMask;
} PIXELFORMATDESCRIPTOR;
cColorBits
Specifies the number of color bitplanes in each color buffer. For RGBA pixel types, it is the size of the color buffer, excluding the alpha bitplanes. For color-index pixels, it is the size of the color-index buffer.
cAlphaBits
Specifies the number of alpha bitplanes in each RGBA color buffer. Alpha bitplanes are not supported.
cAlphaShift
Specifies the shift count for alpha bitplanes in each RGBA color buffer. Alpha bitplanes are not supported.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/opengl/ntopnglr_73jm.asp
And never mind the “alpha bitplanes are not supported” part; it refers to Microsoft’s software implementation, which doesn’t have retained alpha.
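As a minimal sketch, a pixel format request that actually asks for destination alpha could look like this. The field values are illustrative, and hdc is assumed to be an existing, valid device context:

```c
/* Sketch: request a pixel format with destination alpha (Win32).
   Without cAlphaBits > 0, glReadPixels returns 255 for alpha. */
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;   /* RGB planes, excluding alpha */
pfd.cAlphaBits = 8;    /* the field this thread is about */
pfd.cDepthBits = 24;

int fmt = ChoosePixelFormat(hdc, &pfd);  /* hdc: existing HDC (assumed) */
SetPixelFormat(hdc, fmt, &pfd);
```

Note that ChoosePixelFormat only picks the closest match to this request, so you still have to verify afterwards what you actually got.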
I’m probably just beating a dead horse here, but alpha testing has everything to do with writes TO the framebuffer and nothing to do with reads FROM the framebuffer.
And we get a gray background after using glClearColor(0,0,0,0)
That says something is wrong already. It depends on your GL skill level: either you know you are doing things right, you get good results on most machines, and you have this problem on only one machine,
or you are messing up, and then you should post a 100-line GLUT program here.
The pixel format and your Windows code are not that important here, since you say you have chosen a pixel format with 8 alpha bits.
To get correct alpha values from glReadPixels, is it mandatory to have 24 cColorBits and 8 cAlphaBits, or would it also work with 32 cColorBits and 8 cAlphaBits?
If you are talking about the call to ChoosePixelFormat, then it doesn’t matter since that function picks the closest match.
Call DescribePixelFormat to see if you get an alpha.
If the format has alpha, DescribePixelFormat always reports colorBits=32.
You can even call glGetIntegerv(GL_ALPHA_BITS, …) to see what it returns.
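As a sketch, both checks side by side. This is a diagnostic fragment, not a complete program: it assumes a current GL context, and hdc and fmt are assumed to be an existing device context and pixel-format index:

```c
/* Diagnostic fragment: verify the framebuffer really has alpha planes. */
PIXELFORMATDESCRIPTOR pfd;
DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd);   /* hdc, fmt assumed */
printf("cColorBits=%d cAlphaBits=%d\n", pfd.cColorBits, pfd.cAlphaBits);

GLint alphaBits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits);           /* needs a current context */
printf("GL_ALPHA_BITS=%d\n", (int)alphaBits);       /* 0 means reads give 255 */
```

If either check reports zero alpha bits, no amount of clearing or drawing will produce anything other than 255 on read-back.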
The result of this code is a perfect looking bitmap of the OpenGL scene with a gray background instead of a transparent one.
How do you look at the bitmap image to say it returns a grey background?
In a Microsoft viewer application?
Some of these don’t handle 32-bit images the way OpenGL would. Transparency actually means something to them: you’d look through the transparent pixels, whatever that means for the viewer, onto the Windows default workspace color, which might be your grey.
Does that final viewer image change if you invert the alpha in the clear: glClearColor(0,0,0,1)?
That would indicate there is a mismatch in the meaning of alpha data between OpenGL and your viewer then.
I use transparent 32-bit images daily and open them with a pro app like Photoshop. The probability that I am not seeing the transparency is low.
It is interesting what I get using glClearColor(0,0,0,1) instead: the background is now black, and the transparent shadow of my model has a different color (a dark gray). With glClearColor(0,0,0,0) it was not visible at all.
A silly question: shouldn’t we enable blending before clearing the color buffer with a color whose alpha value is zero?
Regarding gl.ReadBuffer(gl.BACK) I did update our PixelFormat setting in my previous post.
You never need to call gl.ReadBuffer(gl.BACK), because when you create a double-buffered window the read buffer is already GL_BACK.
You don’t even need to call glFlush before calling glReadPixels.
Do some debugging.
What happens if you just clear the buffer and read the pixels back? Is the alpha 0 or 255?
How are you checking whether it is 0 or 255?
Do you make a bitmap, or do you use a debugger?
Have you tried stepping through the code?
Why are you making a bitmap and using Photoshop? Have you considered that this may be the problem?
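The debugging step above can be sketched in plain C: after glReadPixels(…, GL_RGBA, GL_UNSIGNED_BYTE, buf) fills a buffer, inspect the alpha bytes in code instead of exporting a bitmap to a viewer. The helper name count_alpha_255 is hypothetical:

```c
#include <stddef.h>

/* Count how many pixels in a tightly packed RGBA8 buffer have alpha == 255.
   Call this on the buffer filled by glReadPixels; if every pixel reports
   255 even right after glClearColor(0,0,0,0) + glClear, the pixel format
   almost certainly has no alpha bitplanes. */
static size_t count_alpha_255(const unsigned char *rgba, size_t npixels)
{
    size_t n = 0;
    for (size_t i = 0; i < npixels; ++i)
        if (rgba[i * 4 + 3] == 255)   /* alpha is every 4th byte, offset 3 */
            ++n;
    return n;
}
```

This takes the viewer, and its interpretation of transparency, out of the loop entirely, which is exactly the point being made above.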