glCopyTexImage2D alpha problem

Hello,

I have settled on glCopyTexImage2D over FBOs for the moment, as I have found that ATI handles glCopyTexImage2D with destination alpha correctly while Nvidia does not. Meanwhile, the ATI implementation of FBOs is buggy and, as of Catalyst 8.4 on Linux, does not work.

So I am left falling back on glCopyTexImage2D for both ATI and Nvidia.

Problem: Nvidia ignores the destination alpha in the framebuffer, so the copied textures come out caked in black where they should be transparent.

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);  /* clear color is transparent black */
glClear(GL_COLOR_BUFFER_BIT);

/* ... draw the content to be captured ... */

glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, 64, 64, 0);

How do I solve that under Nvidia?

Thanks,

Cameron

Did you request alpha bits when you set your pixel format?

Which alpha bits, and where?

The texture and glTexImage2D are reading all four channels OK.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);

I haven’t set up a PIXELFORMATDESCRIPTOR structure because I didn’t think I needed one.

I’m loading PNG files as textures, and I can read the alpha channel and alpha-blend just fine.
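
For reference, the blending I have working is the usual source-alpha setup, something like this sketch:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* standard source-alpha blending */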

Again, the texture copy works fine on ATI.

Thanks for your help; all comments are appreciated.

You need to enable the alpha bits. It’s possible that the ATI drivers force an alpha channel, just like you can force an application to use multisampling through the driver. Even if you don’t have an alpha channel, blending will appear to work fine as long as you don’t use GL_DST_ALPHA or GL_ONE_MINUS_DST_ALPHA, because the usual source-alpha factors never read the framebuffer’s alpha. glCopyTexImage2D with GL_RGBA, however, does read it, and a framebuffer with no alpha bits reads back as fully opaque.
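
If you want to verify what you actually got, here is a minimal sketch using the legacy GL_ALPHA_BITS query; it assumes a current GL context and reports on the default framebuffer:

GLint alphaBits = 0;   /* needs <stdio.h> for the printf */
glGetIntegerv(GL_ALPHA_BITS, &alphaBits);
printf("framebuffer alpha bits: %d\n", alphaBits);  /* 0 means glCopyTexImage2D has no destination alpha to copy */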

Ok, that brings some clarity.

Although I’m unclear at what stage I explicitly “enable the alpha bits”.

Thanks.

Alpha bits are selected in the PIXELFORMATDESCRIPTOR struct (if you are working with Windows):
PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR),  // size of this pfd
    1,                              // version number
    PFD_DRAW_TO_WINDOW |            // support window
    PFD_SUPPORT_OPENGL |            // support OpenGL
    PFD_DOUBLEBUFFER,               // double buffered
    PFD_TYPE_RGBA,                  // RGBA type
    24,                             // 24-bit color depth
    0, 0, 0, 0, 0, 0,               // color bits ignored
    8,                              // 8 alpha bits <<------ set the framebuffer's alpha bits here
    0,                              // shift bit ignored
    0,                              // no accumulation buffer
    0, 0, 0, 0,                     // accum bits ignored
    32,                             // 32-bit z-buffer
    0,                              // no stencil buffer
    0,                              // no auxiliary buffer
    PFD_MAIN_PLANE,                 // main layer
    0,                              // reserved
    0, 0, 0                         // layer masks ignored
};
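
For completeness, a rough sketch of how the pfd is applied; hdc is assumed to be your window's device context:

int format = ChoosePixelFormat(hdc, &pfd);  // find the closest matching format the driver supports
SetPixelFormat(hdc, format, &pfd);          // must be done before creating the GL rendering context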

Excellent. I suspected this was a Windows dependency.
Although the target is Windows with Nvidia, I see this issue under Linux too. Is PIXELFORMATDESCRIPTOR portable?

Otherwise, thanks very much. I will see how this works and google from here.

I’m using SDL, and I found the answer to my problem.

After SDL_Init, I had to set the attribute for the alpha size.

SDL_Init(SDL_INIT_VIDEO);

/* request at least 8 bits per channel, including alpha, for the framebuffer */
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
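
One caveat, at least in SDL 1.2 (which is what I assume here): these attributes only take effect if they are set before the video mode is created, e.g.:

/* 640x480x32 is just a placeholder; SDL_OPENGL is the part that matters */
SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);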

Hope this helps someone in the future.