alpha = 1.0?

Hi everybody!

I have a strange problem with my GLSL shader (GeForce 6xxx, Linux). I’m rendering to the frame buffer, writing computation results to all four components (RGBA). The image is then read back from the frame buffer using glCopyTexSubImage() into an RGBA texture. However, when this texture is used as an input texture in a shader in the next pass, any access to the alpha component always yields a constant 1.0. The texture was allocated as RGBA, and I cannot think of any other place where one should indicate the texture format.
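
A simplified sketch of the round trip (the RGBA8 internal format, glCopyTexSubImage2D and the names are illustrative, not the actual code):

#include <GL/gl.h>

/* Illustrative only: allocate an RGBA texture with explicit alpha storage,
   then copy the current framebuffer contents into it (call with a GL
   context current, after the first pass has rendered its RGBA results). */
GLuint copy_results_to_texture(GLsizei width, GLsizei height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* GL_RGBA8 requests 8 bits per channel, including alpha */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* copy the lower-left width x height block of the framebuffer */
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
    return tex;
}

/* Second-pass fragment shader that visualises the copied alpha channel */
const char *frag_src =
    "uniform sampler2D results;                          \n"
    "void main() {                                       \n"
    "    vec4 c = texture2D(results, gl_TexCoord[0].st); \n"
    "    gl_FragColor = vec4(c.a);                       \n"
    "}                                                   \n";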

Any suggestions?

Joshua

As far as I know, the framebuffer has no “room” for alpha values.
A typical framebuffer stores RGB (color) (and stencil bits if needed).
I can’t say anything about the newer framebuffer objects or extensions.
I am not sure, but the alpha value is mostly used as a blending factor and is never saved in the frame buffer (it is only used for calculating the new color). But you could use the depth buffer (or the stencil buffer, or other available buffers) to store your alpha values. Making a texture out of that is then a bit more complex.

Nonsense. Of course you can have alpha in the framebuffer - it kind of helps when doing ‘alpha blending’.
Joshua, are you using GLUT? If so, I believe there’s a GLUT constant for 32 bits you should pass in when initialising (something like GLUT_RGBA).
Also, ensure your desktop is running in a 32-bit mode… this may involve editing that config file X loads at startup… I don’t know, maybe it’s more user friendly these days.
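
If GLUT is in play, a minimal sketch along those lines (for the record, GLUT_ALPHA is the display-mode flag that requests destination-alpha bitplanes, and glutGet(GLUT_WINDOW_ALPHA_SIZE) reports what the window actually got):

#include <stdio.h>
#include <GL/glut.h>

/* Minimal check: request destination-alpha bitplanes via GLUT and
   report how many alpha bits the window actually has. */
int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    /* GLUT_ALPHA (not plain GLUT_RGBA) is what asks for alpha bitplanes */
    glutInitDisplayMode(GLUT_RGBA | GLUT_ALPHA | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("alpha check");

    printf("alpha bits: %d\n", glutGet(GLUT_WINDOW_ALPHA_SIZE));
    return 0;
}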

The following snippet creates a window, clears the buffer, reads a pixel back, and displays its RGB and alpha values.

Note the “GLX_ALPHA_SIZE, 8” entry in the attribute array - it’s not enough to add “GLX_RGBA” to the attribute list. The visual ID is printed; type ‘glxinfo’ in the shell to see that visual’s capabilities (depth size, alpha size, etc.).

(If you are using GLUT, I think you have to use “GLUT_WINDOW_ALPHA_SIZE”.)

#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#include<GL/glx.h>
#include<GL/glu.h>

Display                 *dpy    = XOpenDisplay(NULL);
Window                  root    = DefaultRootWindow(dpy);
GLint                   att[]   = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, GLX_ALPHA_SIZE, 8, None };
XVisualInfo             *vi     = glXChooseVisual(dpy, 0, att);
GLXContext              glc     = glXCreateContext(dpy, vi, NULL, GL_TRUE);
Colormap                cmap    = XCreateColormap(dpy, root, vi->visual, AllocNone);
int                     dep     = vi->depth;   /* depth of the chosen visual (avoids a BadMatch in XCreateWindow) */
int                     cmask   = CWColormap | CWBorderPixel | CWEventMask;
XWindowAttributes       gwa;
Window                  win;

int main(int argc, char *argv[]){
 XSetWindowAttributes	swa;
 XEvent			xev;

 swa.colormap           = cmap;
 swa.border_pixel       = 0;
 swa.event_mask         = ExposureMask | KeyPressMask;
 win = XCreateWindow(dpy, root, 0, 0, 100, 100, 0, dep, InputOutput, vi->visual, cmask, &swa);
 XMapWindow(dpy, win);

 glXMakeCurrent(dpy, win, glc);
 glClearColor(0.00, 0.00, 0.60, 0.30);

 printf("
	VisualID = %p

", vi->visualid);

 while(true) {
        XNextEvent(dpy, &xev);
        
        if(xev.type == KeyPress) {
		glXMakeCurrent(dpy, None, NULL);
		glXDestroyContext(dpy, glc);

		XDestroyWindow(dpy, win);
		XCloseDisplay(dpy);

		exit(0); }

        if(xev.type == Expose) {
		//
		//	RESIZE VIEWPORT, CLEAR BUFFER
		//
 		XGetWindowAttributes(dpy, win, &gwa);
 		glViewport(0, 0, gwa.width, gwa.height);
 		glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 		glXSwapBuffers(dpy, win); 
		//
		//	READ PIXEL VALUES
		//
		GLfloat	pixels[4];

		glReadBuffer(GL_FRONT);
		glReadPixels(10, 10, 1, 1, GL_RGBA, GL_FLOAT, pixels);

		printf("
	R= %7.3f, G= %7.3f, B= %7.3f, A= %7.3f

", pixels[0], pixels[1], pixels[2], pixels[3]); } } }
//
// 	g++ -o gl-alpha gl-alpha.cc -lX11 -lGL -lGLU
//
  

Stupid, stupid me! First of all, thank you all very much for your very helpful replies! Despite all my debugging efforts, I never thought of the possibility that my environment might not provide an alpha buffer. I’m using Qt, and it turns out that I have to explicitly request an alpha buffer using the quite logically named QGLFormat::setAlpha(bool enable) :)
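
In code, the fix boils down to something like this (a minimal sketch against the Qt 4 QGLFormat/QGLWidget API; the plain QGLWidget stands in for the actual widget):

// .pro file needs: QT += opengl
#include <QApplication>
#include <QGLFormat>
#include <QGLWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QGLFormat fmt = QGLFormat::defaultFormat();
    fmt.setAlpha(true);            // explicitly request an alpha buffer

    QGLWidget widget(fmt);         // widget created with the alpha-enabled format
    widget.show();

    // the format is only a request; check what was actually granted
    qDebug("alpha buffer granted: %d", widget.format().alpha());

    return app.exec();
}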

It seems that after my bad experiences with early-version GLSL-enabled Linux drivers, I’m just too quick to blame everything on the shading language…

Again, thank you very much,
Joshua

You live - you learn :P

Yes, textures aren’t really my thing, because I didn’t need them in my software, so I have to learn more about them.

My software is used in industrial environments (mostly older hardware), so I can only use core functionality of OpenGL 1.1 or OpenGL 1.2.
Nevertheless I am trying to port code to GLSL, but there is another problem with drivers (e.g. on the notebooks my users have…).

I can remember a lot of trouble with support for stencil buffers on older graphics cards (OK, ATI and NVIDIA had it most of the time).
So I am not sure about using RGBA modes without heavy testing.
On the other hand, can somebody explain a few things to me?
1.) There are a lot of possible RGBA modes (A2, A4, A8, A16, for example) - how many do current drivers actually support?
2.) Which version of OpenGL is needed to use these modes? (I found an entry saying “Compressed alpha values may be supported only in extensions or OpenGL versions greater than 1.3”.)
3.) Do graphics cards with OpenGL 2.0 support handle all RGBA modes, or is this a driver issue?

Thanks to all of you for the postings here; they are very interesting (I am using Qt too).

(Sorry, this is not a real GLSL problem, so don’t hit me for posting it here.)

Originally posted by Heady:

On the other hand, can somebody explain a few things to me?
1.) There are a lot of possible RGBA modes (A2, A4, A8, A16, for example) - how many do current drivers actually support?
2.) Which version of OpenGL is needed to use these modes? (I found an entry saying “Compressed alpha values may be supported only in extensions or OpenGL versions greater than 1.3”.)
3.) Do graphics cards with OpenGL 2.0 support handle all RGBA modes, or is this a driver issue?

1) The usual mode is 32-bit RGBA (8 bits per channel); some cards also offer 10/10/10/2. You request these modes from the OS on startup when setting up the window, and you can query afterwards what you actually got (see the sketch at the end of this post). 32-bit RGBA is almost CERTAIN to be supported.

2) OpenGL 1.0 is all that is needed.

3) Destination framebuffer alpha has been supported in hardware since TNT2 days.

The only thing to look out for is a windowed application where the user is running a 16-bit or lower desktop color mode. In that case you can’t get destination alpha unless you use off-screen buffers. (Fullscreen applications are not affected.)
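
As a sketch of that startup check, using nothing beyond OpenGL 1.0/1.1 (call it once a GL context is current; the function name is just for illustration):

#include <stdio.h>
#include <GL/gl.h>

/* Report the bit depth of each channel of the current framebuffer;
   a missing destination alpha shows up here as "A0". */
void report_framebuffer_bits(void)
{
    GLint r, g, b, a;
    glGetIntegerv(GL_RED_BITS,   &r);
    glGetIntegerv(GL_GREEN_BITS, &g);
    glGetIntegerv(GL_BLUE_BITS,  &b);
    glGetIntegerv(GL_ALPHA_BITS, &a);
    printf("framebuffer: R%d G%d B%d A%d\n", r, g, b, a);
}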
