PBuffers and WGL_ALPHA_BITS_ARB=16

Ok, I have no idea why this is not working.
If I set WGL_ALPHA_BITS_ARB to 16, alpha blending does not work. If I set WGL_ALPHA_BITS_ARB to 8, alpha blending works.

I am basically drawing lots of quads on the pbuffer. Each quad has a splat texture bound to it.

Any ideas why? I have an ATI Radeon 9700 Pro, Windows 2000, Visual C++, using GLUT and GLUI.

That was the only way I could think of to get a 16-bit alpha channel, but I can’t make it work.
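
For concreteness, a minimal sketch of the kind of attribute list this involves (names from WGL_ARB_pixel_format / WGL_ARB_pbuffer; hdc is assumed to be the window’s device context, the entry point has to be fetched with wglGetProcAddress first, and error checking is omitted):

int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_RED_BITS_ARB,        8,
    WGL_GREEN_BITS_ARB,      8,
    WGL_BLUE_BITS_ARB,       8,
    WGL_ALPHA_BITS_ARB,      8,   /* blending works; with 16 it does not */
    0                             /* the list is zero-terminated */
};

int format;
UINT numFormats;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &numFormats);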

Why do you need 16 bits of dest alpha?

If you REALLY need this, you’ll have to use a full 16/16/16/16 pixel format (64 bits total, non-floating point), which the 9700 supports. But you won’t get any blending support, so it’s kind of useless in your case (if you’re doing what I think you are).

Yeah, I really need alpha blending. My program works fine if I render everything in software. However, I have so many layers upon layers of alphas that I need alpha blending with 16 bits. I can make it work with 8 bits but it doesn’t look as good.

Also, I can live with 16/16/16/16, but, again, I need alpha blending working.

Originally posted by NitroGL:
you’ll have to use a full 16/16/16/16 pixel format

Though the driver should be able to select that for you even if you select 8/8/8/16.
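
To verify what the driver actually picked, a query along these lines should work (a sketch; wglGetPixelFormatAttribivARB is part of WGL_ARB_pixel_format, and hdc/format are assumed to be the device context and the chosen format index):

int query[4] = { WGL_RED_BITS_ARB, WGL_GREEN_BITS_ARB,
                 WGL_BLUE_BITS_ARB, WGL_ALPHA_BITS_ARB };
int bits[4];
wglGetPixelFormatAttribivARB(hdc, format, 0, 4, query, bits);
printf("selected %d/%d/%d/%d\n", bits[0], bits[1], bits[2], bits[3]); /* needs <stdio.h> */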

Originally posted by chracatoa:
Yeah, I really need alpha blending. My program works fine if I render everything in software. However, I have so many layers upon layers of alphas that I need alpha blending with 16 bits. I can make it work with 8 bits but it doesn’t look as good.

Also, I can live with 16/16/16/16, but, again, I need alpha blending working.

Maybe you could pull it off with the accum buffer (which the 9700 also supports)?
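
For what it’s worth, a rough sketch of how that could look (note the accum buffer only gives additive combination, not over-style alpha compositing; drawQuad and numQuads are hypothetical placeholders):

/* Request an accum buffer at window creation, e.g.
   glutInitDisplayMode(GLUT_RGBA | GLUT_ACCUM); */
int i;
glClear(GL_ACCUM_BUFFER_BIT);
for (i = 0; i < numQuads; i++) {
    glClear(GL_COLOR_BUFFER_BIT);
    drawQuad(i);                 /* hypothetical: render one splat quad */
    glAccum(GL_ACCUM, 1.0f);     /* add this layer into the accum buffer */
}
glAccum(GL_RETURN, 1.0f);        /* copy the accumulated result back */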

Originally posted by Humus:
Though the driver should be able to select that for you even if you select 8/8/8/16.

Doing that just selects the 16/16/16/16 format.

By the way, why would they support a 16-bit alpha channel if I can’t use it for blending? It doesn’t make sense.

The accumulation buffer is not enough (I think) because I have many quads (that can be very small sometimes) and I would have to update the accum buffer for every quad. I don’t think that would be efficient (but I may be wrong).

Alpha is useful for many things other than blending. Multi-texturing, and sourcing destination alpha when using the rendered surface as an input in the next pass, come to mind as examples.
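
For example, a two-pass sketch of sourcing destination alpha (drawMask and drawScene are hypothetical helpers):

/* Pass 1: write a mask into destination alpha only. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glDisable(GL_BLEND);
drawMask();

/* Pass 2: blend the scene against the alpha pass 1 left behind. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawScene();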

The reason 16-bit formats don’t blend is, I believe, that it’s too hard to implement floating-point per-fragment ops as close to the memory as the blending circuitry traditionally sits.

Yes, this means that HDR transparent surfaces are a pain.

Originally posted by jwatte:
The reason 16-bit formats don’t blend is, I believe, that it’s too hard to implement floating-point per-fragment ops as close to the memory as the blending circuitry traditionally sits.

Actually, on the 9700 you can set a 16/channel fixed point format (same as the RGB[A]16 internal format).

Originally posted by NitroGL:
Doing that just selects the 16/16/16/16 format.

Yes, that’s exactly what I said.

Originally posted by NitroGL:
Actually, on the 9700 you can set a 16/channel fixed point format (same as the RGB[A]16 internal format).

Yeah, so… why didn’t they implement 16-bit alpha blending? Is it a driver problem, or will the card never have this capability? :(

It can only blend on buffers of 32 bits or less. It’s the hardware, most likely due to a narrow internal bus width somewhere.

It’s very likely that blending is done VERY close to the memory, using VERY specialized hardware. Perhaps they only put in the hardware to do it for 8 bits, and not for 16 bits? That won’t be affected by bus width, but purely by where they put the transistors.

Blending works on RG16 though, but not on RGBA16. That’s what led me to the bus-width conclusion, though I may of course be wrong.

Originally posted by Humus:
Blending works on RG16 though,(…).

What do you mean by RG16? How do I set this mode? Shouldn’t it be something like ‘RGA16’, since you need alpha for blending? I think I could live with a two-pass algorithm; I just need to know how to set it up.

It’s a texture and render-target format available in D3D. It’s not implemented in OpenGL though. It has 16 bits of red and 16 bits of green, no blue or alpha, so it’s a 32-bit format. Blending doesn’t necessarily require alpha: there’s plain additive or multiplicative blending and so on, plus alpha can be evaluated in the shader from any data passed to it and doesn’t necessarily need to come from a texture.
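
For instance, these blend setups never read alpha at all, so in principle they would work on an alpha-less format:

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);          /* plain additive */
/* or */
glBlendFunc(GL_DST_COLOR, GL_ZERO);   /* multiplicative */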

Sorry to be a bit OT, but…

I have a clear memory of reading somewhere that it’s possible to create a pbuffer that doesn’t have its own OpenGL context, that is, when you make it current it doesn’t have its own set of states…

Does anyone have a bit more information on this? (Or should I just wait for über-buffers?)

Yes, it works as long as you’re using the exact same pixel format. May be worth experimenting with until über-buffers come around.
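
Roughly like this, I believe; a sketch assuming WGL_ARB_pbuffer, where hWindowDC/hWindowRC are the window’s existing DC and rendering context and format is the very same pixel format the window was created with:

int pbAttribs[] = { 0 };                /* no special pbuffer attributes */
HPBUFFERARB pbuffer = wglCreatePbufferARB(hWindowDC, format, 256, 256, pbAttribs);
HDC pbufferDC = wglGetPbufferDCARB(pbuffer);

/* Reuse the window's context: a single set of GL state for both targets. */
wglMakeCurrent(pbufferDC, hWindowRC);
/* ... render to the pbuffer ... */
wglMakeCurrent(hWindowDC, hWindowRC);   /* back to the window */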

Humus, (or anyone else) …

I’m using a Radeon 9700 Pro (a 9800 is also available, though I don’t think that would help much). I’ve been trying to get 16-bit grayscale blending to work for some time now. It seems hard though; maybe someone could give me some advice here, please…

I am now doing 8-bit grayscale blending with GL_RGBA textures, so that R=G=B=A (all components are equal). I need more precision though; 16 bits would suffice. I don’t need colour information, only “intensity” that should be blended “normally”.

I found that OpenGL has some formats like GL_ALPHA16, but I cannot figure out how to set them up. They don’t have to be visible, as they are only used for calculations; maybe I should try using a pbuffer? How do I set up a buffer with pixel formats other than the RGBA or indexed modes, or can I use a window in RGBA mode?

I also found these in glATI.h; they would be very useful, but… :(

#define GL_ALPHA_FLOAT32_ATI 0x8816
#define GL_LUMINANCE_ALPHA_FLOAT32_ATI 0x8819

The question is: how do I do grayscale blending in 16 bits? GL_ALPHA16 seems suitable. What kind of window should I initialize for that?
I’m now using glutInitDisplayMode(GLUT_SINGLE | GLUT_RGBA | GLUT_ALPHA);

Thanks for any input,

Andru


Well, you’ll have to ditch GLUT, and you’ll have to use the WGL_ARB_pixel_format extension.
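
A sketch of the first steps (the typedef comes from wglext.h, the entry point must be fetched at runtime while some context is current, hdc is assumed to be your device context, and whether blending then works at 16 bits is exactly the question discussed above):

PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
    (PFNWGLCHOOSEPIXELFORMATARBPROC)
        wglGetProcAddress("wglChoosePixelFormatARB");

int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_RED_BITS_ARB,   16,
    WGL_GREEN_BITS_ARB, 16,
    WGL_BLUE_BITS_ARB,  16,
    WGL_ALPHA_BITS_ARB, 16,
    0
};
int format;
UINT count;
if (wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count) && count > 0) {
    /* create a pbuffer with this format and use one channel as "intensity" */
}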