
View Full Version : PBuffers and WGL_ALPHA_BITS_ARB=16



chracatoa
05-28-2003, 12:29 PM
OK, I have no idea why this is not working.
If I set WGL_ALPHA_BITS_ARB to 16, alpha blending does not work. If I set WGL_ALPHA_BITS_ARB to 8, alpha blending works.

I am basically drawing lots of quads onto the pbuffer. Each quad has a splat texture bound to it.

Any ideas why? I have an ATI Radeon 9700 Pro, Windows 2000, Visual C++, using GLUT and GLUI.

That was the only way I could think of getting a 16-bit alpha channel, but I can't make it work.

NitroGL
05-28-2003, 12:38 PM
Why do you need 16bits of dest alpha?

If you *REALLY* need this, you'll have to use a full 16/16/16/16 pixel format (64bits total, non-floating point), which the 9700 supports. But you won't get any blending support, so it's kind of useless in your case (if you're doing what I think you are).

chracatoa
05-28-2003, 12:44 PM
Yeah, I really need alpha blending. My program works fine if I render everything in software. However, I have so many layers upon layers of alphas that I need alpha blending with 16 bits. I can make it work with 8 bits but it doesn't look as good.

Also, I can live with 16/16/16/16, but, again, I *need* alpha blending working.

Humus
05-28-2003, 12:51 PM
Originally posted by NitroGL:
you'll have to use a full 16/16/16/16 pixel format

Though the driver should be able to select that for you even if you select 8/8/8/16.

NitroGL
05-28-2003, 03:03 PM
Originally posted by chracatoa:
Yeah, I really need alpha blending. My program works fine if I render everything in software. However, I have so many layers upon layers of alphas that I need alpha blending with 16 bits. I can make it work with 8 bits but it doesn't look as good.

Also, I can live with 16/16/16/16, but, again, I *need* alpha blending working.

Maybe you could pull it off with the accum buffer (which the 9700 also supports)?

NitroGL
05-28-2003, 03:19 PM
Originally posted by Humus:
Though the driver should be able to select that for you even if you select 8/8/8/16.

Doing that just selects the 16/16/16/16 format.

chracatoa
05-28-2003, 05:49 PM
By the way, why would they support a 16-bit alpha channel if I can't use it for blending? It doesn't make sense.

The accumulation buffer is not enough (I think) because I have many quads (that can be very small sometimes) and I would have to update the accum buffer for every quad. I don't think that would be efficient (but I may be wrong).

jwatte
05-28-2003, 06:07 PM
Alpha is useful for many things other than blending. Multi-texturing, and sourcing destination alpha when using the rendered surface as an input in the next pass, come to mind as examples.

The reason 16 bit formats don't blend is, I believe, that it's too hard to implement floating point per-fragment ops as close to the memory as the blending circuitry traditionally sits.

Yes, this means that HDR transparent surfaces are a pain.

NitroGL
05-28-2003, 06:11 PM
Originally posted by jwatte:
The reason 16 bit formats don't blend is, I believe, that it's too hard to implement floating point per-fragment ops as close to the memory as the blending circuitry traditionally sits.

Actually, on the 9700 you can set a 16/channel fixed point format (same as the RGB[A]16 internal format).

Humus
05-29-2003, 12:08 AM
Originally posted by NitroGL:
Doing that just selects the 16/16/16/16 format.

Yes, that's exactly what I said. :)

chracatoa
05-29-2003, 08:39 AM
Originally posted by NitroGL:
Actually, on the 9700 you can set a 16/channel fixed point format (same as the RGB[A]16 internal format).

Yeah, so... why didn't they implement 16-bit alpha blending? Is it a driver problem, or will the card never have this capability? :(

Humus
05-29-2003, 09:23 AM
It can only blend on buffers of 32 bits or less. It's the hardware, most likely due to a narrow internal bus width somewhere.

jwatte
05-29-2003, 01:02 PM
It's very likely that blending is done VERY close to the memory, using VERY specialized hardware. Perhaps they only put in the hardware to do it for 8 bits, and not for 16 bits? That won't be affected by bus width, but purely by where they put the transistors.

Humus
05-30-2003, 03:36 AM
Blending works on RG16, though, but not on RGBA16. That's what led me to the conclusion about bus width, though of course my conclusion may be wrong.

chracatoa
05-30-2003, 07:47 AM
Originally posted by Humus:
Blending works on RG16 though,(...).

What do you mean by RG16? How do I set this mode? Shouldn't it be something like 'RGA16', since you need alpha for blending? I think I could live with a two-pass algorithm; I just need to know how to set it up.

Humus
05-30-2003, 09:09 AM
It's a texture and render-target format available in D3D; it's not implemented in OpenGL, though. It has 16 bits of red and 16 bits of green, no blue or alpha, thus a 32-bit format. Blending doesn't necessarily require alpha. There's plain additive or multiplicative blending etc., plus alpha can be evaluated in the shader from any data passed to it and doesn't necessarily need to come from a texture.

Mazy
05-31-2003, 05:42 AM
Sorry to be a bit OT, but

I have a clear memory of reading somewhere that it's possible to create a pbuffer that doesn't have its own OpenGL context, that is, when you make it current, it doesn't have its own set of states...

Does anyone have a bit more information on this? (Or should I just wait for überbuffers?)

Humus
05-31-2003, 08:46 AM
Yes, it works so long as you're using the exact same pixel format. May be worth experimenting with until über_buffers comes around.

Andru
06-06-2003, 09:05 AM
Humus, (or anyone else) ...

I'm using a Radeon 9700 Pro (a 9800 is also available, though I don't think that
would help much). I've been trying to get 16-bit blending in grayscale to work
for some time now. It seems hard, though; maybe someone could give me some advice
here please...

I am now doing grayscale blending in 8-bit with GL_RGBA textures, so that
R=G=B=A (all components are equal). I need more precision, though; 16 bits
would suffice. I don't need colour information, only "intensity" that should be
blended "normally".

OpenGL has some pixel formats I found, like GL_ALPHA16, but I cannot seem to
find out how to set them up. They don't have to be visible, as they are only
used for calculations; maybe I should try using a pbuffer(?). How do I set up a
buffer with pixel formats other than RGBA or indexed modes, or can I use a
window in RGBA mode?

I also found these in glATI.h, would be very useful but... :-(

#define GL_ALPHA_FLOAT32_ATI 0x8816
#define GL_LUMINANCE_ALPHA_FLOAT32_ATI 0x8819

The question is: how do I do grayscale blending in 16 bits? GL_ALPHA16 seems suitable. What kind of window should I initialize for that?
I'm now using glutInitDisplayMode(GLUT_SINGLE | GLUT_RGBA | GLUT_ALPHA);

Thanks for any input,

Andru



Humus
06-06-2003, 11:50 AM
Well, you'll have to ditch GLUT, and you'll have to use the WGL_ARB_pixel_format extension.