View Full Version : Enabling Blend when rendering to a pbuffer

01-11-2003, 11:49 AM
Hi out there!

I've been coding with OpenGL for a long time, but this is my first dabble with pbuffers. My code freezes whenever I try to enable blending while rendering to a pbuffer on my Radeon 9700. Does anybody know why? Is blending not supported with a pbuffer? Do I need to request "accumulator" bits when choosing the pixel format, maybe?

thanks for the help!!!!

01-11-2003, 12:38 PM
It should work. If you can make it happen with a small program that you can e-mail, devrel@ati.com may be able to forward the problem to their developers. I've had good luck with that.

01-11-2003, 12:57 PM
Is it a floating point pbuffer or just a regular pbuffer ?

01-11-2003, 05:04 PM
Blending should work with a normal (fixed-point) pbuffer.

Floating-point color buffers under the NV_float_buffer extension do not support blending. ATI has recently published a draft of a similar extension (ATI_pixel_format_float), which apparently does support blending, though it might not be hardware-accelerated on their current generation of hardware. I don't know.

One other thing that might be going on: since your pbuffer may have a different pixel format, you might be required to create a separate context for the pbuffer. If you enable blending in your "window" context, it has no effect on your "pbuffer" context. And vice versa. Several developers that I've worked with ran into issues that turned out to be due to exactly that kind of problem.
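A minimal Windows-only sketch of that situation (a sketch, not the poster's actual code: the function name and parameters are illustrative, entry points come from the WGL_ARB_pbuffer and WGL_ARB_pixel_format extensions, and error checking is elided). The point is that glEnable(GL_BLEND) only affects whichever context is current:

```c
/* Sketch: blend state is per-context with a WGL pbuffer (Windows only).
   Assumes an existing window DC/context and driver support for
   WGL_ARB_pbuffer / WGL_ARB_pixel_format; error checking elided. */
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* ARB typedefs and tokens */

void render_with_blending(HDC winDC, HGLRC winRC)
{
    PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
        (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
    PFNWGLCREATEPBUFFERARBPROC wglCreatePbufferARB =
        (PFNWGLCREATEPBUFFERARBPROC)wglGetProcAddress("wglCreatePbufferARB");
    PFNWGLGETPBUFFERDCARBPROC wglGetPbufferDCARB =
        (PFNWGLGETPBUFFERDCARBPROC)wglGetProcAddress("wglGetPbufferDCARB");

    const int attribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,      32,
        0
    };
    int format; UINT count;
    wglChoosePixelFormatARB(winDC, attribs, NULL, 1, &format, &count);

    HPBUFFERARB pbuf   = wglCreatePbufferARB(winDC, format, 256, 256, NULL);
    HDC         pbufDC = wglGetPbufferDCARB(pbuf);
    HGLRC       pbufRC = wglCreateContext(pbufDC); /* separate context, separate state */

    /* Enabling blend in the window context... */
    wglMakeCurrent(winDC, winRC);
    glEnable(GL_BLEND);

    /* ...does NOT carry over: the pbuffer context starts with blending
       disabled and must enable it itself. */
    wglMakeCurrent(pbufDC, pbufRC);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /* ... draw into the pbuffer ... */
}
```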

01-13-2003, 01:08 PM
I am using a standard 8-bit buffer. When I am in the pbuffer context and I try to enable blending, that is when it freezes. I enable blending for the screen context, but when I render to the pbuffer, no blending is done. Then, while IN the pbuffer context, I try to enable blending, and the code hangs. I thought it was some type of accumulator problem, like the pbuffer pixel format didn't have any accumulator bits defined. Any further suggestions?

01-13-2003, 03:34 PM
Sounds like either a driver bug, or that it falls back to software mode, which due to its slowness might make it appear to hang.
Either way, send an email to devrel@ati.com about it and I'm sure they can help you.

01-13-2003, 03:51 PM
Originally posted by Humus:
Sounds like either a driver bug, or that it falls back to software mode, which due to its slowness might make it appear to hang.
Either way, send an email to devrel@ati.com about it and I'm sure they can help you.

Yeah, the current drivers have some pretty bad problems with pbuffers (both the buffers themselves and sharing object lists across them). I emailed them about the pbuffer problems, but I haven't heard anything back yet.

01-13-2003, 06:08 PM
Sharing objects works fine with the 6275 driver.

01-14-2003, 11:30 AM
OK, so you guys are right; I got my versions mixed up. I can indeed blend to the 8-bit fixed-point pbuffer. It is the floating-point version that hangs. I still think there's something weird going on with the floating-point buffer: if I use, say, 3.123409, it comes back as 3.123383 when I glReadPixels the pbuffer, argh. Thanks to everybody who's helped thus far!

01-14-2003, 11:43 AM
So, if I use a floating-point value of 3.123409, or any value with a fractional part, it comes back as something slightly different, like 3.123383. However, if I specify 2.0 or -3.0, then I get back exactly 2.0 or -3.0 when I glReadPixels the pbuffer.
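That split between exact integers and "messy" decimals is a general property of binary floating point, not something specific to pbuffers: 2.0 and -3.0 have exact binary representations, while most decimal fractions do not survive even a 32-bit store. A quick illustration (in Python, for convenience; not from the thread):

```python
import struct

def roundtrip32(x):
    """Store x as an IEEE-754 single-precision float and read it back,
    the way a 32-bit float color buffer would hold it."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(roundtrip32(2.0))        # 2.0: small integers are exact in binary floats
print(roundtrip32(-3.0))       # -3.0: also exact
print(roundtrip32(3.123409))   # close to, but not exactly, 3.123409
```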

01-14-2003, 12:02 PM
If I'm not mistaken, a normal IEEE 32-bit float (24-bit mantissa) is only accurate to about 7 significant decimal digits. The 24-bit floats on the 9700 have a smaller mantissa, so they would only be accurate to maybe 4 or 5 significant digits; your results are quite expected.
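The size of the error the poster sees can be reproduced by simulating a reduced-precision mantissa. This is a sketch under the assumption of a roughly 16-bit fractional mantissa (one plausible layout for a 24-bit hardware float), not a statement about the 9700's actual internal format:

```python
import math

def quantize(x, mant_bits):
    """Round x to a float whose mantissa keeps mant_bits fractional bits,
    mimicking a reduced-precision hardware float (sign/exponent kept intact)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x == m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** mant_bits
    return math.ldexp(round(m * scale) / scale, e)

# Exact binary values survive; "messy" decimals drift in the 5th decimal:
print(quantize(2.0, 16))        # 2.0 exactly
print(quantize(-3.0, 16))       # -3.0 exactly
print(quantize(3.123409, 16))   # off from 3.123409 by a few units in 1e-5
```

The error from a 16-bit mantissa near 3.1 is on the order of 1e-5, the same magnitude as the 3.123409 → 3.123383 drift reported above.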

01-14-2003, 01:49 PM
I agree, it would make sense if it is indeed a 24-bit floating-point number. Where did you find that it's a 24-bit float? Most of the docs I read said it's 128 bits for RGBA, so 32 bits apiece. I can't remember where, but I also read somewhere that it was 24 bits each. Is there any consensus on the matter?

01-14-2003, 03:30 PM
Well, you have 32-bit float storage of course, but through the pipeline you only have 24. That it's 24-bit has been mentioned all over the web in all kinds of reviews etc., so it's certainly no secret. This is one thing nVidia makes a big deal about, as their GFFX has 32-bit floats.

01-15-2003, 01:57 PM
Wow, you people rule! Thanks so much to everybody who's helped out! I've just been asked by devrel@ati to send in a snippet of my code, so I'll post the results when and if they find out what the problem is.