Accumulation buffer's useless

Hi

I posted a similar question in a beginners' forum once but I didn't get much of an answer. Some time ago I tried to use the accumulation buffer in a simple application, just to test how it works. There were only two calls to the accum buffer: one GL_ACCUM and one GL_RETURN. I don't really know what I wanted to achieve, but I hoped I would see something interesting. I was not disappointed: what I saw was my tiny program running at 2 fps. Wooow! That was really great, since the effect made with the accumulation buffer wasn't even interesting.
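
The setup was roughly this (reconstructed from memory; drawScene() just stands for whatever I was rendering):

/* inside the display callback; the context was created with an accum
   buffer, e.g. glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_ACCUM) */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_ACCUM_BUFFER_BIT);
drawScene();              /* render normally into the color buffer */
glAccum(GL_ACCUM, 1.0f);  /* add the color buffer into the accum buffer */
glAccum(GL_RETURN, 1.0f); /* transfer the accum buffer back, scaled by 1.0 */
glutSwapBuffers();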

My question is: is it me who screwed it up, or does it always work like this? If so, the accumulation buffer is USELESS.

Thanx

Orzech

If I recall correctly, the accumulation buffer is not HW accelerated by any existing cards yet.
There are many cool demos on SGI's homepage showing how to achieve nice effects with the accumulation buffer. Just not in realtime.

Originally posted by blender:
If I recall correctly, the accumulation buffer is not HW accelerated by any existing cards yet.
There are many cool demos on SGI's homepage showing how to achieve nice effects with the accumulation buffer. Just not in realtime.

It is true that most consumer cards on PCs don't have a hardware-accelerated accumulation buffer, so using it drops you down to software emulation, which explains why things are so slow.

However, it's false to assert that no existing cards support it. Professional cards on PCs often support the accumulation buffer in hardware, and you'll find similar support on Unix graphics machines such as those from SGI and other vendors.

Robert.

And the latest cards from nVidia and ATI should handle accum in hw as well (cards that can handle floating-point buffers should be able to implement it).

I haven't tried this on my Radeon yet (I know, I'm lazy) but I've heard that it's working…

My Radeon 9500 has accumulation buffer support, even under Linux. It stomps all over my Ti 4200 card in that respect.
It's very sweet.

ATI’s DX9 cards support it in hardware. Nvidia’s do not (or if they do, it’s not fast).

– Zeno

Radeon 9500 and up, and allegedly GeForce FX and up, support a hardware accumulation buffer. Others don't (unless it's a really expensive CAD-style card).

Thanks for your comments. I guess there is a bit of truth in saying that the accumulation buffer is useless (at least in real-time). I wouldn't have so many doubts about the accumulation buffer, but I once read (somewhere on this forum) that some flight simulator uses the accum buffer for engine flames. That was quite interesting, since the effect was really nice, but the game was probably just a slide-show.
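
As far as I understand, effects like that use the classic multi-pass pattern: render the scene N times per displayed frame with small time offsets and average the passes in the accum buffer, something like this (drawScenePass() is just a placeholder):

/* classic N-pass accumulation (motion blur / jittered AA) inside
   the display callback; N full scene renders per displayed frame */
int i, N = 8;
glClear(GL_ACCUM_BUFFER_BIT);
for (i = 0; i < N; ++i) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScenePass(i);            /* scene at the i-th time offset */
    glAccum(GL_ACCUM, 1.0f / N); /* accumulate a 1/N-weighted copy */
}
glAccum(GL_RETURN, 1.0f);        /* write the averaged result back */

So even with hardware support you pay for N scene renders per frame, and with software emulation every glAccum call additionally walks the whole framebuffer on the CPU, which would explain the slide-show.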

Thanks

Orzech

My FX5800U certainly doesn’t support hw accum. Then again, it seems to have suddenly stopped supporting FP properly and is subject to mood swings when it comes to VBO performance. I really can’t think of anything to blame but the drivers.

It should be easy to support the accumulation buffer on any FP-buffer-capable card. Maybe NVidia's driver team hasn't thought of it yet, or it has low priority on the to-do list.

I think someone from ATI said Radeons have always supported the accum buffer in hw.

Float textures can take over the job of accumulating (well, the spec needs an update and current GPUs can't do it, but it will certainly become possible later on).


Some of the posts mentioned that Radeon DX9 cards support the accum buffer via floating-point buffers. But as far as I know, you cannot do alpha blending into a floating-point buffer right now, and accumulation is essentially an ADD operation into a target buffer. So I wonder: how do you implement the accum buffer in hardware?

Thanks

Imagine you have two float buffers, A and B, plus the buffer you just rendered to. When you call glAccum(), the driver renders a full-screen quad into B with a small shader that takes A and the rendered buffer as input textures and performs the desired operation. After that, A and B swap roles, so on the next glAccum() call A becomes the accumulation target.

In this case, it would even be possible to safely use one buffer as both texture and render target, because each pixel is never read again after being updated.
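
The shader itself is tiny. For the GL_ACCUM case it would be just one multiply-add per pixel; a sketch of the idea (the GLSL string and all names here are illustrative, not an actual driver implementation):

/* fragment shader for an emulated glAccum(GL_ACCUM, value) pass;
   the driver would bind B as the render target, bind A and the
   scene as the two textures, draw a full-screen quad with this
   shader, then swap A and B */
static const char *accumFrag =
    "uniform sampler2D accumTex;  /* buffer A (current accumulation) */\n"
    "uniform sampler2D sceneTex;  /* the just-rendered frame */\n"
    "uniform float value;         /* scale factor from glAccum() */\n"
    "void main() {\n"
    "    vec4 a = texture2D(accumTex, gl_TexCoord[0].st);\n"
    "    vec4 s = texture2D(sceneTex, gl_TexCoord[0].st);\n"
    "    gl_FragColor = a + s * value; /* accum = accum + scene * value */\n"
    "}\n";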
