Simple frame-by-frame antialiasing?

I’m wondering if there is a robust and relatively noninvasive way of averaging previous frames with the current frame, perhaps with a little jitter, so that you get a sort of pseudo-antialiasing effect while the camera is sitting still, and a motion-blur effect when it’s moving.

I think I read somewhere that Shadow of the Colossus used a similar approach.

I tried glAccum() as the obvious place to start, but as usual it kills any program I’ve ever tried to use it with, on any hardware I’ve had over the years.

Logically it seems like glAccum would be the ideal avenue for this sort of stunt (if I understand the process correctly).
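
Just to make the idea concrete, here is roughly the per-frame sequence I was imagining (only a sketch; the 0.9/0.1 weights and the hDC handle are placeholders, not code from a working program):

drawScene();                // render the current frame as usual
glAccum(GL_MULT,  0.9f);    // fade whatever is already in the accumulation buffer
glAccum(GL_ACCUM, 0.1f);    // add 10% of the freshly rendered frame to it
glAccum(GL_RETURN, 1.0f);   // write the running average back to the color buffer
SwapBuffers(hDC);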

The idea of reusing frames this way sounds like something you would expect to see in many games. Is there, or should there be, a drawback to such an approach, and how do people do it with OpenGL, provided it actually is something like an industry-standard approach to frame rendering?

My apologies for approaching this idea so lazily, but it’s really only a fleeting curiosity that I just happened to feel like trying out today.

-michael

Useful on consoles, but not on PC. This method depends on framerate, so it will vary on different hardware.

Anyway… you could try it with FBOs. Create two FBOs: the first contains the accumulation of the previous N frames, the second contains the new frame. When the app finishes rendering the new frame, just blend the first FBO on top of it (as a screen-aligned textured quad) with some blending weight. Render the result to the backbuffer and swap the FBOs.
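
A rough sketch of that per-frame flow, just to make the ping-pong explicit (the FBO/texture names, the helper functions and the 0.9 weight are all made up for illustration):

// 1. render the new frame into its own FBO
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sceneFBO);
drawScene();

// 2. blend the old accumulation on top of it with a constant weight
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 1.0f, 1.0f, 0.9f);           // 0.9 = how slowly old frames fade out
glBindTexture(GL_TEXTURE_2D, accumTexture);  // color attachment of accumFBO
drawFullscreenQuad();
glDisable(GL_BLEND);

// 3. show the result on the backbuffer and make it the new accumulation
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, sceneTexture);
drawFullscreenQuad();
SwapBuffers(hDC);
std::swap(sceneFBO, accumFBO);               // ping-pong for the next frame
std::swap(sceneTexture, accumTexture);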

http://www.flashbang.se/index.php?id=19
Just a little something I whipped up a year ago.
The main thing about using this method for motion blur or AA is that you have to control the rate at which older frames disappear.
If you do it wrong, there is always the potential that some rendered objects never really disappear in a timely fashion.
If I had unlimited graphics memory I would probably use something like 5-10 different FBOs, sequentially switch render targets for every frame rendered, and finally combine them all in the back buffer using some kind of time-based Gaussian sampling.

But this method works too.
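
For what it’s worth, here is one way to read that ring-of-FBOs idea (N, the weights and the helper names below are just illustrative, not from my demo): keep the last N frames around and combine them with fixed weights instead of an exponential fade, so old frames are guaranteed to drop out after exactly N frames.

const int N = 5;
static GLuint frameFBO[N], frameTex[N];      // created once at startup
static int head = 0;

// render the newest frame into the next slot of the ring
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameFBO[head]);
drawScene();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// combine the stored frames into the backbuffer, newest first, weights summing to 1
static const float weight[N] = { 0.35f, 0.25f, 0.20f, 0.12f, 0.08f };  // roughly Gaussian
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);           // additive, scaled by the weight
for (int i = 0; i < N; ++i)
{
    int slot = (head - i + N) % N;
    glColor4f(1.0f, 1.0f, 1.0f, weight[i]);
    glBindTexture(GL_TEXTURE_2D, frameTex[slot]);
    drawFullscreenQuad();
}
glDisable(GL_BLEND);
head = (head + 1) % N;
SwapBuffers(hDC);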

Yeah, you might want to assume a constant high framerate (it is a feature after all – though SOTC can be anything but)

Can anyone confirm whether glAccum() actually works (in hardware) on any cards/drivers?

Could there be anything I’m doing wrong? Maybe in setting up the rendering context? Or conflicting states elsewhere?

I seem to remember hearing that accumulation is done in software, and that it is likely to be layered on in Longs Peak.

Accumulation is not done in software. I was doing it just recently several times per frame with very high framerates (granted, I removed support for it because it still has a cost that I didn’t want to pay =). Either way, it’s been supported by NVIDIA since at least the GeForce 6 (probably the GeForce FX too) and supported by ATI since the Radeon 9xxx series.

Kevin B

hello michagl, how’s the progressive meshes going?

yes, accumulation is in hardware with the GeForce FX

another method would be to do the averaging in a shader

or draw the new scene texture over the current scene, e.g. glColor4f(1,1,1,0.95); (though from memory you might see slight ghost trails, especially with dark colors). It’s quite a cheesy effect anyway, and not proper motion blur.
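
a hedged sketch of that last (cheesy) option, roughly as I mean it; the texture name and the fullscreen-quad helper are made up, and 0.95 is just the value quoted above:

// don't clear the color buffer; draw the new frame over the old contents
glBindTexture(GL_TEXTURE_2D, newSceneTexture);   // the freshly rendered frame
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 1.0f, 1.0f, 0.95f);              // 5% of the previous frames leaks through
drawFullscreenQuad();                            // this is where the ghost trails come from
glDisable(GL_BLEND);
SwapBuffers(hDC);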

Can anyone tell me why calling glAccum() would immediately kill a win app?

OFFTOPIC:

Originally posted by knackered:
hello michagl, how’s the progressive meshes going?
I kinda tucked it away and moved on to more relevant matters. It was a sort of vacation project from the beginning, but when the time is right I will pick it back up and integrate it into the main line of my work. Since then I’ve had the impulse on occasion to revisit it, but unfortunately I lost a hard drive and had forgotten that I hadn’t backed up the project’s mosaic database file anywhere else. The database was built by more or less random discovery of new mosaics. That was an interesting way of exploring the space, but before I pick it up again, the first task will be to work out a means of regenerating and proving a complete database (the original was still incomplete).

I suppose I feel as if it was basically ahead of its time… or perhaps rather the times are running behind. It’s not necessarily even a rendering algorithm so much as a means of procedurally generating geometry, perhaps foremost suited to SSE-type routines. I will probably need to partner with an experienced SSE programmer before that work can come into its own.

I can’t see why glAccum should kill an app, but maybe check that the pixel format has a non-zero number of accumulation bits?

If I add glAccum() to my code, at runtime I get an [OK] dialog box that just says “Runtime error!”, “abnormal program termination”.

I tried moving the glAccum() calls to the main executable. That gives me a second or two before the box pops up. It’s hard to tell whether the calls are doing anything, though, because the framerate drops about twenty-fold, so a second or two isn’t all that many frames.

I guess this is a GeForce 6600GT card I’m still using.

Did you actually allocate an accumulation buffer when you created your rendering context, like Bruce said?
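
On Windows that would mean giving the pixel format some accumulation bits when you create the context; something along these lines (a rough, untested sketch):

PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
pfd.cAccumBits = 64;   // without this, glAccum has no buffer to work with
int format = ChoosePixelFormat(hDC, &pfd);
SetPixelFormat(hDC, format, &pfd);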