Motion Blur

Wow, what an interesting topic. Unless I have been in a coma lately, there is STILL no graphics card that supports the accumulation buffer for implementing motion blur. So does anyone have another solution? I read some past posts about it, and people were thinking the Voodoo line would offer it, but 3dfx crashed and since then NVIDIA hasn't done much with it. Does anyone have any links to some sample code? Thanks.

ATI’s Radeons have supported the accumulation buffer in hardware since the R300 core (Radeon 9500/9700), and I’ve already tried using it for motion blur and found it fast enough for use in a game. I don’t know about the GeForce FX series, but I’ve heard that it also supports it in hardware with the newest drivers.
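
For reference, the classic accumulation-buffer pass looks roughly like this (a minimal sketch, not anyone's actual engine code; drawScene(t) is a hypothetical helper that renders the scene at time t, and the context must have been created with accumulation bits):

    #include <GL/gl.h>

    void renderMotionBlurred(float frameTime, float exposure, int samples)
    {
        glClear(GL_ACCUM_BUFFER_BIT);
        for (int i = 0; i < samples; ++i)
        {
            /* spread the sub-frames over the exposure interval */
            float t = frameTime + exposure * (float)i / (float)samples;
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene(t);                       /* hypothetical: render one sub-frame */
            glAccum(GL_ACCUM, 1.0f / samples);  /* add it to the accumulation buffer with equal weight */
        }
        glAccum(GL_RETURN, 1.0f);               /* write the averaged result back to the color buffer */
    }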

[This message has been edited by PanzerSchreck (edited 10-27-2003).]

You can also use ghosting to get a fake motion blur effect (it does look OK if done properly).

You don’t have to use an accumulation buffer to do motion blur. Humus has a nice and simple demo on his site (works on any GeForce/Radeon):
http://esprit.campus.luth.se/~humus/3D/index.php?page=OpenGL&start=8

I think that’s the technique used in most games. The only big drawback IMO comes from the user interface / HUD interaction (it can be a bit tricky).

Y.

I think that’s the technique used in most games. The only big drawback IMO comes from the user interface / HUD interaction (it can be a bit tricky).

I’d say that the biggest drawback is that it looks really terrible. Either that, or that it isn’t really doing any motion blur.

The “use the last frame to blend with” technique doesn’t do anything real, because real motion blur wouldn’t use the last frame. Actual motion blur takes a particular time frame and samples a number of different times within that time frame to produce a single image. Since the next time frame doesn’t overlap with this one, it is incorrect (and looks spectacularly bad) to use sample data from the previous time frame in the current one.

Besides, instead of removing strobing, which I think is the primary purpose of motion blur, it typically worsens it.

NVIDIA had a paper about motion blur using vertex and fragment programs. They render a velocity buffer and do the blurring with a fragment program and multiple fullscreen quads.
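
The rough idea (my own CPU-side sketch, not the code from the NVIDIA paper) is that each pixel stores how far it moved this frame, and the blur pass averages samples taken along that vector:

    struct Vec2 { float x, y; };

    /* Average "taps" samples along each pixel's screen-space velocity.
       On the GPU the inner loop would live in a fragment program reading a
       velocity texture; this is just the same math done on CPU buffers. */
    void velocityBlur(const float *srcRGB, const struct Vec2 *velocity,
                      float *dstRGB, int w, int h, int taps)
    {
        for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            struct Vec2 v = velocity[y * w + x];    /* pixels moved since last frame */
            float r = 0.0f, g = 0.0f, b = 0.0f;
            for (int i = 0; i < taps; ++i)
            {
                float t  = (taps > 1) ? (float)i / (float)(taps - 1) : 0.0f;
                int   sx = x - (int)(v.x * t);      /* step back along the motion vector */
                int   sy = y - (int)(v.y * t);
                if (sx < 0) sx = 0; if (sx >= w) sx = w - 1;
                if (sy < 0) sy = 0; if (sy >= h) sy = h - 1;
                const float *s = srcRGB + (sy * w + sx) * 3;
                r += s[0]; g += s[1]; b += s[2];
            }
            dstRGB[(y * w + x) * 3 + 0] = r / taps;
            dstRGB[(y * w + x) * 3 + 1] = g / taps;
            dstRGB[(y * w + x) * 3 + 2] = b / taps;
        }
    }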

-Ilkka

Just now I tried to do some motion blurring using the accumulation buffer (I was just curious). Indeed, it is supported in hardware (or perhaps emulated via fragment programs). It also looks fine. The only problem is that it is too slow. When I do 5 passes at 1024x768 on an FX5600 I get about 25 FPS, which isn’t so bad actually, but my image starts jumping and it’s ugly. With 3 passes it does look OK. It was the first time I used the accumulation buffer, though, so maybe I did something wrong…

And to tell the truth, I don’t understand why the ghosting approach is different from the accumulation buffer approach. In both cases I add shifted, intensity-blended images together…

Yeah, I admit the ghosting technique is not the “nicest” one, but it’s extremely fast (using the non-clearing buffer trick there’s almost no loss of performance, no need for any additional pass), and it just requires render-to-texture, which is more widely supported than the accumulation buffer.
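
For what it's worth, here's one way such a ghosting pass can look (a rough sketch, not the Humus demo or the exact non-clearing variant; prevFrameTex, texWidth, texHeight, drawScene() and drawFullscreenQuad() are assumed helpers/state, and the texture is assumed to be allocated already):

    #include <GL/gl.h>

    void renderWithGhosting(float trailStrength)    /* e.g. 0.6f */
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene();                                /* current frame, rendered normally */

        /* blend the previous frame on top with a constant opacity */
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, prevFrameTex);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDisable(GL_DEPTH_TEST);
        glColor4f(1.0f, 1.0f, 1.0f, trailStrength);
        drawFullscreenQuad();
        glEnable(GL_DEPTH_TEST);
        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_2D);

        /* grab the blended result so it becomes next frame's trail */
        glBindTexture(GL_TEXTURE_2D, prevFrameTex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, texWidth, texHeight);
    }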

Y.

Here’s a quick example I just threw together. Painless, too

www.cs.virginia.edu/~csz9v/random/overexposed.jpg

[This message has been edited by CatAtWork (edited 10-27-2003).]

The fundamental difference is whether you’re blending together a set of successive frames (wrong) or a set of sub-frames between the current and the previous frame (right).

If you look at CatAtWork’s picture, the rotors create a smooth, blurred shape, which produces an illusion of smooth animation. Without motion blur the rotors wouldn’t overlap between successive frames, and the brain couldn’t combine the images into a smooth animation, which produces the strobing effect. With ghosting your image would just have several non-overlapping rotors, and the animation wouldn’t look any smoother.

Of course you could use the ghosting technique by only showing every nth frame to the user. That’d be more or less the same technique, only with the accumulation buffer replaced by the more widely supported render-to-texture.

-Ilkka

Originally posted by Korval:
I’d say that the biggest drawback is that it looks really terrible

I use it in my gravity software and I think visually it adds a great deal. Obviously it’s not ‘correct’ motion blur, but as long as the objects don’t move too fast it can look pretty good IMO.

Edit: Some screenshots, http://www.mars3d.com/Gallery/Gravity3DMB1.JPG http://www.mars3d.com/Gallery/Gravity3DMB2.JPG http://www.mars3d.com/Gallery/Gravity3DMB3.JPG

[This message has been edited by Adrian (edited 10-29-2003).]

If the number of passes you want to render equals the number of samples your card offers in multisample mode, you probably should use this instead of render-to-texture or accumulation buffer. And don’t forget that you can perform supersampling antialiasing for free if you blend several frames together anyway!

btw, the ghosting effect is also called “motion trail” as opposed to “motion blur”.

JustHanging,

There is no ‘fundamental difference’ as you say. Certainly not enough to say that one method is ‘wrong’ and the other is ‘right’. Motion blur, in general, is done over a set amount of time: the time it takes an image to fade from the virtual retina or CCD.

For one thing, who says that someone would only be interested in blurring motion between frames? Especially if the time between frames is variable.

For a camera with a slow, fixed 24 FPS framerate, it makes sense to blur by accumulating multiple images together and then present them at that fixed rate.

However, real-time computer graphics rarely present images at a fixed rate, and are also often much faster than 24 FPS.

If a computer is rendering at an average of 60 FPS, there is barely any time to notice any motion blur with only 16 milliseconds of motion. For that reason, if you want motion blur similar to film, you will need to accumulate motion over 1/24th of a second instead of 1/60th. You can do that by combining the current frame with the last frame in a way that causes contributions from more than 1/24th of a second ago to fade completely.

The problem is one of quality, not correctness. If you are only getting 60 FPS, then that means you only have about 3 samples for the blur. However, if you could get a much higher FPS, say 180 FPS, the blur becomes much better because if you are going for a cinematic 1/24th second blur then you are averaging 7.5 samples of blur each frame.

My (mis)understanding is like this:

If we try to make an analogy between temporal and spatial techniques, then:
Motion blur is like supersampling (it increases quality by subdividing the current sample)
Ghosting is like… an image blur filter (it only blends the current sample with its neighbours, with weights often depending on sample distance (here: age), as in the Humus demo)

With an increasing number of samples:
motion blur always gets better quality
ghosting may get worse quality, as it may produce an unnaturally long ‘afterimage’ effect

How ‘real’ the effect is:
motion blur simulates how cameras work
ghosting simulates… ghosting on old CRTs (anyone played Asteroids on the original hardware?)

My (mis)understanding is like this:

No, that is absolutely the correct analogy. Just like blurring looks bad next to real antialiasing, ghosting looks bad next to real temporal antialiasing.

Originally posted by Korval:
No, that is absolutely the correct analogy. Just like blurring looks bad next to real antialiasing, ghosting looks bad next to real temporal antialiasing.

The difference between blurring and “real antialiasing” is often just the shape and width of the kernel used to combine pixels or sub-pixels. Blurring is often a simple box filter, whereas antialiasing typically uses a Gaussian. Ghosting, on the other hand, is often an exponential decay. But that’s nitpicking, I guess, and still a vast simplification.

The main point is that while temporal antialiasing across multiple frames is not ideal, it is not necessarily wrong, just as using a Gaussian filter across multiple adjacent pixels is not “wrong,” but is not ideal.

Ideally, you want sub-pixels or some faster approximation thereof to avoid “softening” the image. What is “wrong” IMO is temporal antialiasing across multiple frames with either a box filter (linear average) or exponential decay.

However, with a reasonably high frame rate, if one were to take, say, three or five frames centered on the present time-step (yes, this means rendering into the future, perhaps predictively) and blend these frames with Gaussian coefficients, you’d get a much better result than the demos above without requiring sub-frames. Again, sub-frames within the current time-step would be more ideal. But we live with what we can afford.
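
As a rough illustration of those Gaussian coefficients (my own sketch, not anyone's engine code), blending a five-frame window centred on the present might look like this:

    #include <math.h>

    /* Blend five whole frames (two past, the present, two predicted future)
       with normalised Gaussian weights, sigma = 1 frame. Each frame is a flat
       array of "pixels" colour values. */
    void blendFramesGaussian(const float *frames[5], float *dst, int pixels)
    {
        double w[5], sum = 0.0;
        for (int k = 0; k < 5; ++k)
        {
            double d = (double)k - 2.0;             /* distance from the present frame */
            w[k] = exp(-(d * d) / 2.0);             /* Gaussian with sigma = 1 */
            sum += w[k];
        }
        for (int k = 0; k < 5; ++k)
            w[k] /= sum;                            /* weights now add up to 1 */

        for (int p = 0; p < pixels; ++p)
        {
            double c = 0.0;
            for (int k = 0; k < 5; ++k)
                c += w[k] * frames[k][p];
            dst[p] = (float)c;
        }
    }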

Avi

[This message has been edited by Cyranose (edited 10-29-2003).]

Actually, thinking about it a little more, I do see a mathematical difference between what Korval is calling ‘ghosting’ vs. ‘blurring’.

With ghosting, the contribution of light to a scene becomes less and less over time, like the fading pixels on a slow CRT.

For blurring, the contribution of light onto a medium like film is equal over every moment the shutter is open.

Using previous frames is perfectly valid for producing a blur, but you would have to do it in such a way that the oldest frame contributes as much as the latest frame. However, the only method I have seen for using previous frames always does so in a way that causes older frames to contribute less and less.

The fundamental difference between ‘blurring’ and ‘ghosting’ is whether each temporal sample contributes equally, or increasingly less with time, not whether previous frames are used. It just so happens that it is easier to ghost with previous frames because the way a new frame is added to the old one causes old samples to contribute less.
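
To put numbers on that difference (my own illustration, not from the thread): with the usual ghosting blend result = (1-a)*frame + a*result, a frame that is k frames old ends up with weight (1-a)*a^k, while a true blur over n sub-frames gives every sample the same weight 1/n:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double a = 0.6;   /* trail strength of the ghosting blend */
        const int    n = 5;     /* sub-frame count of the equal-weight blur */
        for (int k = 0; k < n; ++k)
            printf("age %d frames: ghosting weight %.3f, blur weight %.3f\n",
                   k, (1.0 - a) * pow(a, (double)k), 1.0 / n);
        return 0;
    }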

I think our real eyeballs produce ‘ghosting’ and not ‘blurring’. We do not expose film in our eyes and we do not have shutters. We continuously expose light sensitive cells that have a fairly slow response time.

I’m not sure how CCDs work, but I bet they have been engineered to emulate film cameras.

Originally posted by Nakoruru:
I think our real eyeballs produce ‘ghosting’ and not ‘blurring’. We do not expose film in our eyes and we do not have shutters. We continuously expose light sensitive cells that have a fairly slow response time.

Except that your eyes are still in the equation, unless you have some direct-brain rendering technology.

So all this work is to compensate for the aliasing caused by displaying a static image every 1/60th second instead of a continuously changing one. The point is not to simulate how the eye works, but to provide images that fool the eye into believing the series of still images are in fact moving.

Avi

[This message has been edited by Cyranose (edited 10-29-2003).]

The difference between blurring and “real antialiasing” is often just the shape and width of the kernel used to combine pixels or sub-pixels.

No, the difference is precisely what MZ said.

When you antialias, you are taking a number of samples and combining them in some fashion. When you blur, you’re just taking neighboring values and mixing them together. To antialias, you need to be taking additional samples; you are applying new information to the image. When you blur something, you aren’t getting new information; you’re just playing with the old information.

The same goes for temporal antialiasing (aka, motion blur). If you aren’t rendering the frame several times in a single frame time, and thus introducing new information, you aren’t doing real motion blur.

The main point is that while temporal antialiasing across multiple frames is not ideal, it is not necessarily wrong

But it is: not only is it wrong, it isn’t antialiasing at all. It is no more temporal antialiasing than applying a Gaussian filter to an image is spatial antialiasing. If you aren’t adding more samples, you aren’t antialiasing.

Think of it like this. In spatial antialiasing, the color of a pixel is determined only by the samples of the scene that occur within that pixel. Samples from outside of that pixel’s volume do not intrude upon this; if they do, you’re doing the wrong thing.

The same goes for temporal antialiasing. Only samples from the given time-frame matter. If you have a video camera that works at 60fps, for any particular frame being recorded, the only samples that are involved are those taken within that frame’s 16.6 millisecond window. If samples from outside of that time range get involved in this image, then the camera is considered to be damaged.

What is “wrong” IMO is temporal antialiasing across multiple frames with either a box filter (linear average) or exponential decay.

Technically, the box filter is the right way to go; no one sample is any more important than another. The problem ultimately comes down to not rendering with a high dynamic range (HDR). Without an HDR-based rendering system, temporal antialiasing doesn’t look correct; you really need the enhanced precision of an HDR system to make it look accurate. Of course, ghosting always looks like crap, regardless of either the filter or the use of HDR.

Korval, it’s really not worth arguing about this, but some of your statements just don’t fit with what I’ve learned about signal processing. It doesn’t mean what I’ve learned is 100% correct, but your position seems exceedingly narrow.

“Antialiasing” is the removal of aliasing artifacts by a variety of means.

Multisampling, supersampling, and all sorts of methods for adding and combining data are part of that, but not the only part.

Methods that do not employ additional samples are not ideal, as they make assumptions about the spatial or temporal continuity of the color data. But they’re not unreasonable.

In the case of convolving an image, the idea is that the samples you have are treated as discrete point samples of a presumed continuous signal with some assumed base frequency and phase. That signal is “inferred” from the samples you have and “improved” by a variety of means, with the knowledge that those original assumptions will shape the outcome as much as the color data.

Often, these techniques lose high-frequency data to cover up aliasing artifacts. But that doesn’t make them wrong, just less desirable if you have better AA hardware, for example.

Again, it may not be as good as adding more samples, but there are also lots of “wrong” ways to add more samples too.

On the final point, the box filter over multiple frames IS wrong. Box filters are appropriate when combining intra-frame samples of presumably equal weight. But if you’re forced to blend multiple actual frames (not sub-frames) over time, weighting them towards the present only makes sense.

Avi