View Full Version : Motion Blur



cutting_crew
10-27-2003, 06:43 AM
Wow, what an interesting topic. Unless I have been in a coma lately, there is STILL no graphics card that supports the accumulation buffer in hardware to implement motion blur. So does anyone have another solution? I read some past posts about it, and people were thinking the Voodoo line would offer it, but 3dfx crashed and since then NVIDIA hasn't done much with it. Anyone have any links to some sample code? Thanks.

PanzerSchreck
10-27-2003, 07:10 AM
ATI's Radeon has supported the accumulation buffer in hardware since the R300 core (Radeon 9500/9700), and I've already tried using it for motion blur and found it fast enough for use in a game. Don't know about the GeForce FX series, but I've heard that it also supports it in hardware with the newest drivers.

[This message has been edited by PanzerSchreck (edited 10-27-2003).]
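For reference, a minimal sketch of the accumulation-buffer approach discussed above, in classic GL 1.x style. It assumes a GLUT context created with an accumulation buffer (GLUT_ACCUM) and a drawScene(t) callback; both are placeholders, not code from any of the posts.

    #include <GL/glut.h>

    void drawScene(float t);  /* assumed: renders the scene at time t */

    /* Average 'passes' renderings spread across the shutter interval
       [frameTime, frameTime + shutter]. */
    void renderMotionBlurred(float frameTime, float shutter, int passes)
    {
        for (int i = 0; i < passes; ++i) {
            float s = (passes > 1) ? (float)i / (float)(passes - 1) : 0.0f;
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene(frameTime + shutter * s);
            /* First pass loads the accumulation buffer, later passes add. */
            glAccum(i == 0 ? GL_LOAD : GL_ACCUM, 1.0f / (float)passes);
        }
        glAccum(GL_RETURN, 1.0f);  /* write the averaged image back */
        glutSwapBuffers();
    }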

DopeFish
10-27-2003, 07:22 AM
You can also use ghosting to get a fake motion blur effect (it looks OK if done properly).

Ysaneya
10-27-2003, 07:35 AM
You don't have to use an accumulation buffer to do motion blur. Humus has a nice and simple demo on his site (works on any GeForce/Radeon):
http://esprit.campus.luth.se/~humus/3D/index.php?page=OpenGL&start=8

I think that's the technique used in most games. The only big drawback IMO comes from the user interface or HUD interaction (it can be a bit tricky).

Y.
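For illustration, a minimal sketch of that ghosting idea using copy-to-texture (this is not Humus' actual code; prevFrame is assumed to be a pre-created, window-sized RGB texture, power-of-two for simplicity, and drawScene is a placeholder):

    #include <GL/gl.h>

    extern GLuint prevFrame;     /* assumed: RGB texture, window-sized */
    extern int width, height;    /* assumed: window (= texture) size   */
    void drawScene(void);        /* assumed: renders the current scene */

    void renderGhosted(float trailAmount)   /* e.g. 0.6f .. 0.9f */
    {
        drawScene();

        /* Blend last frame's image over the new one with constant alpha. */
        glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
        glOrtho(0, 1, 0, 1, -1, 1);
        glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity();

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, prevFrame);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDisable(GL_DEPTH_TEST);
        glColor4f(1, 1, 1, trailAmount);
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(0, 0);
            glTexCoord2f(1, 0); glVertex2f(1, 0);
            glTexCoord2f(1, 1); glVertex2f(1, 1);
            glTexCoord2f(0, 1); glVertex2f(0, 1);
        glEnd();

        /* Grab the blended result as the "previous frame" for next time. */
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

        glEnable(GL_DEPTH_TEST); glDisable(GL_BLEND); glDisable(GL_TEXTURE_2D);
        glMatrixMode(GL_PROJECTION); glPopMatrix();
        glMatrixMode(GL_MODELVIEW); glPopMatrix();
    }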

Korval
10-27-2003, 08:20 AM
I think that's the technique used in most games. The only big drawback IMO comes from the user interface or HUD interaction (it can be a bit tricky).

I'd say that the biggest drawback is that it looks really terrible. Either that, or that it isn't really doing any motion blur.

The "use the last frame to blend with" technique doesn't do anything real because real motion blur wouldn't use the last frame. Actual motion blur takes a particular time frame and samples a number of different times in that time frame to produce a single image. Since the next time frame doesn't overlap with this one, it is incorrect (and look spectacularly bad) to use sample data from the previous time frame in the current one.

JustHanging
10-27-2003, 12:25 PM
Besides, instead of removing strobing, which I think is the primary purpose of motion blur, it typically worsens it.

NVIDIA had a paper about motion blur using vertex and fragment programs. They render a velocity buffer and do the blurring with a fragment program and multiple full-screen quads.

-Ilkka
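Roughly, a single-pass variant of that idea looks like the following hypothetical restatement, embedded here as a GLSL string in C (the actual paper predates GLSL, used low-level fragment programs, and blurred with multiple quads rather than a loop; sceneTex, velocityTex, and the sample count are invented for illustration):

    /* Scene color is assumed to be in sceneTex, per-pixel screen-space
       velocities in velocityTex; a full-screen quad runs this shader,
       averaging samples along each pixel's velocity vector. */
    const char *velocityBlurFS =
        "uniform sampler2D sceneTex;                        \n"
        "uniform sampler2D velocityTex;                     \n"
        "const int SAMPLES = 8;                             \n"
        "void main() {                                      \n"
        "    vec2 uv  = gl_TexCoord[0].xy;                  \n"
        "    vec2 vel = texture2D(velocityTex, uv).xy;      \n"
        "    vec4 sum = vec4(0.0);                          \n"
        "    for (int i = 0; i < SAMPLES; ++i) {            \n"
        "        float t = float(i) / float(SAMPLES - 1);   \n"
        "        sum += texture2D(sceneTex, uv - vel * t);  \n"
        "    }                                              \n"
        "    gl_FragColor = sum / float(SAMPLES);           \n"
        "}                                                  \n";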

Zengar
10-27-2003, 01:06 PM
Just now I tried to do some motion blurring using the accumulation buffer (I was just curious). Indeed, it is supported in hardware (or emulated via fragment programs, I guess). It also looks fine. The only problem is that it is too slow. When I do 5 passes at 1024x768 on an FX 5600 I get about 25 FPS, which isn't so bad actually, but the image starts jumping and it's ugly. With 3 passes it looks OK. It was the first time I'd used the accumulation buffer, though, so maybe I did something wrong...

And to tell the truth, I don't understand how the ghosting approach differs from the accumulation buffer approach. In both cases I add shifted, intensity-blended images together...

Ysaneya
10-27-2003, 02:32 PM
Yeah, I admit the ghosting technique is not the "nicest" one, but it's extremely fast (using the non-clearing buffer trick there's almost no loss of performance, since no additional pass is needed), and it just requires render-to-texture, which is more widely supported than the accumulation buffer.

Y.

CatAtWork
10-27-2003, 07:20 PM
Here's a quick example I just threw together. Painless, too :)

www.cs.virginia.edu/~csz9v/random/overexposed.jpg (http://www.cs.virginia.edu/~csz9v/random/overexposed.jpg)

[This message has been edited by CatAtWork (edited 10-27-2003).]

JustHanging
10-28-2003, 01:02 AM
The fundamental difference is whether you're blending together a set of successive frames (wrong) or a set of sub-frames between the current and the previous frame (right).

If you look at CatAtWork's picture, the rotors create a smooth, blurred shape, which produces an illusion of smooth animation. Without motion blur the rotors wouldn't overlap between successive frames, and the brain couldn't combine the images into a smooth animation, which produces the strobing effect. With ghosting your image would just have several non-overlapping rotors; the animation wouldn't look any smoother.

Of course you could use the ghosting technique by only showing every n-th frame to the user. That'd be more or less the same technique, only with the accumulation buffer replaced by the more widely supported render-to-texture.

-Ilkka

Adrian
10-28-2003, 09:46 AM
Originally posted by Korval:
I'd say that the biggest drawback is that it looks really terrible

I use it in my gravity software and I think visually it adds a great deal. Obviously it's not 'correct' motion blur, but as long as the objects don't move too fast it can look pretty good, IMO.

Edit: Some screenshots, http://www.mars3d.com/Gallery/Gravity3DMB1.JPG http://www.mars3d.com/Gallery/Gravity3DMB2.JPG http://www.mars3d.com/Gallery/Gravity3DMB3.JPG


[This message has been edited by Adrian (edited 10-29-2003).]

Xmas
10-29-2003, 01:15 AM
If the number of passes you want to render equals the number of samples your card offers in multisample mode, you probably should use this instead of render-to-texture or accumulation buffer. And don't forget that you can perform supersampling antialiasing for free if you blend several frames together anyway!

btw, the ghosting effect is also called "motion trail" as opposed to "motion blur".

Nakoruru
10-29-2003, 07:41 AM
JustHanging,

There is no 'fundamental difference' as you say. Certainly not enough to call one method 'wrong' and the other 'right'. Motion blur, in general, is done over a set amount of time: the time it takes an image to fade from the virtual retina or CCD.

For one thing, who says that someone would only be interested in blurring motion between frames? Especially if the time between frames is variable.

For a camera with a slow, fixed 24 FPS framerate, it makes sense to blur by accumulating multiple images together and then presenting them at that fixed framerate.

However, real-time computer graphics rarely present images at a fixed rate, and are also often much, much faster than 24 FPS.

If a computer is rendering at an average of 60 FPS, there is barely any time to notice any motion blur, with only 16 milliseconds of motion. For that reason, if you want motion blur similar to film, you will need to accumulate motion over 1/24th of a second instead of 1/60th. You can do that by combining the current frame with the last frame in a way that causes contributions from more than 1/24th of a second ago to fade out.

The problem is one of quality, not correctness. If you are only getting 60 FPS, that means you only have about 3 samples for the blur. However, if you could get a much higher FPS, say 180 FPS, the blur becomes much better, because if you are going for a cinematic 1/24th-second blur you are then averaging 7.5 samples of blur per frame.
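As a back-of-the-envelope sketch of that point (names invented): with a feedback blend out = a*current + (1-a)*previous, a sample that is n frames old carries weight a*(1-a)^n, i.e. it decays exponentially, and picking a from the frame time keeps the trail length stable even when the frame rate varies.

    #include <math.h>

    /* Choose the blend factor so that old samples fade with time
       constant tau (e.g. tau = 1.0f/24 for a film-like trail),
       independent of the current frame rate. */
    float trailBlendFactor(float dt, float tau)
    {
        /* Per frame, old content is scaled by (1-a) = exp(-dt/tau),
           so after tau seconds a sample's weight has fallen to 1/e. */
        return 1.0f - expf(-dt / tau);
    }

    /* Example: at 60 FPS targeting a 1/24 s trail,
       trailBlendFactor(1.0f/60.0f, 1.0f/24.0f) is about 0.33. */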

MZ
10-29-2003, 08:59 AM
My (mis)understanding is like this:

If we try to draw an analogy between temporal and spatial techniques, then:
Motion blur is like supersampling (it increases quality by subdividing the current sample).
Ghosting is like... an image blur filter (it only blends the current sample with its neighbours, with weights often depending on sample distance (here: age), as in the Humus demo).

With an increasing number of samples:
motion blur always gets better quality;
ghosting may get worse quality, as it can produce an unnaturally long 'afterimage' effect.

How 'real' the effect is:
motion blur simulates how cameras work;
ghosting simulates... ghosting on old CRTs (anyone played Asteroids on the original hardware? :) )

Korval
10-29-2003, 09:58 AM
My (mis)understanding is like this:

No, that is absolutely the correct analogy. Just as blurring looks bad next to real antialiasing, ghosting looks bad next to real temporal antialiasing.

Nakoruru
10-29-2003, 11:51 AM
Actually, thinking about it a little more, I do see a mathematical difference between what Korval is calling 'ghosting' and 'blurring'.

With ghosting, the contribution of light to a scene becomes less and less over time, like the fading pixels on a slow CRT.

For blurring, the contribution of light onto a medium like film is equal over every moment the shutter is open.

Using previous frames is perfectly valid to produce a blur, but you would have to do it in such a way that the oldest frame contributes as much as the latest frame. However, the only method I have seen for using previous frames always does so in a way that causes older frames to contribute less and less.

The fundamental difference between 'blurring' and 'ghosting' is whether each temporal sample contributes equally, or increasingly less with time, not whether previous frames are used. It just so happens that it is easier to ghost with previous frames because the way a new frame is added to the old one causes old samples to contribute less.

I think our real eyeballs produce 'ghosting' and not 'blurring'. We do not expose film in our eyes and we do not have shutters. We continuously expose light sensitive cells that have a fairly slow response time.

I'm not sure how CCDs work, but I bet they have been engineered to emulate film cameras.
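To make that ghosting-versus-blurring distinction concrete, a toy sketch on a single float "pixel" (both functions are invented for illustration):

    /* Box blur: the last n temporal samples all get equal weight 1/n. */
    float boxBlur(const float *samples, int n)
    {
        float sum = 0.0f;
        for (int i = 0; i < n; ++i)
            sum += samples[i];
        return sum / (float)n;
    }

    /* Ghosting: a feedback blend. After repeated application, the sample
       from k frames ago carries weight a*(1-a)^k, so an old frame never
       contributes as much as the newest one. */
    float ghost(float previous, float current, float a)
    {
        return a * current + (1.0f - a) * previous;
    }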

Cyranose
10-29-2003, 11:51 AM
Originally posted by Korval:
No, that is absolutely the correct analogy. Just as blurring looks bad next to real antialiasing, ghosting looks bad next to real temporal antialiasing.

The difference between blurring and "real antialiasing" is often just the shape and width of the kernel used to combine pixels or sub-pixels. Blurring is often a simple box filter, whereas antialiasing typically uses a Gaussian. Ghosting, OTOH, is often an exponential decay. But that's nitpicking, I guess, and still a vast simplification.

The main point is that while temporal antialiasing across multiple frames is not ideal, it is not necessarily wrong, just as using a Gaussian filter across multiple adjacent pixels is not "wrong," but is not ideal.

Ideally, you want sub-pixels or some faster approximation thereof to avoid "softening" the image. What is "wrong" IMO is temporal antialiasing across multiple frames with either a box filter (linear average) or exponential decay.

However, with a reasonably high frame rate, if one were to take, say, three or five frames centered on the present time-step (yes, this means rendering into the future, perhaps predictively) and blend those frames with Gaussian coefficients, you'd get a much better result than the demos above without requiring sub-frames. Again, sub-frames within the current time-step would be more ideal. But we live with what we can afford.

Avi

[This message has been edited by Cyranose (edited 10-29-2003).]
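A sketch of the weights that suggestion implies (the frame count and sigma are illustrative assumptions, not taken from any demo):

    #include <math.h>

    /* Fill w[0..frames-1] with normalized Gaussian weights centered on
       the middle frame (the "present"); frames should be odd, e.g. 5,
       and sigma is measured in frames, e.g. 1.0f. */
    void gaussianFrameWeights(float *w, int frames, float sigma)
    {
        int center = frames / 2;
        float sum = 0.0f;
        for (int i = 0; i < frames; ++i) {
            float d = (float)(i - center);
            w[i] = expf(-(d * d) / (2.0f * sigma * sigma));
            sum += w[i];
        }
        for (int i = 0; i < frames; ++i)
            w[i] /= sum;   /* normalize so the weights sum to 1 */
    }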

Cyranose
10-29-2003, 12:05 PM
Originally posted by Nakoruru:
I think our real eyeballs produce 'ghosting' and not 'blurring'. We do not expose film in our eyes and we do not have shutters. We continuously expose light sensitive cells that have a fairly slow response time.

Except that your eyes are still in the equation, unless you have some direct-brain rendering technology. :)

So all this work is to compensate for the aliasing caused by displaying a static image every 1/60th second instead of a continuously changing one. The point is not to simulate how the eye works, but to provide images that fool the eye into believing the series of still images are in fact moving.

Avi


[This message has been edited by Cyranose (edited 10-29-2003).]

Korval
10-29-2003, 01:55 PM
The difference between blurring and "real antialiasing" is often just the shape and width of the kernel used to combine pixels or sub-pixels.

No, the difference is precisely what MZ said.

When you antialias, you are taking a number of samples and combining them in some fashion. When you blur, you're just taking neighboring values and mixing them together. To antialias, you need to be taking additional samples; you are applying new information to the image. When you blur something, you aren't getting new information; you're just playing with the old information.

The same goes for temporal antialiasing (aka, motion blur). If you aren't rendering the frame several times in a single frame time, and thus introducing new information, you aren't doing real motion blur.


The main point is that while temporal antialiasing across multiple frames is not ideal, it is not necessarily wrong

But it is: not only wrong, but not even antialiasing. It is no more temporal antialiasing than applying a Gaussian filter to an image is spatially antialiasing it. If you aren't adding more samples, you aren't antialiasing.

Think of it like this. In spatial antialiasing, the color of a pixel is determined only by the samples of the scene that occur within that pixel. Samples from outside of that pixel's volume do not intrude upon this; if they do, you're doing the wrong thing.

The same goes for temporal antialiasing. Only samples from the given time-frame matter. If you have a video camera that works at 60 FPS, then for any particular frame being recorded, the only samples involved are those taken within that frame's 16.6-millisecond window. If samples from outside that time range get involved in the image, the camera is considered damaged.


What is "wrong" IMO is temporal aliasing across multiple frames with either a box filter (linear average) or exponential decay.

Technically, the box filter is the right way to go; no one sample is any more important than another. The problem ultimately comes down to not rendering with high dynamic range (HDR). Without an HDR-based rendering system, temporal antialiasing doesn't look correct. You really need the enhanced precision of an HDR system to make it look accurate. Of course, ghosting always looks like crap, regardless of either the filter or the use of HDR.

Cyranose
10-29-2003, 02:43 PM
Korval, it's really not worth arguing over this, but some of your statements just don't fit with what I've learned about signal processing. That doesn't mean what I've learned is 100% correct, but your position seems exceedingly narrow.

"Antialiasing" is the removal of aliasing artifacts by a variety of means.

Multisampling, supersampling, and all sorts of methods for adding and combining data are part of that, but not the only part.

Methods that do not employ additional samples are not ideal, as they make assumptions about the spatial or temporal continuity of the color data. But they're not unreasonable.

In the case of convolving an image, the idea is that the samples you have are treated as discrete point samples of a presumed continuous signal of some assumed base frequency and phase. That signal is "inferred" from the samples you have and "improved" by a variety of means, with the knowledge that those original assumptions will shape the outcome as much as the color data does.

Often, these techniques lose high-frequency data to cover up aliasing artifacts. But that doesn't make them wrong, just less desirable if you have better AA hardware, for example.

Again, it may not be as good as adding more samples, but there are also lots of "wrong" ways to add more samples too.

On the final point, the box filter over multiple frames IS wrong. Box filters are appropriate when combining intra-frame samples of presumably equal weight. But if you're forced to blend multiple actual frames (not sub-frames) over time, it only makes sense to weight them towards the present.

Avi

Korval
10-29-2003, 03:24 PM
"Antialiasing" is the removal of aliasing artifacts by a variety of means.

Aliasing and antialiasing have always been something of a pet peeve of mine. I consider an antialiasing algorithm to be one that, when taken to the limit of infinite "something" (whatever number or numbers limit the effectiveness of the technique), will correctly remove (rather than cover up or hide) all the aliasing artifacts it is attempting to remove.

Supersampling fits this. The limit of a supersampling algorithm, as the number of samples goes to infinity, is a still frame that has had all aliasing properly removed. Obviously it doesn't deal with temporal aliasing, but it isn't supposed to.

No mere blur filter can do this. Signal processing tells us that, in order to guarantee a certain level of aliasing, you must take a certain number of samples (the Nyquist limit, I believe it is called). Blur filters don't add samples, so they cannot decrease the level of aliasing in the image.

You can employ a blur filter to make the aliasing look less bad. However, the noise that is added is not the accurate noise that an antialiasing method would add. This doesn't actually solve the aliasing problem; it simply covers it up. It is the equivalent of putting garbage in a closet; sure, nobody can see it, but it's still there and probably smells a bit ;)


On the final point, the box filter over multiple frames IS wrong. Box filters are appropriate when combining intra-frame samples of presumably equal weight.

My mistake; I thought by "frame", you meant sub-frames of a full frame time.

phlake
10-29-2003, 05:00 PM
Accumulating over 1/24th of a second is still not quite an accurate simulation of cinematic blur, because the shutter is not open the entire time. A better simulation would involve building motion blur over a smaller window of time based on actual shutter speeds. Even in these situations you'd want to combine enough frames that the blur becomes continuous rather than just a few discrete images. Depending on your standards of quality, it's unlikely that this could be done in real time for an appreciable number of polygons.

This is a problem with every real-time "motion blur" effect I've seen... it looks trippy, but it doesn't look "right".

Korval
10-29-2003, 05:49 PM
Depending on your standards of quality, it's unlikely that this could be done in real time for an appreciable number of polygons.

It can be done on today's hardware. You'd have to sacrifice virtually every other effect to do it, though, so it probably isn't worth it yet. And, as I pointed out earlier, it doesn't look very good even when done right unless you're willing to do HDR.

JustHanging
10-30-2003, 12:01 AM
And as a means of temporal antialiasing it might never be worth it in realtime graphics, since it's typically faster to render n times more frames than to do n-times supersampling.

Of course this doesn't include very fast-moving or vibrating objects (the worst case of temporal aliasing, as far as I know). We'd need a way to determine the number of samples per object, based on their speeds, to make it worthwhile.

-Ilkka

Nakoruru
10-30-2003, 07:59 AM
phlake, saying 1/24th of a second was a simplification. My post was already long enough ^_^

I think the 'trippy' effect of many motion blur demos is due to exaggeration. It is a lot like the fact that people went nuts with colored lights when they first became doable, and then created overly shiny, bumpy stuff when bump maps became possible.

It is as if a new effect is not worth it unless it is overtly obvious, when the reality is that each new step towards realism in graphics will be increasingly subtle. I.e., you know that the scene looks dramatically better, but you may not be able to put a finger on why.

The term 'trippy' comes from drug culture, specifically from LSD use, which is known to cause a blurring effect called 'trails'. Trails are likely a result of LSD slowing down the response time of the cells in the retina, or of the brain processing visual information in such a way that it holds onto old sensory information longer.

Anyway, the trippy demos are really only wrong in that they either do not render enough FPS to create a smooth trail and/or they exaggerate the effect too much.

Cinematic motion blur (simulation of exposed film), as well as perceptual motion blur (simulation of what we see) are both valid.

Xmas
10-30-2003, 06:11 PM
Originally posted by Nakoruru:
Cinematic motion blur (simulation of exposed film), as well as perceptual motion blur (simulation of what we see) are both valid.
But you only really need the latter one when you want to exaggerate it (as in a game protagonist taking drugs). Otherwise your eyes will already do the job.

Nakoruru
10-31-2003, 06:09 AM
No, your eyes do not do the job! Otherwise there would have been no need to ever simulate motion blur, and blur on film would be a huge problem, not a huge help (24 FPS is rather low; without the motion blur created by film, movies would look like slide shows). In real life, an object moves continuously. On a screen it moves discretely.

Have there been any studies of how high the FPS has to be before your eyes can do the job? I cannot see the difference between any FPS above 75, but maybe a higher FPS is still needed before natural motion blur kicks in.

I should write a program that displays a white dot on a black background at 640x480, so the monitor can be set to 120 Hz or more, and then shows the dot moving at various speeds at various FPS.

The monitor and the graphics FPS would probably have to be incredibly fast (maybe greater than 1000 FPS) before it would blur as much as a ball on a string being spun really fast. Until you get FPS and monitor refresh rates that fast, your brain will perceive it as a series of still, flashing pictures of balls, like spinning a ball on a string under a strobe light. Even at 200 Hz/200 FPS.

Xmas
10-31-2003, 08:09 AM
Originally posted by Nakoruru:
No, your eyes do not do the job! Otherwise there would have been no need to ever simulate motion blur, and blur on film would be a huge problem, not a huge help (24 FPS is rather low; without the motion blur created by film, movies would look like slide shows). In real life, an object moves continuously. On a screen it moves discretely.
Yes, that's why you need to simulate motion blur, but *not* motion trail/ghosting.

Won
10-31-2003, 09:13 AM
Korval --

There are essentially two ways of accomplishing anti-aliasing. Your intuition gives you one approach: the over-sampling method. In this case, you sample a signal at a high frequency and use a reconstruction filter to downsample it to the target frequency. The OpenGL example would be multisampling, where the "higher frequency" is the super-rez you get from the multisample buffer and the reconstruction filter is a combination of the filter kernel, screen and eye response. Same with supersampling.

The other way is to guarantee that the signal you generate is band-limited to the target frequency. The OpenGL example is mip-mapping.

Sampling multiple frames (the supersampling approach) is the one you seem to advocate, but you need to recognize that from a signal-processing perspective "blurring" is also valid, and doesn't necessarily entail a loss of information. The goal in blurring is to band-limit your temporal activity to eliminate the strobe effect when the frame rate undersamples the animation.

Granted, the simpler and more flexible approach is the oversampling approach, but that's also quite brute-force and likely beyond the capabilities of all but the most current hardware. ATI's Animusic demo, for example, does a good job of performing non-oversampled motion blur.

My guess is that a good method would use a compromise between the two: you have a small number of oversampled frames (2x/3x) and use geometry blurring in between those frames. You then get a piecewise-linear approximation of motion. Bonus points if each of your frames is jittered in space to give you spatial antialiasing at the same time.

-Won
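For what it's worth, a sketch of that last "bonus points" variant: an accumulation-buffer loop in which each temporal sub-frame also gets a sub-pixel projection jitter, so the same passes supersample in space and in time (drawScene, the pass count, and the jitter table are assumptions, in the spirit of the red book's jittered-projection trick):

    #include <GL/gl.h>

    void drawScene(float t);   /* assumed: renders the scene at time t */

    static const float jitter[3][2] = {   /* sub-pixel offsets in [0,1] */
        { 0.25f, 0.75f }, { 0.75f, 0.25f }, { 0.50f, 0.50f }
    };

    void renderJitteredBlur(float t0, float shutter, int w, int h)
    {
        const int passes = 3;
        GLfloat proj[16];
        glMatrixMode(GL_PROJECTION);
        glGetFloatv(GL_PROJECTION_MATRIX, proj);

        for (int i = 0; i < passes; ++i) {
            /* Pre-multiply a sub-pixel translation in NDC units
               (one pixel is 2/w NDC units wide, 2/h tall). */
            glLoadIdentity();
            glTranslatef((jitter[i][0] - 0.5f) * 2.0f / (float)w,
                         (jitter[i][1] - 0.5f) * 2.0f / (float)h, 0.0f);
            glMultMatrixf(proj);
            glMatrixMode(GL_MODELVIEW);

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene(t0 + shutter * (float)i / (float)(passes - 1));

            glAccum(i == 0 ? GL_LOAD : GL_ACCUM, 1.0f / (float)passes);
            glMatrixMode(GL_PROJECTION);
        }
        glLoadMatrixf(proj);   /* restore the original projection */
        glMatrixMode(GL_MODELVIEW);
        glAccum(GL_RETURN, 1.0f);
    }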

Won
10-31-2003, 09:31 AM
Actually, here's an interesting application of render-to-vertex-array or programmable tessellators.

Automatically generate blur geometry that samples along the path of motion, given each vertex's position, its various derivatives, and a time scale.

Hah...this would probably spell trouble for shadowing and whatnot. At least we'll be needing more vertices!

-Won