About Motion Blur

Here is a question about how to use several successive images to produce a motion blur effect. Hugo Elias wrote an article about this effect, which can be found at http://freespace.virgin.net/hugo.elias/graphics/x_motion.htm. In the article, Elias describes two methods of simulating motion blur, one correct and one incorrect. The correct one is, for instance, to average images 1, 2, 3, 4 to make the first "motion-blurred" image and images 5, 6, 7, 8 to make the second one. So the frame rate is greatly reduced, to a quarter of the original. The incorrect one, which I thought was correct, is, for instance, to average images 1, 2, 3, 4 to make the first "motion-blurred" image and images 2, 3, 4, 5 to make the second one. But in Elias's opinion, the incorrect one "makes moving objects smear across the screen". The thing is, I don't think these two methods are intrinsically different, except that the "incorrect" one keeps the frame rate much higher. So, could you please give me some information about the difference between the two methods?

If I understand the article correctly, the second method is a simple hack. You just keep the previously rendered images in the framebuffer and blend the new one on top of the others. This does not work very well for simulating motion blur, because the individual images can still be distinguished easily. It is more like chronophotography than motion blur. With method 1, the images are averaged before being displayed, so you cannot distinguish the original images.
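The difference between the two schemes is easy to see in a toy sketch. The code below is hypothetical (not from Elias's article): it averages frames of a 1-D scene with a dot moving one pixel per rendered frame, once with non-overlapping groups of four (method 1) and once with a sliding window of four (method 2):

```python
def render(t):
    """Render a 10-pixel frame with a bright dot at position t."""
    frame = [0.0] * 10
    frame[t % 10] = 1.0
    return frame

def average(frames):
    """Per-pixel average of a list of frames."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

rendered = [render(t) for t in range(8)]

# Method 1 (the article's "correct" one): non-overlapping groups of 4.
# 8 rendered frames -> 2 displayed frames, i.e. 1/4 the frame rate.
method1 = [average(rendered[i:i + 4]) for i in range(0, 8, 4)]

# Method 2 (the "incorrect" one): a sliding window of the last 4 frames.
# Every rendered frame yields a displayed frame, so the rate stays high,
# but each rendered image reappears in 4 consecutive displayed frames.
method2 = [average(rendered[i:i + 4]) for i in range(0, 5)]
```

The averaging itself is identical; the difference is that in method 2 each source image persists across four consecutive displayed frames, which is exactly the smear Elias complains about.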

  1. See www.GameTutorials.com

  2. Don’t crosspost (beginner/advanced), many people won’t appreciate it.

real motion blur is generated by exposing the film for some length of time to record each image.
this means every frame you have is drawn onto the film over an interval, and that interval is the motion-blurred time… like this:

x start frame 0
|
|
x end

x start frame 1
|
|
x end

x start frame 2
|
|
x end

etc… GPUs work like this:

x start frame 0

x start frame 1

x start frame 2

now to get real motion blur, you have to draw the steps in between (numerical integration, i guess, or call it supersampling)

the result looks like this:

x start frame 0
x next part of frame 0
x next part of frame 0
x end frame 0

x start frame 1
x next part of frame 1
x next part of frame 1
x end frame 1

etc… this takes 4 times as many frames to draw… that's why the faked motion blur is built up like this:

x start frame 0

x start frame 1 and use as next for frame 0

x start frame 2 and use as next for 0 and 1

x start frame 3, end frame 0, use as next for 1 and 2

as you can see, you would be recording onto different parts/frames of a movie strip at the same time… which is unrealistic…

but it generates a fat and HUGE motion blur… loved on the PS2, btw
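The blend-on-top version of this hack can be sketched in a few lines. This is a hypothetical illustration (the 0.5 blend weight is an assumption): each new frame is blended over the accumulated framebuffer, so old frames never drop out after a fixed window; they just decay exponentially, leaving the trail described above:

```python
def blend_hack(frames, alpha=0.5):
    """Blend each new frame onto the accumulated image with weight alpha."""
    out = []
    acc = frames[0][:]
    for f in frames[1:]:
        acc = [alpha * new + (1 - alpha) * old for new, old in zip(f, acc)]
        out.append(acc)
    return out

# A dot moving one pixel per frame leaves a trail whose weights halve
# each frame: the newest position gets 0.5, the one before 0.25, etc.
frames = [[1.0 if p == t else 0.0 for p in range(6)] for t in range(4)]
trail = blend_hack(frames)[-1]
```

With alpha = 0.5 the last displayed frame still contains every previous frame at ever-smaller weights, which is why the smear can stretch far behind a fast object.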

kehziah, davepermen: Thanks a lot! Maybe I still have to think about it some more.
richardve: thank you for your website, and sorry for my crossposting.

davepermen: your explanation of the technique is very convincing, firstly because you use the way motion blur arises on film, and secondly because (if I don't misunderstand) you use four successive frames only once to generate one motion-blurred frame, just as happens in one "motion-blurred time". I cannot find anything wrong with it. But how about thinking of motion blur directly in the human eye, thinking about what happens on the retina, skipping any discussion of film?
I think motion blur has something to do with the fact that an image on the human retina tends to persist for a while. Imagine that something is moving fast in front of you; the retina, like an endless film, records it at times t1, t2, t3, t4. At time t4, the images recorded at t1, t2, t3 still remain on the retina, and the four images are blended together. So, motion blur happens. At time t5, the t1 image disappears and the retina records the new image (t5). Then the t2, t3, t4, t5 images are blended together to generate a new motion-blurred image. Isn't that the same as the technique mentioned in the "second method"? I am confused. Any suggestions?

[This message has been edited by hapcafe (edited 03-18-2002).]

Maybe I've got it. I've mistaken what motion blur really is. I thought motion blur was something happening in human eyes. Now I finally realize it is a simulation of a certain effect on film. BLUSH, BLUSH, BLUSH…

The blurred appearance of quickly moving objects is due to "persistence of vision", which starts with the slow temporal dynamics of photoreceptors. Using discretely sampled displays (movies, computer animations) to simulate real moving objects poses a problem when the spatial location of the object moves more than a small amount between frames, because the photoreceptors between the two spatial locations are never stimulated.

The solution is to implement motion blur in the display, because it cannot happen in your biology. It can be understood in terms of sampling theory. You're sampling a continuous, possibly non-bandlimited signal at the frame rate of your monitor/movie/whatever. If the real world contains quickly varying luminances, but you only sample them at a slow rate, you'll get artifacts such as "ghosting" and possibly even reversal of perceived direction (e.g. the wagon-wheel effect). The ideal thing to do would be to low-pass filter the signal before you sample it. In computers, this would be computationally very expensive. Movie cameras can implement this solution relatively easily by leaving the shutter open for as close to 100% of the frame time as possible. Less computationally expensive is to super-sample (in time) any frame that you actually draw.
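The wagon-wheel effect falls straight out of this sampling view. A hypothetical sketch (the numbers are illustrative, not from the post): a wheel advancing 0.9 of a revolution between samples is indistinguishable from one moving 0.1 of a revolution backwards, because the samples only preserve the angle modulo one revolution:

```python
def apparent_step(rev_per_frame):
    """Smallest-magnitude per-frame rotation consistent with the samples."""
    step = rev_per_frame % 1.0          # the sampled angles only keep this
    return step - 1.0 if step > 0.5 else step

fast = apparent_step(0.9)   # aliases to about -0.1: apparent reversal
slow = apparent_step(0.2)   # well below the Nyquist limit: looks correct
```

Low-pass filtering (motion blurring) before sampling smears the spokes together instead of letting them alias into backwards motion.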

Unfortunately the link to Hugo Elias's article you posted is no longer working, but from your description, the latter seems to be the approach he was using. In this case you are essentially increasing your sample rate by the number of super-samples you've taken, which simply raises the object speed at which things begin to produce visible artifacts. It may be good enough. You would want the sample times used to calculate frame 1 to be non-overlapping with the times used to calculate frame 2.

astraw, thanks for your detailed reply.
I really appreciate it.

yes, the eye has its own motion-blur "lag", as you described… but it has it always. so if those were the blurs you wanted to fake, why should you? the eye does the job… what we need to fake is the motion blur that occurs when recording the image, the motion blur of the camera…

btw, the motion blur of the eye can be quite a big blur; if you look into the sun for some time, you get an infinite blur

The display has a finite frame rate; the graphics has another (perhaps variable, but hopefully consistent with the video) frame rate.

Video shows a series of discrete images which, depending on what is being rendered, at what speed, and at what frame/video rate, might produce visible artifacts.

If you are rendering at the video rate you are probably in good shape except for the fastest of objects or viewpoints; if you render at less than that, the dilemma is that motion blurring correctly will take even longer.

Motion blur using the conventional approach of accumulating the last n frames, fading gradually over time, is NOT motion blur. This is one of the worst, most persistent and nasty misconceptions in graphics animation.

To motion blur correctly, you need to integrate the position/shape/color of the scene between the last frame and this frame, not integrate the last n frames.
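The difference in integration windows can be made concrete with a hypothetical sketch: for a dot moving one unit per displayed frame, sample its position over the exposure window and compare the spatial extent of the resulting blur:

```python
def blur_window(frame, n_samples, window_frames):
    """Positions sampled while 'exposing' the given displayed frame."""
    t0 = frame - window_frames
    return [t0 + (frame - t0) * k / (n_samples - 1) for k in range(n_samples)]

# Correct motion blur: integrate only over the 1-frame interval just ended.
correct = blur_window(frame=4, n_samples=4, window_frames=1)

# n-frame accumulation: the blur stretches back over the last 4 frames.
accumulated = blur_window(frame=4, n_samples=4, window_frames=4)

extent_correct = max(correct) - min(correct)              # 1 frame of motion
extent_accumulated = max(accumulated) - min(accumulated)  # 4 frames of motion
```

The correct blur covers exactly the motion since the previous frame; the n-frame accumulation smears the object over n frames' worth of travel, which is the trail effect criticised above.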

It is probably relatively simple today to approximate correct motion blur very cheaply for fast-moving objects in the scene: render to texture and use that as an impostor for the multiple accumulations of the object's positions within that frame.

So whereas previously you might have said motion blur in real time was unreasonable because you might as well raise the frame rate, games now run at many multiples of the refresh rate with fast objects in the scene, AND with impostors there are now a few tricks where you don't have to trade off frame rate against motion blur.

Your eye does temporal convolution, but it cannot blur a series of discrete images into one, so it makes sense to motion blur in the framebuffer. It is completely idiotic to think you are motion blurring by using the 'free' multi-frame render and blurring across frames. Monitor manufacturers bust their butts researching fast phosphors and supertwist LCDs to avoid this artifact, and you should not defeat their efforts in software.

Summary: motion blur is for in-between frames and is a good thing IF it can be done without reducing the frame rate below the refresh rate of the monitor (or down another refresh if you are already there). Motion blur should probably be applied only to the fastest-moving objects. Full-scene motion blur should not be attempted unless you are already rendering at significantly greater than the video refresh rate (although render to texture with viewport overdraw might be worth a shot for high rates of pitch, roll, and yaw). And don't expect impressive results if you are at video refresh: we're not refreshing at 24 Hz like they do in the movies; we have more fps, and that hides most of the stuff you need motion blur for at lower frame rates.

I realize some of this has already been said, this is just my 2 cents.

[This message has been edited by dorbie (edited 04-05-2002).]

if you take a game which runs on today's superfast gpus and superfast pcs at, say, 300fps, then you can simply accumulate every 10 frames, and you get neat motion blur at 30fps…
now if you want to implement motion blur, you're not so stupid as to try to get the game running at 300fps… it should be at about 100fps, so that you can render 2 or 3 frames and accumulate them…
the power of motion blur: if the motion blur is reasonably accurate, then you don't need 30fps to make it look smooth… 20 are enough (a lot of movies look quite smooth even at 15fps sometimes!)

I agree with most of what you have said; what you are accumulating are in-between frames. I disagree with the acceptable frame rate limits. Movies are deficient. I think they had plans to do the playback sequences in Brainstorm (Christopher Walken) at 60Hz in the cinema, but they cut the funding when the studio head changed. If you are at 300Hz, I'd suggest that 5 frames motion blurred at 60Hz would be better than 10 at 30Hz. Carmack discussed some of his fps vs. motion blur experiments using a T-buffer in an interview he gave; it's worth a listen. I think the rule of thumb should be: go for the fps, and motion blur only if it's cheap (individual objects, impostors and other tricks) or you have a frame rate so high that motion blur doesn't cause you to miss any vertical refreshes.

[This message has been edited by dorbie (edited 04-05-2002).]

hm… i have a different opinion on what looks good and what doesn't…

i prefer lower resolutions over ultra-high resolutions; they look too clean to me. i prefer a software-rendered rtrt demo over all the fsaa4x 1280x1024 realtime quake3's, too… simply because they look too clean to me…
i dislike triangles, in fact; i want natural stuff. what i see in nature is no perfect line, no static clean geometry, but diffuse chaotic systems. what i see is no perfectly sharp brilliant movie at 60hz but overblended scenes at, in fact, 1/3fps but with fancy motionblur (we only realize the world at 1/3fps, so why do i need more if i get the correct input? )

i prefer much stuff others never wanted on pc… and i will win, one day i will win and you'll see… why? because my stuff looks different (oh, and otherwise you can disable motion blur, disable fsaa, disable image processing and simply render triangles again )

we only realize the world at 1/3fps, so why do i need more if i get the correct input?

1/3 of a frame per second? Are you saying our eyes only update once every 3 seconds?

We perceive the world much faster than that. I'm not totally sure of the speed at which our eyes update, but it's quite high.

I also disagree with you on the frame-rate thing. At least on CRT-based monitors, 60Hz is still too low… the flickering is very apparent. In the future I think TV, film and games should strive for 100Hz minimum. TVs already do 100Hz, even if the source image isn't at that… but 60 on a computer display is too low… I can't work on a monitor at less than 85.

You may find these links interesting (but you may not): http://amo.net/NT/02-21-01FPS.html http://amo.net/NT/05-24-01FPS.html

Yup just as I thought…

Many industry entertainment tests (such as visual simulation) show that the sweet spot for humans to “distance themselves from reality” is in between 85 and 120 fps.

first: i said we realize the world at 1/3fps, meaning our brain runs at this speed. but the eyes accumulate the images in between and even store them in order, so it's not that important

i never talked about the screen speed. yes, screens should have 120hz, because the flickering is annoying and makes you sleepy. but that is only because of how they draw: most of the time a pixel is black, and for a very short time it gets damn bright (one pixel at a time, even…)
if you take a flatscreen you don't see this flickering anymore… flatscreens aren't even straining at 43hz! that's simply because the old image stays until the new one arrives.

next: if i take old videotapes and put them in my video player, they run at 25fps here. they look perfectly smooth. why? because each of those 25 frames per second has sampled the complete range of its 1/25 second and accumulated it… film some fast-moving stuff with your camera and you'll see it sharp while the video is playing. then press stop and look at one frame individually: it's blurred over half the screen! but you'll see it sharp and perfectly smooth in motion!
with 15fps i'm talking about the extreme… it does not look perfectly smooth anymore, but, if filmed with a camera in real life, it is about enough to look like motion. 30fps are more than enough with motion blur. every engine drawing more frames should accumulate them automatically so that the screen draws 30fps (not less, no vsync should be missed!) but with the accumulated ones. the resulting quality of motion is better than simply drawing the frames directly (because then, if you have vsync on, you don't see the extra frames, which is always a loss, and if you don't have vsync, the different steps land on different parts of the image instead of being accumulated together…)

dave,

the reason old videotapes (at 25 or 30 fps) look okay is probably that TV screens have significant inter-frame blurring because of the exponential decay of the phosphors (more so than modern CRT displays)

I’m assuming you know that most video (including VHS) is 50/59.94 fields per second, interlaced to 25/29.97 frames per second.

Here's how I think of it: motion blur is used to add information (extra time resolution), just as supersampling anti-aliasing is used to simulate a higher spatial resolution. It's not 100% true (there are other effects too), but it clearly tells us that we need to generate more frames and blend them; we can't simply blend old frames. That would be as useful as low-pass filtering a frame instead of supersampling it.
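That analogy can be sketched numerically (a hypothetical 1-D example, not from the post): supersampling a thin feature and then averaging preserves information that plain sampling has already lost, and no after-the-fact blur of the plain samples can bring it back:

```python
def scene(x):
    """A thin bright line at 0.25 <= x < 0.30 on a dark background."""
    return 1.0 if 0.25 <= x < 0.30 else 0.0

PIXELS = 4

# Plain sampling: one sample at each pixel centre misses the line entirely.
plain = [scene((p + 0.5) / PIXELS) for p in range(PIXELS)]

# Supersampling: 8 samples per pixel, then average; the line shows up
# as partial coverage of the pixel it falls inside.
SS = 8
super_sampled = [
    sum(scene((p + (k + 0.5) / SS) / PIXELS) for k in range(SS)) / SS
    for p in range(PIXELS)
]
```

Blending already-sampled frames is the temporal equivalent of blurring `plain`: whatever the samples missed between frames stays missing.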