
View Full Version : Is this motion blur paper correct?



Peter Maarssen
09-24-2008, 12:55 PM
However the quality of the final output of each frame depends on the number of renderings that are combined to make it. Since the whole scene must be rendered multiple times, this method does not scale well. If the number of polygons in the scene is large, rendering the scene multiple times will lower the frame rate. If the number of times a scene is rendered per frame is too low for the amount of motion happening in the scene, then instead of a smooth motion blur, the rendering produces ghosting or double vision.

(from http://www.dtc.umn.edu/~ashesh/our_papers/motion-blur.pdf)

Sure, the effect they create with their motion-blur algorithm looks great, but is the quote above correct?
I can think of a scheme where you render into a buffer that holds the previous renders, combined with some averaging algorithm. That would give you motion blur reaching as many frames back as the precision of the buffer allows.

Isn't this how basic motion blur gets done?
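A minimal sketch of that buffer-averaging idea (plain Python on a 1-D "image", purely illustrative; the blend weight `alpha` is an assumption, not something from the paper). It also shows why the paper calls the result ghosting: with only one render per frame, the buffer keeps faded copies at the old positions instead of a smooth smear.

```python
def blend_into_accum(accum, frame, alpha=0.5):
    """Exponential moving average: accum = alpha*frame + (1-alpha)*accum."""
    return [alpha * f + (1.0 - alpha) * a for f, a in zip(frame, accum)]

def render_moving_dot(width, pos):
    """'Render' a single bright pixel at integer position pos."""
    return [1.0 if x == pos else 0.0 for x in range(width)]

width = 8
accum = [0.0] * width
# Dot moves 2 pixels per frame; one sample per frame leaves distinct
# faded ghosts at 0, 2, 4 with dark gaps in between -- no smooth blur.
for t in range(3):
    accum = blend_into_accum(accum, render_moving_dot(width, 2 * t))

print([round(v, 3) for v in accum])
```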

Ilian Dinev
09-24-2008, 02:15 PM
The above paragraph talks about the traditional ghosting-type motion blur; you can see the ghosting clearly in the images there. That type of motion blur holds image data from previous frames, with as much precision as necessary - much like your idea to "solve" this. So no, that's not the solution. The eye wants to see a _smudged_ line from each previous pixel position to that pixel's current-frame location. If the two positions are 100 pixels apart, you need to blend on the order of 100 intermediate frames between the previous and current "keyframe" - otherwise you just get several distinct ghosts overlaid.
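A toy sketch of that point (plain Python, illustrative only): if a dot moves one pixel per sub-frame sample and you average one sample per pixel of travel, every pixel along the path receives equal energy, giving a continuous streak instead of separated ghosts.

```python
def smear(width, start, end):
    """Average (end - start + 1) sub-frame renders of a dot that moves
    one pixel per sample from start to end, inclusive."""
    n = end - start + 1
    accum = [0.0] * width
    for pos in range(start, end + 1):
        for x in range(width):
            accum[x] += (1.0 if x == pos else 0.0) / n
    return accum

# A dot travelling 5 pixels in one frame, sampled at every pixel of
# travel: a smooth streak covering positions 0..5, with no gaps.
print(smear(8, 0, 5))
```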

ZbuffeR
09-24-2008, 03:17 PM
The quoted paragraph is a comparison with traditional motion-blur techniques.
The paper is about image-based motion blur: it scales better and is much less prone to the "see multiple ghost objects" problem, even if it is less physically correct.
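A rough 1-D sketch of the image-based idea (not the paper's exact algorithm; the function name, the per-pixel velocity input, and the sample count are all illustrative assumptions): render the scene once, keep a per-pixel screen-space velocity, then blur each pixel by averaging a few taps back along its velocity. The scene geometry is never re-rendered, which is why it scales better.

```python
def image_based_blur(color, velocity, samples=4):
    """For each pixel, average `samples` taps stepping back along its
    screen-space velocity (1-D grayscale for simplicity)."""
    width = len(color)
    out = []
    for x in range(width):
        total = 0.0
        for s in range(samples):
            # each tap steps back a fraction of the velocity vector
            tap = x - round(velocity[x] * s / samples)
            if 0 <= tap < width:
                total += color[tap]
        out.append(total / samples)
    return out

color = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]  # one bright pixel
velocity = [4.0] * 8                               # image moving 4 px/frame
print([round(v, 2) for v in image_based_blur(color, velocity)])
```

The trade-off ZbuffeR mentions shows up here: the blur only sees the final image, so occluded geometry and depth ordering are approximated, not resolved correctly.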

Peter Maarssen
09-25-2008, 03:37 AM
Yeah, you guys are right.



The traditional method of rendering a motion blur with a computer is, for a single frame, to render the scene at many discrete time instances, average these renderings and output to the display device.


It's just that it kind of reads as if, for every single frame, you always need to re-render the whole scene multiple times. They don't mention that previous renderings can be reused.
Or maybe they switch from single-image rendering to fast-moving game scenes real fast :)