Abstract
Videos contain complex spatially-varying motion blur due to the combination of object motion, camera motion, and depth variation with finite shutter speeds. Existing methods to estimate optical flow, deblur the images, and segment the scene fail in such cases. In particular, boundaries between differently moving objects cause problems, because here the blurred images are a combination of the blurred appearances of multiple surfaces. We address this with a novel layered model of scenes in motion. From a motion-blurred video sequence, we jointly estimate the layer segmentation and each layer's appearance and motion. Since the blur is a function of the layer motion and segmentation, it is completely determined by our generative model. Given a video, we formulate the optimization problem as minimizing the pixel error between the blurred frames and images synthesized from the model, and solve it using gradient descent. We demonstrate our approach on synthetic and real sequences.
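The core idea of the abstract (blur is fully determined by the generative model, so motion can be recovered by minimizing pixel error between observed and synthesized frames via gradient descent) can be illustrated with a deliberately simplified toy sketch. This is an assumption-laden 1D illustration, not the paper's implementation: a single "layer" (a sharp 1D signal) moves with constant velocity `v`, blur is modeled as the average of sub-shutter shifts, and `v` is fit by gradient descent using a numerical gradient. All names (`synthesize_blur`, `loss`, the sample count, the step size) are hypothetical.

```python
import numpy as np

def synthesize_blur(sharp, v, n_samples=8):
    """Toy blur model: average linearly-interpolated shifts t*v of the
    sharp signal over the shutter interval t in [0, 1]."""
    x = np.arange(sharp.size)
    acc = np.zeros(sharp.size, dtype=float)
    for t in np.linspace(0.0, 1.0, n_samples):
        acc += np.interp(x - t * v, x, sharp)
    return acc / n_samples

def loss(v, observed, sharp):
    """Pixel error between the observed blurred frame and the frame
    synthesized from the model at velocity v."""
    r = synthesize_blur(sharp, v) - observed
    return float(np.mean(r * r))

# Ground truth: a sharp box signal blurred with velocity 3.0.
sharp = np.zeros(64)
sharp[20:30] = 1.0
observed = synthesize_blur(sharp, 3.0)

# Gradient descent on v, using a central-difference numerical gradient
# (the paper's full model would differentiate jointly through layer
# appearance, segmentation, and motion).
v, lr, eps = 0.0, 20.0, 1e-3
for _ in range(300):
    g = (loss(v + eps, observed, sharp) - loss(v - eps, observed, sharp)) / (2 * eps)
    v -= lr * g
```

Here `v` converges toward the true velocity of 3.0 because the synthesized blur matches the observation only at the correct motion; the real method extends this idea to 2D layers with joint estimation of segmentation and per-layer appearance.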