Abstract
We introduce a method to compute optical flow at multiple scales of motion, without resorting to multiresolution or combinatorial methods. It addresses the key problem of small objects moving fast, and resolves the artificial binding between how large an object is and how fast it can move before being diffused away by classical scale-space. Even with no learning, it achieves top performance on the most challenging optical flow benchmark. Moreover, the results are interpretable, and indeed we list the assumptions underlying our method explicitly. The key to our approach is the matching progression from slow to fast, as well as the choice of interpolation method, or equivalently the prior, to fill in regions where the data allows it. We use several off-the-shelf components, with relatively low sensitivity to parameter tuning. Computational cost is comparable to the state-of-the-art.