Abstract

Traditional video compression methods obtain a compact representation for image frames by computing coarse motion fields defined on patches of pixels called blocks, in order to compensate for the motion in the scene across frames. This piecewise constant approximation makes the motion field efficiently encodable, but it introduces block artifacts in the warped image frame. In this paper, we address the problem of estimating dense motion fields that, while accurately predicting one frame from a given reference frame by warping it with the field, are also compressible. We introduce a representation for motion fields based on wavelet bases, and approximate the compressibility of their coefficients with a piecewise smooth surrogate function that yields an objective function similar to classical optical flow formulations. We then show how to quantize and encode such coefficients with adaptive precision. We demonstrate the effectiveness of our approach by comparing its performance with a state-of-the-art wavelet video encoder. Experimental results on a number of standard flow and video datasets reveal that our method significantly outperforms both block-based and optical-flow-based motion compensation algorithms.