Abstract
The problem of finding a low-rank approximation of a given measurement matrix is of key interest in computer vision. If all the elements of the measurement matrix are available, the problem can be solved using factorization. However, in the case of missing data no satisfactory solution exists. Recent approaches replace the rank term with the weaker (but convex) nuclear norm. In this paper we show that this heuristic works poorly on problems where the locations of the missing entries are highly correlated and structured, which is a common situation in many applications. Our main contribution is the derivation of a much stronger convex relaxation that takes into account not only the rank function but also the data. We propose an algorithm which uses this relaxation to solve the rank approximation problem on matrices where the given measurements can be organized into overlapping blocks without missing data. The algorithm is computationally efficient and we have applied it to several classical problems, including structure from motion and linear shape basis estimation. We demonstrate on both real and synthetic data that it outperforms state-of-the-art alternatives.
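As background for the nuclear-norm heuristic the abstract refers to (this is the baseline the paper argues against, not the paper's own method), a minimal sketch of nuclear-norm-regularized matrix completion via singular-value thresholding (soft-impute style) might look as follows; the function name, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

def svt_complete(M, mask, tau=0.1, n_iters=200):
    """Hedged sketch: soft-impute iteration for nuclear-norm completion.

    M    : measurement matrix (missing entries may hold any value)
    mask : boolean array, True where an entry of M is observed
    tau  : soft-threshold applied to the singular values
    """
    X = np.zeros_like(M)
    for _ in range(n_iters):
        # Keep observed entries from M, fill the rest with the estimate
        Y = np.where(mask, M, X)
        # Proximal step for the nuclear norm: shrink singular values
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
    return X

# Toy example: a rank-1 matrix with ~30% of its entries missing
rng = np.random.default_rng(0)
A = np.outer(rng.standard_normal(8), rng.standard_normal(6))
mask = rng.random(A.shape) > 0.3
X = svt_complete(A, mask)
```

With randomly scattered missing entries such an iteration typically recovers the low-rank matrix well; the abstract's point is that when the missing entries are instead highly correlated and structured, this convex surrogate for the rank performs poorly.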