Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery

2020-03-04

Abstract

Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms (SNN) of the unfolding matrices of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way n×n×···×n tensor of Tucker rank (r, r, …, r) from Gaussian measurements requires Ω(rn^(K−1)) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r^K + nrK) observations. We introduce a simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(r^⌊K/2⌋ n^⌈K/2⌉) observations. The lower bound for the SNN model follows from our new result on recovering signals with multiple structures (e.g. sparse, low rank), which indicates the significant suboptimality of the common approach of minimizing the sum of individual sparsity-inducing norms (e.g. ℓ1 norm, nuclear norm). Our new tractable formulation for low-rank tensor recovery shows how the sample complexity can be reduced by designing convex regularizers that exploit several structures jointly.
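The intuition behind the improved relaxation can be illustrated numerically. The sketch below (not the authors' code; all variable names are illustrative) builds a random 4-way tensor of Tucker rank (r, r, r, r) and compares the rank of a mode-1 unfolding, which the SNN approach penalizes, with the rank of a "square" reshaping that groups the first K/2 modes as rows: the unfolding has rank at most r, while the square matrix has rank at most r^(K/2), making its nuclear norm a tighter joint surrogate for the tensor's multilinear rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, K = 6, 2, 4  # K-way n x n x n x n tensor, Tucker rank (r, r, r, r)

# Build a random Tucker-rank-(r,...,r) tensor: core G contracted with
# full-column-rank factors U_k along each mode.
G = rng.standard_normal((r,) * K)
Us = [rng.standard_normal((n, r)) for _ in range(K)]
X = G
for U in Us:
    # Contract the current leading mode with U; the new size-n mode is
    # appended at the end, so after K steps the mode order is restored.
    X = np.tensordot(X, U, axes=(0, 1))

# Mode-1 unfolding used by the SNN relaxation: an n x n^(K-1) matrix
# whose rank is at most r (generically exactly r).
unfold1 = X.reshape(n, -1)
print(np.linalg.matrix_rank(unfold1))  # expect r = 2

# "Square" reshaping: group the first K/2 modes as rows, the rest as
# columns, giving an n^2 x n^2 matrix of rank at most r^(K/2).
square = X.reshape(n ** 2, n ** 2)
print(np.linalg.matrix_rank(square))  # expect at most r^2 = 4
```

Minimizing the nuclear norm of `square` subject to the measurement constraints is the tractable formulation the abstract refers to; the point of the sketch is only that the square matricization is still low-rank, so matrix recovery guarantees apply to it directly.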
