Stochastic PCA with ℓ2 and ℓ1 Regularization

2020-03-16

Abstract

We revisit convex relaxation based methods for stochastic optimization of principal component analysis (PCA). While methods that directly solve the nonconvex problem have been shown to be optimal in terms of statistical and computational efficiency, methods based on convex relaxation have been shown to enjoy comparable, or even superior, empirical performance. This motivates the need for a deeper formal understanding of the latter. Therefore, in this paper, we study variants of stochastic gradient descent for a convex relaxation of PCA with (a) ℓ2, (b) ℓ1, and (c) elastic net (ℓ1 + ℓ2) regularization, in the hope that these variants yield (a) better iteration complexity, (b) better control on the rank of the intermediate iterates, and (c) both, respectively. We show, theoretically and empirically, that compared to previous work on convex relaxation based methods, the proposed variants yield faster convergence and improve the overall runtime needed to achieve a given user-specified ε-suboptimality on the PCA objective. Furthermore, the proposed methods are shown to converge both in terms of the PCA objective and the distance between subspaces. However, there still remains a gap in computational requirements for the proposed methods when compared with existing nonconvex approaches.
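To make the setup concrete, below is a minimal sketch of projected stochastic gradient ascent on the standard Fantope-constrained convex relaxation of PCA with elastic-net regularization, i.e. maximizing ⟨E[xxᵀ], M⟩ − λ1‖M‖1 − (λ2/2)‖M‖F² over {M : 0 ⪯ M ⪯ I, tr(M) = k}. This is an illustrative sketch, not the paper's exact algorithm: the function names (`fantope_projection`, `stochastic_pca_elastic_net`), the decaying step-size schedule, and the default penalty values `lam1`, `lam2` are all assumptions made here for the example.

```python
import numpy as np

def fantope_projection(M, k):
    """Euclidean projection onto the Fantope
    {M : 0 <= eigenvalues(M) <= 1, trace(M) = k}, with k <= d.
    Implemented by shifting and clipping eigenvalues; the shift
    theta with trace(clip(w - theta, 0, 1)) = k is found by bisection."""
    w, V = np.linalg.eigh((M + M.T) / 2)  # symmetrize for safety

    def trace_after_shift(theta):
        return np.clip(w - theta, 0.0, 1.0).sum()

    lo, hi = w.min() - 1.0, w.max()  # brackets the root: trace d at lo, 0 at hi
    for _ in range(60):
        mid = (lo + hi) / 2
        if trace_after_shift(mid) > k:
            lo = mid  # trace too large: increase the shift
        else:
            hi = mid
    w_proj = np.clip(w - (lo + hi) / 2, 0.0, 1.0)
    return (V * w_proj) @ V.T  # V diag(w_proj) V^T

def stochastic_pca_elastic_net(samples, k, eta=0.1, lam1=1e-3, lam2=1e-3):
    """Projected stochastic (sub)gradient ascent on
    <x x^T, M> - lam1*||M||_1 - (lam2/2)*||M||_F^2 over the Fantope.
    Illustrative sketch only; hyperparameters are assumptions."""
    d = samples.shape[1]
    M = np.zeros((d, d))
    for t, x in enumerate(samples, start=1):
        step = eta / np.sqrt(t)  # assumed 1/sqrt(t) decay
        grad = np.outer(x, x) - lam1 * np.sign(M) - lam2 * M
        M = fantope_projection(M + step * grad, k)
    return M
```

The top-k eigenvectors of the returned M give an estimate of the principal subspace; in the paper's framing, the ℓ2 term is meant to speed up convergence while the ℓ1 term encourages low-rank intermediate iterates, with the elastic net combining the two.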

Previous: IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures

Next: Leveraging Well-Conditioned Bases: Streaming and Distributed Summaries in Minkowski p-Norms
