Exponentially convergent stochastic k-PCA without variance reduction

2020-02-21

Abstract

We present Matrix Krasulina, an algorithm for online k-PCA obtained by generalizing the classic Krasulina's method [1] from the vector to the matrix case. We show, both theoretically and empirically, that the algorithm naturally adapts to data low-rankness and converges exponentially fast to the ground-truth principal subspace. Notably, our result suggests that despite various recent efforts to accelerate the convergence of stochastic-gradient-based methods by adding an O(n)-time variance-reduction step, for the k-PCA problem a truly online SGD variant suffices to achieve exponential convergence on intrinsically low-rank data.
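To make the idea concrete, below is a minimal sketch of a matrix-Krasulina-style online k-PCA update in the spirit the abstract describes: each sample contributes a stochastic gradient step whose correction term keeps only the component of the sample lying outside the current subspace estimate. The step-size schedule, the orthonormalization strategy, and the exact update in the paper may differ; the function name, the constant step size, and the toy data below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; not the paper's reference implementation.
import numpy as np

def matrix_krasulina_step(W, x, eta):
    """One stochastic update of the d x k subspace estimate W from a single sample x.

    Update (assumed form): W <- W + eta * (I - W W^T) x x^T W, followed by a QR
    re-orthonormalization of the columns. With orthonormal columns, (I - W W^T) x
    is the part of x not explained by the current subspace.
    """
    s = W.T @ x                      # k-dim coordinates of x in the current subspace
    residual = x - W @ s             # (I - W W^T) x: component outside the subspace
    W = W + eta * np.outer(residual, s)
    Q, _ = np.linalg.qr(W)           # keep the columns orthonormal
    return Q

# Toy usage on synthetic low-rank-plus-noise data (hypothetical parameters).
rng = np.random.default_rng(0)
d, k, n = 50, 3, 20000
basis, _ = np.linalg.qr(rng.standard_normal((d, k)))            # ground-truth subspace
X = rng.standard_normal((n, k)) @ basis.T + 0.01 * rng.standard_normal((n, d))

W = np.linalg.qr(rng.standard_normal((d, k)))[0]
for x in X:
    W = matrix_krasulina_step(W, x, eta=0.5)                    # constant step, for illustration

# Subspace error: Frobenius norm of the part of W outside the true subspace.
err = np.linalg.norm((np.eye(d) - basis @ basis.T) @ W, ord="fro")
print(f"residual subspace error: {err:.2e}")
```

On nearly low-rank data such as this toy example, the residual subspace error shrinks rapidly with the number of samples, which is the regime in which the paper argues a plain online update (no variance-reduction pass over the data) already converges exponentially fast.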

