
The Fast Convergence of Incremental PCA


Abstract

We consider a situation in which we see samples X_n ∈ R^d drawn i.i.d. from some distribution with mean zero and unknown covariance A. We wish to compute the top eigenvector of A in an incremental fashion, with an algorithm that maintains an estimate of the top eigenvector in O(d) space and incrementally adjusts the estimate with each new data point that arrives. Two classical such schemes are due to Krasulina (1969) and Oja (1983). We give finite-sample convergence rates for both.
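To make the incremental setting concrete, here is a minimal sketch of an Oja-style update for the top eigenvector, maintaining only a d-dimensional estimate and touching each sample once. The function name, the step-size schedule gamma_n = c / n, and the synthetic test data are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def oja_top_eigenvector(samples, c=1.0, seed=0):
    """Sketch of an Oja-style incremental PCA update for the top eigenvector.

    Keeps only a d-dimensional estimate v; each sample x triggers
    v <- normalize(v + gamma_n * x * (x . v)), with gamma_n = c / n assumed here.
    """
    rng = np.random.default_rng(seed)
    v = None
    for n, x in enumerate(samples, start=1):
        if v is None:
            # Random unit-norm initial estimate of the same dimension as the data.
            v = rng.normal(size=x.shape)
            v /= np.linalg.norm(v)
        gamma = c / n
        v = v + gamma * x * (x @ v)   # rank-one update x x^T v in O(d) time and space
        v /= np.linalg.norm(v)        # project back to the unit sphere
    return v

if __name__ == "__main__":
    # Synthetic zero-mean data whose covariance has a known top eigenvector e_1.
    rng = np.random.default_rng(1)
    d, n_samples = 20, 50_000
    top = np.zeros(d); top[0] = 1.0
    cov = np.diag([4.0] + [1.0] * (d - 1))
    data = rng.multivariate_normal(np.zeros(d), cov, size=n_samples)
    v = oja_top_eigenvector(data, c=2.0)
    print("alignment with true top eigenvector:", abs(v @ top))
```

Krasulina's scheme differs only in the form of the correction term (it subtracts a Rayleigh-quotient multiple of the current estimate instead of renormalizing); the paper analyzes finite-sample convergence for both.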
