Faster Eigenvector Computation via Shift-and-Invert Preconditioning


Abstract

We give faster algorithms and improved sample complexities for the fundamental problem of estimating the top eigenvector. Given an explicit matrix $A \in \mathbb{R}^{n \times d}$, we show how to compute an $\epsilon$-approximate top eigenvector of $A^\top A$ in time $\tilde{O}\big(\big[\mathrm{nnz}(A) + \tfrac{d \cdot \mathrm{sr}(A)}{\mathrm{gap}^2}\big] \cdot \log \tfrac{1}{\epsilon}\big)$. Here $\mathrm{nnz}(A)$ is the number of nonzeros in $A$, $\mathrm{sr}(A)$ is the stable rank, and $\mathrm{gap}$ is the relative eigengap. We also consider an online setting in which, given a stream of i.i.d. samples from a distribution $D$ with covariance matrix $\Sigma$ and a vector $x_0$ which is an $O(\mathrm{gap})$-approximate top eigenvector for $\Sigma$, we show how to refine $x_0$ to an $\epsilon$-approximation using $O\big(\tfrac{v(D)}{\mathrm{gap} \cdot \epsilon}\big)$ samples from $D$. Here $v(D)$ is a natural notion of variance. Combining our algorithm with previous work to initialize $x_0$, we obtain improved sample complexities and runtimes under a variety of assumptions on $D$. We achieve our results via a robust analysis of the classic shift-and-invert preconditioning method. This technique lets us reduce eigenvector computation to approximately solving a series of linear systems with fast stochastic gradient methods.

Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s).
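The last two sentences of the abstract describe the core reduction, which is easy to see in code. Below is a minimal NumPy sketch of shift-and-invert power iteration, not the paper's algorithm: it assumes a suitable shift slightly above $\lambda_1(A^\top A)$ is already known, and it solves each linear system exactly with a direct solver, whereas the paper uses approximate solves by fast stochastic gradient methods (e.g. SVRG) and analyzes how accurate each solve must be.

```python
import numpy as np

def shift_invert_top_eigvec(A, shift, num_iters=50, tol=1e-10, seed=0):
    """Estimate the top eigenvector of A^T A by power iteration on
    (shift * I - A^T A)^{-1}.  The shift must exceed lambda_1(A^T A);
    the closer it is, the better conditioned the inverted iteration."""
    _, d = A.shape
    M = shift * np.eye(d) - A.T @ A  # positive definite when shift > lambda_1
    x = np.random.default_rng(seed).standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(num_iters):
        # One shift-and-invert step = one linear system solve in M.
        # The paper replaces this exact solve with a cheap approximate
        # solve by a fast stochastic gradient method.
        y = np.linalg.solve(M, x)
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

# Usage: the Rayleigh quotient of the output should approach lambda_1.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 30))
lam1 = np.linalg.eigvalsh(A.T @ A)[-1]  # eigvalsh sorts ascending
x = shift_invert_top_eigvec(A, shift=1.01 * lam1)
print(x @ (A.T @ (A @ x)) / lam1)  # close to 1.0
```

The point of the shift is that $(\mathrm{shift} \cdot I - A^\top A)^{-1}$ has a much larger relative eigengap than $A^\top A$ itself, so far fewer (but more expensive) power iterations are needed; the paper's contribution is a robust analysis showing the linear systems only need to be solved to modest accuracy, which makes stochastic solvers applicable.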

