
Gradient Descent Meets Shift-and-Invert Preconditioning for Eigenvector Computation

2020-02-13

Abstract 

Shift-and-invert preconditioning, a classic acceleration technique for leading eigenvector computation, has recently received renewed attention owing to fast least-squares solvers that efficiently approximate the matrix inversion in power iterations. In this work, we adopt an inexact Riemannian gradient descent perspective to investigate the effect of the step-size scheme on this technique. The shift-and-inverted power method is included as a special case with adaptive step-sizes. In particular, two other step-size settings, i.e., constant step-sizes and Barzilai-Borwein (BB) step-sizes, are examined theoretically and/or empirically. We present a novel convergence analysis for the constant step-size setting that achieves a rate of $\tilde{O}\big(\sqrt{\lambda_1/(\lambda_1-\lambda_{p+1})}\big)$, where $\lambda_i$ denotes the $i$-th largest eigenvalue of the given real symmetric matrix and $p$ is the multiplicity of $\lambda_1$. Our experimental studies show that the proposed algorithm can be significantly faster than the shift-and-inverted power method in practice.
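To make the iteration concrete, below is a minimal NumPy sketch of Riemannian gradient ascent on the unit sphere with the shift-and-invert preconditioner $M = (\sigma I - A)^{-1}$ for a shift $\sigma > \lambda_1$. Everything here is illustrative rather than the paper's implementation: the function name `shift_invert_rgd`, the dense exact inner solve via `np.linalg.solve` (the paper's setting uses an approximate least-squares solver for this step, which is what makes the method "inexact"), and the parameter choices are assumptions.

```python
import numpy as np

def shift_invert_rgd(A, sigma, eta, iters=500, tol=1e-10, seed=None):
    """Riemannian gradient ascent on the unit sphere for the leading
    eigenvector of a symmetric matrix A, preconditioned by
    M = (sigma*I - A)^{-1}.

    A minimal dense sketch: the system (sigma*I - A) y = x is solved
    exactly here; in the inexact setting it would be solved only
    approximately, e.g., by a few conjugate-gradient steps.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    S = sigma * np.eye(n) - A           # shifted matrix; SPD when sigma > lambda_1
    for _ in range(iters):
        y = np.linalg.solve(S, x)       # y = M x (the preconditioned direction)
        rayleigh = x @ y                # Rayleigh quotient x^T M x
        grad = y - rayleigh * x         # Riemannian gradient of x^T M x on the sphere
        if np.linalg.norm(grad) < tol:
            break
        x = x + eta * grad              # constant step-size eta
        x /= np.linalg.norm(x)          # retract back to the unit sphere
    return x
```

Note how the step-size scheme selects the algorithm: choosing the adaptive step-size $\eta_t = 1/(x^\top M x)$ collapses the update to $x \leftarrow Mx/\|Mx\|$, i.e., the shift-and-inverted power method mentioned above, while a fixed $\eta$ gives the constant step-size setting analyzed in the paper.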

