
Exploiting Numerical Sparsity for Efficient Learning : Faster Eigenvector Computation and Regression

2020-02-13

Abstract 

In this paper, we obtain improved running times for regression and top eigenvector computation for numerically sparse matrices. Given a data matrix $A \in \mathbb{R}^{n \times d}$ where every row $a \in \mathbb{R}^d$ has $\|a\|_2^2 \le L$ and numerical sparsity at most $s$, i.e. $\|a\|_1^2 / \|a\|_2^2 \le s$, we provide faster algorithms for these problems in many parameter settings. For top eigenvector computation, we obtain a running time of $\tilde{O}\!\left(nd + r(s + \sqrt{rs})/\mathrm{gap}^2\right)$, where $\mathrm{gap} > 0$ is the relative gap between the top two eigenvalues of $A^\top A$ and $r$ is the stable rank of $A$. This running time improves upon the previous best unaccelerated running time of $\tilde{O}(nd + rd/\mathrm{gap}^2)$. For regression, we obtain a running time of $\tilde{O}\!\left(nd + (nL/\mu)\left(s + \sqrt{s \cdot nL/\mu}\right)\right)$, where $\mu > 0$ is the smallest eigenvalue of $A^\top A$. This running time improves upon the previous best unaccelerated running time of $\tilde{O}(nd + ndL/\mu)$. This result expands the regimes where regression can be solved in nearly linear time beyond the case $L = \tilde{O}(\mu)$. Furthermore, we obtain similar improvements even when row norms and numerical sparsities are non-uniform, and we show how to achieve even faster running times by accelerating using approximate proximal point [9] / catalyst [15]. Our running times depend only on the size of the input and natural numerical measures of the matrix, i.e. eigenvalues and $\ell_p$ norms, making progress on a key open problem regarding optimal running times for efficient large-scale learning.
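The running times above are stated in terms of two numerical measures of the matrix: the numerical sparsity $s$ of a row $a$, defined as $\|a\|_1^2 / \|a\|_2^2$, and the stable rank $r$ of $A$, defined as $\|A\|_F^2 / \|A\|_2^2$. A minimal sketch of computing these quantities with NumPy (the random matrix and variable names are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical example data; the paper treats general A in R^{n x d}.
rng = np.random.default_rng(0)
n, d = 100, 20
A = rng.standard_normal((n, d))

# Numerical sparsity per row: ||a||_1^2 / ||a||_2^2.
# It lies in [1, d]; for an exactly k-sparse row it is at most k.
row_num_sparsity = (
    np.linalg.norm(A, ord=1, axis=1) ** 2 / np.linalg.norm(A, ord=2, axis=1) ** 2
)
s = row_num_sparsity.max()

# Stable rank: ||A||_F^2 / ||A||_2^2, where ||A||_2 is the spectral norm.
# It lies in [1, rank(A)] and is a smooth proxy for the rank.
r = np.linalg.norm(A, ord="fro") ** 2 / np.linalg.norm(A, ord=2) ** 2
```

Both quantities are robust analogues of combinatorial measures: numerical sparsity never exceeds the number of nonzeros in a row, and stable rank never exceeds the rank, which is why bounds depending on $s$ and $r$ can beat bounds depending on $d$.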

