Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU

Abstract

The online problem of computing the top eigenvector is fundamental to machine learning. The famous matrix-multiplicative-weight-update (MMWU) framework solves this online problem and gives optimal regret. However, since MMWU runs very slowly due to the computation of matrix exponentials, researchers proposed the follow-the-perturbed-leader (FTPL) framework, which is faster but a factor √d worse than the optimal regret for dimension-d matrices. We propose a follow-the-compressed-leader (FTCL) framework which not only matches the optimal regret of MMWU (up to polylog factors) but also runs no slower than FTPL. Our main idea is to "compress" the MMWU strategy to dimension 3 in the adversarial setting, or dimension 1 in the stochastic setting. This resolves an open question regarding how to obtain both (nearly) optimal and efficient algorithms for the online eigenvector problem.
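
For illustration, here is a minimal sketch of the standard MMWU strategy the abstract refers to, not the paper's FTCL algorithm; the learning rate `eta` and the gain convention are illustrative assumptions. It makes the bottleneck concrete: every round requires a d×d matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def mmwu_online_eigenvector(gain_matrices, eta=0.1):
    """Standard MMWU for the online eigenvector problem (sketch).

    Each round t, play the density matrix
        W_t = exp(eta * sum_{s<t} A_s) / Tr(exp(eta * sum_{s<t} A_s))
    and collect the linear gain <W_t, A_t>.
    """
    d = gain_matrices[0].shape[0]
    cumulative = np.zeros((d, d))
    total_gain = 0.0
    for A_t in gain_matrices:
        E = expm(eta * cumulative)          # matrix exponential: the O(d^3) bottleneck
        W_t = E / np.trace(E)               # normalize to a density matrix (trace 1)
        total_gain += np.trace(W_t @ A_t)   # linear gain of this round's strategy
        cumulative += A_t                   # accumulate the "leader" statistics
    return total_gain
```

Running this for T rounds of d×d gain matrices takes O(T·d³) time; this per-round matrix exponential is the slowdown that the FTCL framework avoids while keeping MMWU's regret.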
