
A unified variance-reduced accelerated gradient method for convex optimization

2020-02-21

Abstract

We propose a novel randomized incremental gradient algorithm, namely, Variance-Reduced Accelerated Gradient (Varag), for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag exhibits unified optimal rates of convergence for solving smooth convex finite-sum problems, regardless of their strong convexity. Moreover, Varag is the first accelerated randomized incremental gradient method that benefits from the strong convexity of the data-fidelity term to achieve optimal linear convergence. It also establishes an optimal linear rate of convergence for a wide class of problems that only satisfy a certain error-bound condition rather than strong convexity. Varag can also be extended to solve stochastic finite-sum problems.
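To make the finite-sum setting concrete, the sketch below is not the paper's Varag algorithm; it is a minimal SVRG-style variance-reduced gradient loop on a least-squares finite sum, illustrating the kind of estimator that variance-reduced accelerated methods build on. All names, the step size `eta`, and the loop counts are illustrative assumptions, and the fixed step size stands in for Varag's adaptive step-size policy.

```python
import numpy as np

# Finite-sum objective: f(x) = (1/n) * sum_i (a_i^T x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = (a_i^T x - b_i)^2.
    return 2.0 * (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    # Full gradient of the average of all n components.
    return 2.0 * A.T @ (A @ x - b) / n

x = np.zeros(d)
eta = 0.01  # fixed step size (illustrative; Varag instead adapts to the condition number)
for epoch in range(30):
    x_snap = x.copy()            # snapshot point for the variance-reduced estimator
    g_snap = full_grad(x_snap)   # full gradient evaluated once per epoch at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced estimator: unbiased for full_grad(x),
        # with variance shrinking as x approaches x_snap.
        g = grad_i(x, i) - grad_i(x_snap, i) + g_snap
        x -= eta * g

print("objective:", np.mean((A @ x - b) ** 2))
```

Each inner step costs only one component gradient (plus one at the stored snapshot), which is what makes incremental methods of this type attractive when n is large.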
