
Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization


Abstract 

We develop a new accelerated stochastic gradient method for efficiently solving the convex regularized empirical risk minimization problem in mini-batch settings. The use of mini-batches has become a gold standard in the machine learning community, because mini-batch techniques stabilize the gradient estimate and can easily make good use of parallel computing. The core of our proposed method is the incorporation of our new "double acceleration" technique and a variance reduction technique. We theoretically analyze our proposed method and show that it significantly improves the mini-batch efficiency of previous accelerated stochastic methods, and essentially only needs mini-batches of size $\sqrt{n}$ to achieve the optimal iteration complexities for both non-strongly and strongly convex objectives, where n is the training set size. Further, we show that even in non-mini-batch settings, our method achieves the best known convergence rates for non-strongly convex and strongly convex objectives.
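The two key ingredients named in the abstract are a variance-reduced stochastic gradient estimator and mini-batches of size roughly $\sqrt{n}$. Below is a minimal Python sketch of the variance-reduction component alone, in SVRG style, applied to L2-regularized least squares; it is not the paper's doubly accelerated method (the "double acceleration" step is omitted), and the problem setup, step size, and all variable names are illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: L2-regularized least-squares ERM,
# (1/2n)||Aw - y||^2 + (lam/2)||w||^2, with synthetic data.
rng = np.random.default_rng(0)
n, d, lam = 1024, 20, 1e-2
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def full_grad(w):
    # Full gradient of the regularized objective over all n examples.
    return A.T @ (A @ w - y) / n + lam * w

def batch_grad(w, idx):
    # Gradient of the regularized objective restricted to mini-batch `idx`.
    Ai = A[idx]
    return Ai.T @ (Ai @ w - y[idx]) / len(idx) + lam * w

b = int(np.ceil(np.sqrt(n)))   # sqrt(n)-size mini-batches, per the abstract
w = np.zeros(d)
eta = 0.1                      # illustrative step size, not tuned
for epoch in range(30):
    w_snap = w.copy()          # snapshot point for variance reduction
    g_snap = full_grad(w_snap) # full gradient at the snapshot
    for _ in range(n // b):
        idx = rng.choice(n, size=b, replace=False)
        # SVRG-style estimator: unbiased for full_grad(w), with variance
        # that shrinks as w approaches the snapshot point.
        g = batch_grad(w, idx) - batch_grad(w_snap, idx) + g_snap
        w -= eta * g

print("final objective:",
      0.5 * np.mean((A @ w - y) ** 2) + 0.5 * lam * w @ w)
```

The snapshot correction term is what distinguishes this estimator from plain mini-batch SGD: its variance vanishes at the snapshot, which is what lets variance-reduced methods use a constant step size.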

