Conditional Accelerated Lazy Stochastic Gradient Descent

Abstract

In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with an optimal number of calls to a stochastic first-order oracle and convergence rate $O\left(\frac{1}{\varepsilon^2}\right)$, improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of Hazan and Kale (2012) with convergence rate $O\left(\frac{1}{\varepsilon^4}\right)$.
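The algorithm is projection-free: each iteration queries a linear minimization oracle (LMO) over the feasible set instead of computing a (potentially expensive) projection. As a rough illustration of the projection-free template the paper builds on, here is a minimal sketch of a stochastic Frank-Wolfe loop over the probability simplex. This is not the paper's actual method, which additionally uses acceleration and lazy (approximate) LMO calls; the function names, step-size schedule, and toy least-squares objective are all illustrative assumptions.

```python
import numpy as np

def lmo_simplex(grad):
    """Linear minimization oracle over the probability simplex:
    argmin_{v in simplex} <grad, v> is attained at a vertex (basis vector)."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def stochastic_frank_wolfe(stoch_grad, dim, iters=1000, seed=0):
    """Minimal projection-free stochastic Frank-Wolfe loop.
    `stoch_grad(x, rng)` returns an unbiased estimate of the gradient at x."""
    rng = np.random.default_rng(seed)
    x = np.full(dim, 1.0 / dim)          # start at the simplex center
    for t in range(1, iters + 1):
        g = stoch_grad(x, rng)           # stochastic first-order oracle call
        v = lmo_simplex(g)               # LMO call instead of a projection
        gamma = 2.0 / (t + 2)            # classic Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v  # convex combination stays feasible
    return x

# Toy usage: f(x) = 0.5 * ||A x - b||^2 with single-row stochastic gradients.
if __name__ == "__main__":
    A = np.random.default_rng(1).normal(size=(50, 10))
    b = A @ (np.ones(10) / 10)           # optimum is the uniform vector

    def stoch_grad(x, rng):
        i = rng.integers(0, A.shape[0])  # sample one row uniformly
        return A.shape[0] * (A[i] @ x - b[i]) * A[i]  # unbiased estimate

    x = stochastic_frank_wolfe(stoch_grad, dim=10)
    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

The design point this sketch highlights is the oracle structure: feasibility is maintained purely through convex combinations of LMO outputs, so no projection onto the simplex is ever needed. The paper's contribution lies in how many such oracle calls are required, not in this basic loop.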
