
Asynchronous stochastic convex optimization: the noise is in the noise and SGD don’t care

2020-02-04

Abstract 

We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from asynchrony. We also give empirical evidence demonstrating the strong performance of asynchronous, parallel stochastic optimization schemes, demonstrating that the robustness inherent to stochastic approximation problems allows substantially faster parallel and asynchronous solution methods. In short, we show that for many stochastic approximation problems, as Freddie Mercury sings in Queen’s Bohemian Rhapsody, “Nothing really matters.”
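To make the setting concrete, below is a minimal sketch (not the authors' code) of the kind of lock-free asynchronous SGD procedure the abstract refers to: several worker threads apply stochastic gradient updates to a shared parameter vector with no synchronization, here on a synthetic least-squares problem. All names (make_data, run_worker, async_sgd) and parameter choices are illustrative assumptions, not taken from the paper.

```python
# Minimal illustrative sketch of lock-free asynchronous SGD on least squares.
# Workers read and write the shared vector x without locks, so updates may
# interleave; the paper's point is that this asynchrony noise is dominated
# asymptotically by the sampling noise of SGD itself.
import threading
import numpy as np

def make_data(n=5000, d=20, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))
    x_star = rng.standard_normal(d)
    b = A @ x_star + noise * rng.standard_normal(n)
    return A, b, x_star

def run_worker(x, A, b, steps, step_size, seed):
    # Each worker samples rows independently and updates the shared x in place.
    # (In CPython the GIL serializes individual operations, but reads and writes
    # of x still interleave across workers, which is the asynchrony being modeled.)
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    for t in range(1, steps + 1):
        i = rng.integers(n)
        g = (A[i] @ x - b[i]) * A[i]        # stochastic gradient of 0.5*(a_i^T x - b_i)^2
        x -= (step_size / np.sqrt(t)) * g   # unsynchronized in-place update

def async_sgd(A, b, n_workers=4, steps=20000, step_size=0.05):
    x = np.zeros(A.shape[1])
    threads = [threading.Thread(target=run_worker,
                                args=(x, A, b, steps, step_size, s))
               for s in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return x

if __name__ == "__main__":
    A, b, x_star = make_data()
    x_hat = async_sgd(A, b)
    print("distance to x*:", np.linalg.norm(x_hat - x_star))
```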

