
Distributed Delayed Stochastic Optimization


Abstract

We analyze the convergence of gradient-based optimization algorithms whose updates depend on delayed stochastic gradient information. The main application of our results is to the development of distributed minimization algorithms where a master node performs parameter updates while worker nodes compute stochastic gradients based on local information in parallel, which may give rise to delays due to asynchrony. Our main contribution is to show that for smooth stochastic problems, the delays are asymptotically negligible. In application to distributed optimization, we show n-node architectures whose optimization error in stochastic problems, in spite of asynchronous delays, scales asymptotically as O(1/√(nT)) after T iterations, which is known to be optimal even in the absence of delays.
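To make the delayed-update setting concrete, below is a minimal Python sketch (not the authors' code) of stochastic gradient descent in which the gradient applied at step t was computed at a parameter vector from tau steps earlier, mimicking the staleness introduced by asynchronous workers. The quadratic objective, the fixed delay tau, the step-size choice, and all names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Sketch of delayed stochastic gradient descent on a toy least-squares
# objective f(x) = 0.5 * E[(a^T x - b)^2]. The gradient used at step t is
# evaluated at the stale parameter x_{t - tau}, simulating asynchronous delay.
rng = np.random.default_rng(0)
dim, tau, steps = 5, 3, 5000
x_star = rng.normal(size=dim)            # ground-truth minimizer (for the toy problem)
x = np.zeros(dim)
history = [x.copy()] * (tau + 1)          # buffer of the last tau+1 iterates

def stochastic_grad(x_stale):
    """Noisy gradient of the quadratic objective at a stale parameter vector."""
    a = rng.normal(size=dim)
    noise = rng.normal(scale=0.1)
    return a * (a @ x_stale - a @ x_star + noise)

for t in range(1, steps + 1):
    g = stochastic_grad(history[0])       # gradient based on x_{t - tau}
    eta = 1.0 / np.sqrt(t)                # standard O(1/sqrt(t)) step size
    x = x - eta * g
    history = history[1:] + [x.copy()]    # shift the delay buffer

print("distance to optimum:", np.linalg.norm(x - x_star))
```

Despite every update using a gradient that is tau steps out of date, the iterate still converges for this smooth problem; the paper's analysis makes precise the sense in which such delays become asymptotically negligible.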
