Communication-Efficient Distributed Optimization using an Approximate Newton-type Method

Abstract

We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.
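To make the quadratic case concrete, below is a minimal sketch of a Newton-type distributed iteration of this flavor: each machine takes a Newton-like step against its own local Hessian, and the resulting solutions are averaged. Because all machines see data from the same distribution, the local Hessians concentrate around the global one as the per-machine sample size grows, which is the intuition behind the convergence rate improving with data size. Everything here (the synthetic least-squares data, the simplified parameter choices, and all variable names) is an illustrative assumption, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 8, 500, 10  # machines, samples per machine, dimension

# Synthetic least-squares data: every machine samples from the same
# distribution, so local Hessians concentrate around the global Hessian.
w_star = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(m)]
y = [Xi @ w_star + 0.1 * rng.normal(size=n) for Xi in X]

# Local quadratics f_i(w) = 0.5 * w'A_i w - b_i'w (up to a constant),
# with A_i = X_i'X_i / n and b_i = X_i'y_i / n.
A = [Xi.T @ Xi / n for Xi in X]
b = [Xi.T @ yi / n for Xi, yi in zip(X, y)]
A_bar, b_bar = sum(A) / m, sum(b) / m

def global_grad(w):
    # Gradient of the averaged objective; computing it requires one
    # round of communication (averaging the local gradients).
    return A_bar @ w - b_bar

# Approximate-Newton iteration (simplified, assumed parameters):
# each machine solves its local system against the shared global
# gradient, then the local solutions are averaged -- two rounds of
# communication per iteration, each of size O(d).
w = np.zeros(d)
for t in range(5):
    g = global_grad(w)
    w = np.mean([w - np.linalg.solve(Ai, g) for Ai in A], axis=0)
    print(f"iter {t}: ||grad|| = {np.linalg.norm(global_grad(w)):.2e}")
```

On data like this the gradient norm drops by orders of magnitude per iteration, since the averaged inverse local Hessians closely approximate the inverse global Hessian; with highly heterogeneous data across machines the approximation, and hence the rate, degrades.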
