Communication Complexity of Distributed Convex Learning and Optimization

Abstract

We study the fundamental limits of communication-efficient distributed methods for convex learning and optimization, under different assumptions on the information available to individual machines and on the types of functions considered. We identify cases where existing algorithms are already worst-case optimal, as well as cases where further improvement is still possible. Among other things, our results indicate that without similarity between the local objective functions (whether due to statistical data similarity or otherwise), many communication rounds may be required, even if the machines have unbounded computational power.
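To make the setting concrete: a standard formalization of distributed convex optimization has m machines, each holding a local convex function F_i, jointly minimizing the average F(w) = (1/m) Σ_i F_i(w), with cost measured in communication rounds rather than local computation. The sketch below shows the simplest round-based baseline, distributed gradient descent with one gradient exchange per round; the quadratic objectives, variable names, and parameters are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's): m machines each hold a
# local quadratic F_i(w) = 0.5 * w^T A_i w - b_i^T w, and we minimize the
# average F(w) = (1/m) * sum_i F_i(w). Each loop iteration below is one
# communication round, in which every machine sends its local gradient.

m, d = 4, 10                                  # machines, dimension
rng = np.random.default_rng(0)

A = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(m)]
A = [Ai @ Ai.T for Ai in A]                   # make each A_i positive semi-definite
b = [rng.standard_normal(d) for _ in range(m)]

def local_gradient(i, w):
    """Gradient of machine i's local objective; uses only machine i's data."""
    return A[i] @ w - b[i]

w = np.zeros(d)
step = 0.05
for _ in range(200):                          # each iteration = one communication round
    # Round: every machine broadcasts grad F_i(w) (O(d) numbers each);
    # the averaged gradient drives a synchronized update of w.
    avg_grad = sum(local_gradient(i, w) for i in range(m)) / m
    w -= step * avg_grad

print("||grad F(w)|| after 200 rounds:",
      np.linalg.norm(sum(local_gradient(i, w) for i in range(m)) / m))
```

The paper's lower bounds concern exactly this round count: without similarity between the local F_i (statistical or otherwise), many such communication rounds may be required, no matter how much local computation each machine performs between rounds.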

