DECENTRALIZED DEEP LEARNING WITH ARBITRARY COMMUNICATION COMPRESSION

2020-01-02

Abstract

Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context. We show that CHOCO-SGD achieves linear speedup in the number of workers for arbitrarily high compression ratios on general non-convex functions and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over decentralized user devices, connected by a peer-to-peer network, and (ii) in a datacenter.
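
The structure of such a compressed-gossip update can be illustrated with a short sketch. Below is a minimal, single-process simulation of one CHOCO-SGD-style round with a top-k sparsifier; the function names, the choice of compressor, and the synchronous loop over all workers are illustrative assumptions made here for readability, not the paper's implementation. Only the overall shape follows the abstract: a local SGD step, a compressed update of a publicly shared model copy, and a gossip (consensus) step weighted by a mixing matrix W of the communication topology.

```python
import numpy as np


def top_k(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest
    (a simple sparsifying compressor; assumed here for illustration)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out


def compressed_gossip_round(x, x_hat, grads, W, lr, gamma, k):
    """One synchronous round of compressed-gossip SGD for n workers.

    x     : (n, d) local models
    x_hat : (n, d) publicly shared (compressed) copies of the models
    grads : (n, d) stochastic gradients evaluated at the local models
    W     : (n, n) symmetric, doubly stochastic mixing matrix of the topology
    lr    : SGD step size
    gamma : consensus (gossip) step size
    k     : number of coordinates kept by the top-k compressor
    """
    n, _ = x.shape
    # local stochastic gradient step
    x = x - lr * grads
    # each worker communicates only a compressed correction to its public copy
    q = np.stack([top_k(x[i] - x_hat[i], k) for i in range(n)])
    x_hat = x_hat + q
    # gossip step: move local models toward the neighborhood average of the copies
    x = x + gamma * (W - np.eye(n)) @ x_hat
    return x, x_hat
```

In an actual decentralized deployment each worker would hold only its own model and the shared copies of its neighbors, exchanging the compressed vectors q over the peer-to-peer network rather than operating on the full (n, d) arrays as in this sketch.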
