
D2 : Decentralized Training over Decentralized Data

2020-03-16

Abstract

While training a machine learning model using multiple workers, each of which collects data from its own data source, it would be useful when the data collected from different workers are unique and different. Ironically, recent analysis of decentralized parallel stochastic gradient descent (D-PSGD) relies on the assumption that the data hosted on different workers are not too different. In this paper, we ask the question: Can we design a decentralized parallel stochastic gradient descent algorithm that is less sensitive to the data variance across workers? We present $D^2$, a novel decentralized parallel stochastic gradient descent algorithm designed for large data variance among workers (imprecisely, "decentralized" data). The core of $D^2$ is a variance reduction extension of D-PSGD. It improves the convergence rate from $O\big(\tfrac{\sigma}{\sqrt{nT}} + \tfrac{(n\zeta^2)^{1/3}}{T^{2/3}}\big)$ to $O\big(\tfrac{\sigma}{\sqrt{nT}}\big)$, where $\zeta^2$ denotes the variance among data on different workers. As a result, $D^2$ is robust to data variance among workers. We empirically evaluated $D^2$ on image classification tasks, where each worker has access to only the data of a limited set of labels, and find that $D^2$ significantly outperforms D-PSGD.
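The abstract describes $D^2$ only at a high level, as a variance-reduction extension of D-PSGD. The toy sketch below (an illustration, not the authors' implementation) contrasts a plain D-PSGD step with one common rendering of the $D^2$-style update, $x_{t+1} = W\big(2x_t - x_{t-1} - \gamma(g_t - g_{t-1})\big)$, on a synthetic least-squares problem in which each worker's local optimum differs sharply. The ring mixing matrix $W$, step size $\gamma$, and quadratic local losses are assumptions made only for this example; consult the paper for the exact algorithm.

```python
# Illustrative sketch only (not the authors' reference implementation): a toy
# decentralized least-squares problem contrasting a plain D-PSGD step with a
# D^2-style variance-reduced step. The ring topology W, step size gamma, and
# quadratic local losses are assumptions made just for this example.
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim, gamma, T = 4, 5, 0.05, 200

# Doubly stochastic mixing (gossip) matrix for a ring of 4 workers.
W = np.zeros((n_workers, n_workers))
for i in range(n_workers):
    W[i, i] = 0.5
    W[i, (i - 1) % n_workers] = 0.25
    W[i, (i + 1) % n_workers] = 0.25

# Worker i holds f_i(x) = 0.5 * ||x - b_i||^2 with very different b_i,
# i.e. large "data variance" (zeta^2) across workers.
B = 5.0 * rng.normal(size=(n_workers, dim))

def stoch_grad(X):
    # Local gradients (x_i - b_i) plus noise, standing in for stochastic gradients.
    return (X - B) + 0.1 * rng.normal(size=X.shape)

# D-PSGD-style update: gossip-average neighbors, then take a local gradient step.
X = np.zeros((n_workers, dim))
for _ in range(T):
    X = W @ X - gamma * stoch_grad(X)

# D^2-style update (sketch): apply the gossip step to
# 2*x_t - x_{t-1} - gamma*(g_t - g_{t-1}); the gradient difference cancels the
# bias that heterogeneous local data introduces into each worker's model.
Y_prev = np.zeros((n_workers, dim))
g_prev = stoch_grad(Y_prev)
Y = W @ (Y_prev - gamma * g_prev)        # ordinary first step
for _ in range(T - 1):
    g = stoch_grad(Y)
    Y, Y_prev, g_prev = W @ (2 * Y - Y_prev - gamma * (g - g_prev)), Y, g

x_star = B.mean(axis=0)                  # minimizer of the average loss
print("D-PSGD   mean worker error:", np.linalg.norm(X - x_star, axis=1).mean())
print("D^2-like mean worker error:", np.linalg.norm(Y - x_star, axis=1).mean())
```

In this toy setup, the heterogeneity-induced bias of the individual D-PSGD models shows up as a residual error that scales with the spread of the local optima, while the corrected update drives every worker toward the minimizer of the average loss, which is the qualitative behavior the abstract attributes to $D^2$.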
