Abstract
While training a machine learning model using multiple workers, each of which collects data from its own data source, it would be useful when the data collected from different workers are unique and different. Ironically, recent analysis of decentralized parallel stochastic gradient descent (D-PSGD) relies on the assumption that the data hosted on different workers are not too different. In this paper, we ask the question: Can we design a decentralized parallel stochastic gradient descent algorithm that is less sensitive to the data variance across workers? We present D$^2$, a novel decentralized parallel stochastic gradient descent algorithm designed for large data variance among workers (imprecisely, “decentralized” data). The core of D$^2$ is a variance reduction extension of D-PSGD. It improves the convergence rate from $O\!\left(\frac{\sigma}{\sqrt{nT}} + \frac{(n\zeta^2)^{1/3}}{T^{2/3}}\right)$ to $O\!\left(\frac{\sigma}{\sqrt{nT}}\right)$, where $\zeta^2$ denotes the variance among data on different workers. As a result, D$^2$ is robust to data variance among workers. We empirically evaluate D$^2$ on image classification tasks, where each worker has access to only the data of a limited set of labels, and find that D$^2$ significantly outperforms D-PSGD.
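The abstract only names the algorithmic idea, so the sketch below illustrates, under our own reading, how a D$^2$-style worker update differs from a plain D-PSGD step. It is a minimal illustration, not the paper's reference implementation; the helper names (grad_fn, neighbor_average, lr) are assumptions introduced here.

def dpsgd_step(params, grad_fn, lr, neighbor_average):
    # Plain D-PSGD: take a local stochastic gradient step, then gossip-average
    # the result with neighboring workers (the mixing matrix W is hidden inside
    # the assumed helper neighbor_average).
    return neighbor_average(params - lr * grad_fn(params))

def d2_step(params, prev_params, prev_grad, grad_fn, lr, neighbor_average):
    # Variance-reduction-style step in the spirit of D^2: the previous iterate
    # and previous stochastic gradient are reused so that the bias caused by
    # differing data distributions across workers cancels over iterations.
    grad = grad_fn(params)
    corrected = 2.0 * params - prev_params - lr * (grad - prev_grad)
    new_params = neighbor_average(corrected)
    return new_params, grad  # grad becomes prev_grad at the next iteration

On the very first iteration, before a previous iterate and gradient exist, a single D-PSGD step can bootstrap the recursion; afterwards each worker only needs to cache one extra iterate and one extra gradient.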