Abstract
In this paper, we propose a decentralized distributed algorithm with stochastic communication
among nodes, building on a sampling method
called “edge sampling”. Such a sampling algorithm
allows us to avoid the heavy peer-to-peer communication cost when combining neighboring weights
on dense networks while still maintaining a comparable convergence rate. In particular, we quantitatively analyze its theoretical convergence properties, as well as the optimal sampling rate over
the underlying network. When compared with previous methods, our solution is shown to be unbiased and communication-efficient, and to have lower sampling variance. These theoretical findings are validated by numerical experiments on both the mixing rates of Markov chains and distributed machine learning problems.