FEDERATED LEARNING WITH MATCHED AVERAGING

2020-01-02

Abstract

Federated learning allows edge devices to collaboratively learn a shared model while keeping the training data on device, decoupling the ability to do model training from the need to store the data in the cloud. We propose the Federated Matched Averaging (FedMA) algorithm, designed for federated learning of modern neural network architectures, e.g., convolutional neural networks (CNNs) and LSTMs. FedMA constructs the shared global model in a layer-wise manner by matching and averaging hidden elements (i.e., channels for convolution layers; hidden states for LSTMs; neurons for fully connected layers) with similar feature extraction signatures. Our experiments indicate that FedMA outperforms popular state-of-the-art federated learning algorithms on deep CNN and LSTM architectures trained on real-world datasets, while improving communication efficiency.
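The core idea of matched averaging, as opposed to the coordinate-wise averaging used by FedAvg, can be sketched for a single fully connected layer. The toy example below is illustrative only: it matches two clients' neurons with the Hungarian algorithm on squared weight distances, a simplification standing in for the paper's Bayesian nonparametric matching objective, and all function names and shapes are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def matched_average(w1, w2):
    """Average two clients' layer weights after permutation matching.

    w1, w2: (num_neurons, input_dim) weight matrices for one hidden layer.
    Neurons are matched by minimizing pairwise squared Euclidean distance,
    a simplified stand-in for FedMA's matching objective.
    """
    # cost[i, j] = squared distance between neuron i of client 1
    # and neuron j of client 2
    cost = ((w1[:, None, :] - w2[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    # Permute client 2's neurons to align with client 1, then average
    return 0.5 * (w1[rows] + w2[cols])


# Two clients whose neurons are identical up to a permutation and small noise,
# the situation where naive coordinate-wise averaging destroys structure.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
client1 = base
client2 = base[[2, 0, 3, 1]] + 0.01 * rng.normal(size=(4, 8))

avg = matched_average(client1, client2)
# After matching, the averaged neurons stay close to the shared base weights
print(bool(np.abs(avg - base).max() < 0.05))  # → True
```

Averaging the raw matrices without matching would blend unrelated neurons; matching first recovers the shared model despite the permuted ordering, which is the intuition behind FedMA's layer-wise construction.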
