
Gradient Coding: Avoiding Stragglers in Distributed Learning

2020-03-09

Abstract

We propose a novel coding-theoretic framework for mitigating stragglers in distributed learning. We show how carefully replicating data blocks and coding across gradients can provide tolerance to failures and stragglers for synchronous Gradient Descent. We implement our schemes in Python (using MPI), run them on Amazon EC2, and compare against baseline approaches in terms of running time and generalization error.
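To make the coding idea concrete, below is a minimal NumPy sketch of a small gradient coding instance: n = 3 workers, data split into 3 partitions, each partition replicated on 2 workers, so the master can recover the full gradient g1 + g2 + g3 from any 2 workers (tolerating s = 1 straggler). The specific encoding matrix B, decoding vectors, and toy least-squares gradient here are illustrative assumptions for exposition, not the authors' MPI implementation.

```python
# Gradient coding sketch: 3 workers, 1-straggler tolerance.
# Each worker holds 2 of 3 data partitions (replication factor s + 1 = 2)
# and sends one fixed linear combination of its partial gradients; the
# master decodes the full gradient from ANY 2 workers' messages.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(9, 4))      # toy design matrix, 9 samples (assumed)
y = rng.normal(size=9)
w = np.zeros(4)                  # current model parameters

# Split rows into 3 partitions; compute each partial gradient of the
# least-squares loss 0.5 * ||Xw - y||^2.
parts = np.array_split(np.arange(9), 3)
g = [X[p].T @ (X[p] @ w - y[p]) for p in parts]   # g[0], g[1], g[2]

# Encoding matrix B: row i is the combination worker i sends.
# Worker 1: g1/2 + g2,  Worker 2: g2 - g3,  Worker 3: g1/2 + g3.
B = np.array([[0.5, 1.0,  0.0],
              [0.0, 1.0, -1.0],
              [0.5, 0.0,  1.0]])
sent = [B[i] @ np.stack(g) for i in range(3)]     # worker i's message

# Decoding vectors a_S satisfying a_S @ B = [1, 1, 1] for every 2-subset
# S of non-stragglers (the zero entry ignores the straggler).
decode = {(0, 1): np.array([2.0, -1.0, 0.0]),
          (0, 2): np.array([1.0,  0.0, 1.0]),
          (1, 2): np.array([0.0,  1.0, 2.0])}

full = sum(g)                                     # ground truth: g1+g2+g3
for S, a in decode.items():
    est = sum(a[i] * sent[i] for i in S)          # use only workers in S
    assert np.allclose(est, full)                 # exact recovery
```

The replication factor s + 1 is the price of straggler tolerance: each worker computes (s + 1) times as many partial gradients so that any n − s of the coded messages carry enough information to decode the exact gradient sum.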
