
Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates

2020-03-16

Abstract

In this paper, we develop distributed optimization algorithms that are provably robust against Byzantine failures, i.e., arbitrary and potentially adversarial behavior in distributed computing systems, with a focus on achieving optimal statistical performance. A main result of this work is a sharp analysis of two robust distributed gradient descent algorithms based on median and trimmed mean operations, respectively. We prove statistical error rates for all of strongly convex, non-strongly convex, and smooth non-convex population loss functions. In particular, these algorithms are shown to achieve order-optimal statistical error rates for strongly convex losses. To achieve better communication efficiency, we further propose a median-based distributed algorithm that is provably robust and uses only one communication round. For strongly convex quadratic loss, we show that this algorithm achieves the same optimal error rate as the robust distributed gradient descent algorithms.
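The two aggregation rules named in the abstract, coordinate-wise median and coordinate-wise trimmed mean, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function names, the learning rate lr, and the trim fraction beta are assumptions chosen for the example.

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Aggregate worker gradients by taking the median of each coordinate.

    gradients: array of shape (m, d), one d-dimensional gradient per machine.
    """
    return np.median(gradients, axis=0)

def coordinate_wise_trimmed_mean(gradients, beta):
    """In each coordinate, discard the beta-fraction of largest and smallest
    values across machines, then average the remaining values.
    """
    m = gradients.shape[0]
    k = int(np.floor(beta * m))        # values trimmed from each end (needs 2k < m)
    sorted_grads = np.sort(gradients, axis=0)
    return sorted_grads[k:m - k].mean(axis=0)

def robust_gd_step(theta, worker_gradients, lr=0.1, rule="median", beta=0.1):
    """One server-side step of robust distributed gradient descent:
    each worker sends its local gradient; the server aggregates robustly and updates.
    """
    grads = np.stack(worker_gradients)  # shape (m, d)
    if rule == "median":
        agg = coordinate_wise_median(grads)
    else:
        agg = coordinate_wise_trimmed_mean(grads, beta)
    return theta - lr * agg
```

Because both rules operate coordinate-wise, a bounded fraction of Byzantine workers can corrupt at most that fraction of the values entering each coordinate's median or trimmed mean, which is what the robustness analysis in the paper exploits.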

