
Do Outliers Ruin Collaboration?

2020-03-20

Abstract

We consider the problem of learning a binary classifier from n different data sources, among which at most an η fraction are adversarial. The overhead is defined as the ratio between the sample complexity of learning in this setting and that of learning the same hypothesis class on a single data distribution. We present an algorithm that achieves an O(ηn + ln n) overhead, which is proved to be worst-case optimal. We also discuss the potential challenges to the design of a computationally efficient learning algorithm with a small overhead.
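The overhead the abstract refers to can be written out explicitly. The notation below (m_1, m_C) is ours, not the paper's: write m_1(ε, δ) for the number of samples needed to (ε, δ)-learn the hypothesis class on a single data distribution, and m_C(ε, δ) for the total number of samples the collaborative algorithm draws across the n sources. The claimed bound then reads:

```latex
% Overhead: ratio of collaborative to single-distribution sample complexity.
% m_1 and m_C are our (hypothetical) names for the two sample complexities;
% at most an \eta fraction of the n data sources are adversarial.
\mathrm{overhead}
  \;=\; \frac{m_C(\varepsilon,\delta)}{m_1(\varepsilon,\delta)}
  \;=\; O(\eta n + \ln n).
```

In particular, when no source is adversarial (η = 0), the bound specializes to an O(ln n) overhead, so collaborating across n sources costs only a logarithmic factor over learning a single distribution.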

Previous: Classification from Pairwise Similarity and Unlabeled Data

Next: Escaping Saddles with Stochastic Gradients
