
Data Poisoning against Differentially-Private Learners: Attacks and Defenses

2019-10-10
Abstract: Data poisoning attacks aim to manipulate the model produced by a learning algorithm by adversarially modifying the training set. We consider differential privacy as a defensive measure against this type of attack. We show that private learners are resistant to data poisoning attacks when the adversary is only able to poison a small number of items. However, this protection degrades as the adversary is allowed to poison more data. We empirically evaluate this protection by designing attack algorithms targeting objective and output perturbation learners, two standard approaches to differentially-private machine learning. Experiments show that our methods are effective when the attacker is allowed to poison sufficiently many training items.
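The abstract refers to objective and output perturbation, two standard mechanisms for differentially-private empirical risk minimization. As a rough illustration only (not the paper's code), the sketch below shows output perturbation for L2-regularized logistic regression in the style of Chaudhuri et al. (2011): train the non-private model, then add noise calibrated to the sensitivity 2/(nλ) of the regularized minimizer. The function name `output_perturbation` and the assumptions that features satisfy ||x||₂ ≤ 1 and the loss is 1-Lipschitz are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def output_perturbation(X, y, epsilon, lam, rng=None):
    """Epsilon-differentially-private logistic regression via output
    perturbation (Chaudhuri et al., 2011 style). Assumes ||x_i||_2 <= 1
    and a 1-Lipschitz loss, so the L2 sensitivity of the regularized
    minimizer is 2 / (n * lam). Illustrative sketch, not the paper's code."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Non-private L2-regularized ERM solution; sklearn's C maps to 1/(n*lam)
    # so the objective matches (1/n) * sum_i loss_i + (lam/2) * ||w||^2.
    clf = LogisticRegression(C=1.0 / (n * lam), fit_intercept=False)
    clf.fit(X, y)
    w = clf.coef_.ravel()

    # Noise b has density proportional to exp(-beta * ||b||_2) with
    # beta = n * lam * epsilon / 2: uniform direction, Gamma(d, 1/beta) norm.
    beta = n * lam * epsilon / 2.0
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    norm = rng.gamma(shape=d, scale=1.0 / beta)
    return w + norm * direction
```

A private release would then be `output_perturbation(X, y, epsilon=1.0, lam=0.01)`; a poisoning adversary tries to shift the released weights by modifying a small number of training rows, and the added noise partially masks the effect of any single modified item.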

