CERTIFIED DEFENSES FOR ADVERSARIAL PATCHES

2020-01-02

Abstract

Adversarial patch attacks are one of the most practical threat models against real-world computer vision systems. This paper studies the certified and empirical performance of defenses against patch attacks. We begin with a set of experiments showing that most existing defenses, which work by pre-processing input images to mitigate adversarial noise, are easily broken by simple white-box adversaries. Motivated by this finding, we present an approach for certified defense against patch attacks, and propose methods for fast training of these models. Finally, we experiment with different patch shapes at test time, and observe that robustness transfers across shapes surprisingly well.
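To make the certification idea concrete, here is a minimal sketch of how a patch certificate can be checked. It is not the paper's implementation: it assumes a toy linear classifier, uses simple interval arithmetic to bound the logits, and exhaustively slides a square patch (whose pixels may take any value in [0, 1]) over every location. The function names (`interval_logits`, `certify_patch`) and the tiny model are illustrative choices, not from the paper.

```python
import numpy as np

def interval_logits(W, b, lo, hi):
    """Interval arithmetic through one linear layer.

    Given element-wise input bounds lo <= x <= hi, returns lower and
    upper bounds on the logits W @ x + b.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lower = W_pos @ lo + W_neg @ hi + b
    upper = W_pos @ hi + W_neg @ lo + b
    return lower, upper

def certify_patch(W, b, x, label, patch=2):
    """Certify a flattened square image x against any patch x patch sticker.

    For every patch location, pixels inside the patch are allowed to take
    any value in [0, 1]; the prediction is certified only if the true
    label's worst-case logit still beats every other label's best case
    at every location.
    """
    side = int(np.sqrt(x.size))
    img = x.reshape(side, side)
    for i in range(side - patch + 1):
        for j in range(side - patch + 1):
            lo, hi = img.copy(), img.copy()
            lo[i:i + patch, j:j + patch] = 0.0  # patch pixels unconstrained
            hi[i:i + patch, j:j + patch] = 1.0  # within [0, 1]
            lb, ub = interval_logits(W, b, lo.ravel(), hi.ravel())
            ub_others = np.delete(ub, label)
            if lb[label] <= ub_others.max():
                return False  # some patch at (i, j) could flip the label
    return True
```

The exhaustive sweep over locations is what makes the certificate sound but also expensive, which is why the paper emphasizes fast training and evaluation; note too that interval bounds computed on one patch shape can still be informative for other shapes, consistent with the transfer observation above.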
