BREAKING CERTIFIED DEFENSES: SEMANTIC ADVERSARIAL EXAMPLES WITH SPOOFED ROBUSTNESS CERTIFICATES

2020-01-02

Abstract
To deflect adversarial attacks, a range of “certified” classifiers have been proposed. In addition to labeling an image, certified classifiers produce (when possible) a certificate guaranteeing that the input image is not an ℓp-bounded adversarial example. We present a new attack that exploits not only the labeling function of a classifier, but also the certificate generator. The proposed method applies large perturbations that place images far from a class boundary while maintaining the imperceptibility property of adversarial examples. The proposed “Shadow Attack” causes certifiably robust networks to mislabel an image and simultaneously produce a “spoofed” certificate of robustness.
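
As a rough illustration of the idea described above, the sketch below (PyTorch) takes one optimization step on a large, image-wide perturbation while regularizing it to stay spatially smooth, color-uniform, and consistent across channels, so that it reads as a natural tint or shadow rather than noise. The penalty terms, their weights (lam_tv, lam_c, lam_s), and the function name shadow_attack_step are illustrative assumptions, not details stated in this abstract; spoofing the certificate itself would additionally require optimizing against the defense's certificate generator, which is omitted here.

```python
import torch
import torch.nn.functional as F

def shadow_attack_step(model, x, delta, y_target,
                       lam_tv=0.3, lam_c=1.0, lam_s=0.5, lr=0.1):
    """One optimization step of a semantically constrained attack sketch.

    Pushes the classifier toward a chosen (wrong) label while penalizing
    perturbations that are not smooth, not color-uniform, or not similar
    across channels. All penalty terms here are assumed regularizers for
    illustration; they are not quoted from the paper's abstract.
    """
    delta = delta.clone().detach().requires_grad_(True)

    logits = model(x + delta)                        # x, delta: (N, C, H, W)
    misclassify = F.cross_entropy(logits, y_target)  # low when labeled as target

    # Total-variation penalty: keep the perturbation spatially smooth.
    tv = (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean() \
       + (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean()

    # Per-channel mean-shift penalty: prefer a small global tint over noise.
    color = delta.mean(dim=(-2, -1)).pow(2).mean()

    # Cross-channel similarity penalty: prefer grayscale, shadow-like changes.
    sim = (delta - delta.mean(dim=1, keepdim=True)).pow(2).mean()

    loss = misclassify + lam_tv * tv + lam_c * color + lam_s * sim
    grad = torch.autograd.grad(loss, delta)[0]

    # Signed-gradient descent on the perturbation. There is no norm-ball
    # projection: the perturbation may be large; only its appearance is
    # constrained by the penalties above.
    return (delta - lr * grad.sign()).detach()
```

Iterating this step from a zero or random perturbation yields a large but visually benign change to the image. In the setting of the abstract, the same perturbation would also have to fool the certificate generator (for example, a randomized-smoothing certificate), which this sketch does not model.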
