DETECTING AND DIAGNOSING ADVERSARIAL IMAGES WITH CLASS-CONDITIONAL CAPSULE RECONSTRUCTIONS

2020-01-02

Abstract

Adversarial examples raise questions about whether neural network models are sensitive to the same visual features as humans. In this paper, we first detect adversarial examples, or otherwise corrupted images, based on a class-conditional reconstruction of the input. To specifically attack our detection mechanism, we propose the Reconstructive Attack, which seeks both to cause a misclassification and to keep the reconstruction error low. This reconstructive attack produces undetected adversarial examples, but with a much lower success rate. Across all these attacks, we find that CapsNets consistently perform better than convolutional networks. We then diagnose the adversarial examples for CapsNets and find that the success of the reconstructive attack is proportional to the visual similarity between the source and target class. Additionally, the resulting perturbations can cause the input image to appear visually more like the target class and hence become non-adversarial. These findings suggest that CapsNets use features that are more aligned with human perception and address the central issue raised by adversarial examples.
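The detection mechanism described in the abstract can be sketched as a simple thresholding rule: reconstruct the input conditioned on the predicted class and flag it as adversarial when the reconstruction error is large. The sketch below is a toy illustration only, with all names hypothetical; fixed class prototypes stand in for the CapsNet's class-conditional decoder, and a nearest-prototype rule stands in for the classifier.

```python
import numpy as np

# Toy stand-in for a class-conditional decoder: each class maps to a
# fixed "prototype" image (the real model reconstructs from the winning
# capsule's pose vector).
PROTOTYPES = {0: np.zeros(16), 1: np.ones(16)}

def classify(x):
    # Nearest-prototype classifier (stand-in for the CapsNet forward pass).
    return min(PROTOTYPES, key=lambda c: np.sum((x - PROTOTYPES[c]) ** 2))

def reconstruction_error(x):
    # L2 error between the input and its class-conditional reconstruction.
    recon = PROTOTYPES[classify(x)]
    return np.sum((x - recon) ** 2)

def is_adversarial(x, threshold=1.0):
    # Flag inputs that reconstruct poorly under their predicted class.
    return reconstruction_error(x) > threshold

clean = np.full(16, 0.05)       # sits close to the class-0 prototype
attacked = clean.copy()
attacked[:8] += 0.8             # large perturbation, still classified as 0

print(is_adversarial(clean))     # False: reconstruction error ~0.04
print(is_adversarial(attacked))  # True: reconstruction error ~5.14
```

The Reconstructive Attack described above targets exactly this defense: rather than optimizing the misclassification objective alone, it jointly minimizes the reconstruction error so the perturbed input slips under the detection threshold.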

