
Transferable Adversarial Perturbations

Abstract. State-of-the-art deep neural network classifiers are highly vulnerable to adversarial examples, which are designed to mislead classifiers with a very small perturbation. However, the performance of black-box attacks (without knowledge of the model parameters) against deployed models degrades significantly. In this paper, we propose a novel way of crafting perturbations for adversarial examples that enables black-box transfer. We first show that maximizing the distance between natural images and their adversarial examples in the intermediate feature maps improves both white-box attacks (with knowledge of the model parameters) and black-box attacks. We also show that smooth regularization on adversarial perturbations enables transfer across models. Extensive experimental results show that our approach outperforms state-of-the-art methods in both white-box and black-box attacks.
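The two ingredients the abstract describes can be sketched in code. Below is a minimal PyTorch sketch, assuming a torchvision-style classifier and a hand-picked intermediate layer; the feature-distance objective, the total-variation smoothness penalty, and all hyperparameters (epsilon, step size, smooth_weight) are illustrative assumptions, not the paper's exact loss.

```python
# Sketch of an iterative attack that (1) maximizes the distance between the
# intermediate feature maps of a clean image and its adversarial counterpart,
# and (2) penalizes non-smooth perturbations. Layer choice and weights are
# illustrative assumptions.
import torch

def attack_step(model, feature_layer, x, epsilon=8/255, alpha=2/255,
                steps=10, smooth_weight=1e-2):
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)

    # Capture the intermediate feature map via a forward hook.
    feats = {}
    handle = feature_layer.register_forward_hook(
        lambda m, inp, out: feats.__setitem__("out", out))

    with torch.no_grad():
        model(x)
        clean_feat = feats["out"].detach()

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        model(torch.clamp(x + delta, 0, 1))
        adv_feat = feats["out"]

        # (1) push adversarial features away from the clean features
        feat_dist = (adv_feat - clean_feat).flatten(1).norm(dim=1).mean()

        # (2) total-variation penalty keeps the perturbation spatially smooth
        tv = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean() + \
             (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()

        loss = feat_dist - smooth_weight * tv
        loss.backward()

        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent on the loss
            delta.clamp_(-epsilon, epsilon)      # stay within the L-inf budget
        delta.grad.zero_()

    handle.remove()
    return torch.clamp(x + delta, 0, 1).detach()
```

With a torchvision ResNet, feature_layer could for instance be model.layer3; the abstract's claim is that perturbations optimized against intermediate features and kept spatially smooth in this way transfer better to unseen black-box models.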
