MIXUP INFERENCE: BETTER EXPLOITING MIXUP TO DEFEND ADVERSARIAL ATTACKS


2020-01-02

Abstract

It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, a vulnerability that mainly roots from the locally unreasonable behavior of the networks near input examples. Applying mixup in training provides an effective mechanism for improving generalization performance and model robustness against adversarial perturbations, since it induces globally linear behavior in-between training examples. However, in previous work, mixup-trained models only passively defend against adversarial attacks at inference by directly classifying the inputs, so the induced global linearity is not well exploited. Namely, because adversarial perturbations are local, it would be more efficient to actively break their locality via the globality of the model predictions. Inspired by simple geometric intuition, we develop an inference principle, named mixup inference (MI), for mixup-trained models. MI mixes up the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial. Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness of models trained by mixup and its variants.
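The abstract describes MI as mixing the input with random clean samples and classifying the mixtures, so that an adversarial perturbation is shrunk by the mixing weight and spread across different reference points. A minimal sketch of this idea, with an illustrative mixing ratio `lam` and number of mixups `k` (both hypothetical defaults, not the paper's tuned values) and a toy softmax model standing in for a mixup-trained network:

```python
import numpy as np

def mixup_inference(model, x, clean_pool, lam=0.6, k=15, rng=None):
    """Sketch of Mixup Inference (MI): mix the input with k random clean
    samples and average the predictions. If x carries an adversarial
    perturbation delta, each mixture only carries lam * delta, so the
    averaged prediction is less affected by it."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(clean_pool), size=k, replace=False)
    preds = [model(lam * x + (1.0 - lam) * clean_pool[i]) for i in idx]
    return np.mean(preds, axis=0)

def toy_model(z):
    # Toy 2-class linear classifier with softmax, for demonstration only.
    W = np.array([[1.0, -1.0], [-1.0, 1.0]])
    logits = W @ z
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Pool of clean samples and a pretend-adversarial input.
pool = np.stack([np.array([1.0, 0.0])] * 20)
x_adv = np.array([0.2, 0.8])
p = mixup_inference(toy_model, x_adv, pool, lam=0.5, k=5, rng=0)
```

The averaged output `p` is still a probability vector; in the paper's setting the model is mixup-trained, so its predictions behave roughly linearly between the input and the clean samples, which is what makes this averaging meaningful.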

