Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks

Abstract 

The high sensitivity of neural networks to malicious input perturbations raises security concerns. As a steady step toward robust classifiers, we aim to build neural network models that are provably protected against perturbations. Prior certification work requires strong assumptions about network structure and incurs massive computational costs, which limits its range of application. Exploiting the relationship between Lipschitz constants and prediction margins, we present a computationally efficient technique for lower-bounding the size of adversarial perturbations that can deceive a network; the technique is widely applicable to complicated network architectures. Moreover, we propose an efficient training procedure that robustifies networks and significantly enlarges the provably guarded areas around data points. In experimental evaluations, our method provides non-trivial guarantees and enhances robustness even for large networks.
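The certificate described in the abstract rests on a simple relation: if the network is L-Lipschitz under the L2 norm and the prediction margin at an input exceeds √2·L·ε, then no perturbation of L2 norm at most ε can change the prediction, so the guarded radius is margin / (√2·L). The sketch below illustrates this calculation and the Lipschitz-margin training trick of inflating non-target logits. It is a minimal illustration, not the authors' implementation: the function names, the power-iteration step count, and the restriction to fully connected layers are assumptions made here for brevity (the paper also handles convolutions and accounts for the error of the power-iteration estimate).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def spectral_norm(weight: torch.Tensor, n_iter: int = 50) -> torch.Tensor:
    """Estimate the largest singular value of a weight matrix by power iteration.
    Power iteration converges from below, so a real certificate needs an extra
    safety margin on this estimate."""
    w = weight.reshape(weight.shape[0], -1)
    u = F.normalize(torch.randn(w.shape[0], device=w.device), dim=0)
    for _ in range(n_iter):
        v = F.normalize(w.t() @ u, dim=0)
        u = F.normalize(w @ v, dim=0)
    return u @ w @ v

def lipschitz_bound(model: nn.Sequential) -> torch.Tensor:
    """Global L2 Lipschitz bound as the product of per-layer spectral norms.
    ReLU is 1-Lipschitz, so only the linear layers contribute."""
    bound = torch.tensor(1.0)
    for layer in model:
        if isinstance(layer, nn.Linear):
            bound = bound * spectral_norm(layer.weight)
    return bound

def certified_radius(logits: torch.Tensor, labels: torch.Tensor,
                     lip: torch.Tensor) -> torch.Tensor:
    """Guarded L2 radius margin / (sqrt(2) * L); zero for misclassified inputs."""
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    others = logits.scatter(1, labels.unsqueeze(1), float('-inf'))
    margin = (true_logit - others.max(dim=1).values).clamp(min=0.0)
    return margin / (2.0 ** 0.5 * lip)

def lmt_logits(logits: torch.Tensor, labels: torch.Tensor,
               lip: torch.Tensor, c: float = 0.1) -> torch.Tensor:
    """Lipschitz-margin training step: add sqrt(2)*c*L to every non-target
    logit before cross-entropy, pushing the learned margin above the level
    that certifies a radius of at least c."""
    non_target = 1.0 - F.one_hot(labels, logits.shape[1]).float()
    return logits + (2.0 ** 0.5) * c * lip * non_target
```

During training, `lmt_logits` would replace the raw logits inside the usual cross-entropy loss, with the Lipschitz bound recomputed each step so the penalty stays differentiable; at test time, `certified_radius` reports the guaranteed L2 ball around each input.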

