
MACER: ATTACK-FREE AND SCALABLE ROBUST TRAINING VIA MAXIMIZING CERTIFIED RADIUS

2020-01-02

Abstract

Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work (Cohen et al., 2019) shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius. Our code is available at https://github.com/MacerAuthors/macer.
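As a rough illustration of the certification the abstract refers to, the sketch below estimates the certified l2 radius of a randomized-smoothed classifier by Monte Carlo sampling, using the radius formula R = σ/2 · (Φ⁻¹(p_A) − Φ⁻¹(p_B)) from Cohen et al. (2019). This is only a plug-in estimate, not the statistically rigorous certification procedure; the base classifier, function names, and hyperparameters here are placeholders, and the authors' actual training code lives at the linked repository. MACER's contribution is to maximize a differentiable surrogate of this radius during training rather than merely certify a model afterwards.

```python
# Hypothetical sketch of randomized-smoothing certification (Cohen et al., 2019).
# dummy_classifier and certified_radius are illustrative names, not MACER's API.
import numpy as np
from scipy.stats import norm


def dummy_classifier(x: np.ndarray) -> int:
    """Stand-in base classifier: predicts a class from the sign of the mean pixel."""
    return int(x.mean() > 0)


def certified_radius(x, base_classifier, sigma=0.25, n_samples=1000,
                     num_classes=2, seed=0):
    """Monte Carlo estimate of the certified l2 radius of the smoothed classifier
    g(x) = argmax_c P(base_classifier(x + N(0, sigma^2 I)) = c)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_classes)
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        counts[base_classifier(noisy)] += 1
    probs = counts / n_samples

    # Top-1 and runner-up class probabilities under Gaussian noise.
    top2 = np.argsort(probs)[::-1][:2]
    p_a, p_b = probs[top2[0]], probs[top2[1]]
    if p_a <= p_b:
        return int(top2[0]), 0.0

    # Clip to avoid infinite inverse-CDF values at exactly 0 or 1.
    p_a = np.clip(p_a, 1e-6, 1 - 1e-6)
    p_b = np.clip(p_b, 1e-6, 1 - 1e-6)
    # R = sigma/2 * (Phi^{-1}(p_A) - Phi^{-1}(p_B)); MACER trains by maximizing
    # a differentiable surrogate of this quantity over the training set.
    return int(top2[0]), 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))


x = np.random.default_rng(1).standard_normal((3, 32, 32)) + 0.5
pred, radius = certified_radius(x, dummy_classifier)
print(f"smoothed prediction: {pred}, certified l2 radius ~ {radius:.3f}")
```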

