DEFENDING AGAINST PHYSICALLY REALIZABLE ATTACKS ON IMAGE CLASSIFICATION


Abstract
We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.
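To make the abstract's "rectangular occlusion attack" concrete, the following PyTorch sketch illustrates the general idea: search over candidate rectangle positions, then run PGD only on the pixels inside the chosen rectangle. All names, the rectangle size, the gray initialization, the stride, and the PGD hyperparameters are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a rectangular occlusion attack (ROA) in PyTorch.
# Hyperparameters and the position-search strategy are placeholder assumptions.
import torch
import torch.nn.functional as F

def roa_attack(model, x, y, rect_h=20, rect_w=20, stride=5,
               pgd_steps=30, pgd_lr=0.05):
    """Place one rect_h x rect_w rectangle on each image in x and
    adversarially optimize the pixels inside it to maximize the loss."""
    model.eval()
    _, _, H, W = x.shape
    device = x.device

    # 1) Exhaustive search over rectangle positions: try a gray rectangle
    #    at each grid location and keep the position with the highest loss.
    best_loss = torch.full((x.size(0),), -float("inf"), device=device)
    best_pos = torch.zeros(x.size(0), 2, dtype=torch.long, device=device)
    for top in range(0, H - rect_h + 1, stride):
        for left in range(0, W - rect_w + 1, stride):
            x_try = x.clone()
            x_try[:, :, top:top + rect_h, left:left + rect_w] = 0.5
            with torch.no_grad():
                loss = F.cross_entropy(model(x_try), y, reduction="none")
            better = loss > best_loss
            best_loss = torch.where(better, loss, best_loss)
            best_pos[better] = torch.tensor([top, left], device=device)

    # 2) PGD restricted to the pixels inside the chosen rectangle.
    x_adv = x.clone()
    mask = torch.zeros_like(x)
    for i in range(x.size(0)):
        t, l = best_pos[i].tolist()
        x_adv[i, :, t:t + rect_h, l:l + rect_w] = 0.5
        mask[i, :, t:t + rect_h, l:l + rect_w] = 1.0
    x_adv = x_adv.detach().requires_grad_(True)
    for _ in range(pgd_steps):
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv += pgd_lr * grad.sign() * mask  # update rectangle pixels only
            x_adv.clamp_(0.0, 1.0)
    return x_adv.detach()
```

In the defense the abstract describes, such occluded adversarial examples would be generated for each training minibatch and the classifier trained on them, analogous to standard adversarial training but with the rectangular threat model in place of an L-p-bounded perturbation.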
