
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle


Abstract

Deep learning achieves state-of-the-art results in many computer vision and natural language processing tasks. However, recent work has shown that deep networks can be vulnerable to adversarial perturbations, raising serious concerns about their robustness. Adversarial training, typically formulated as a robust optimization problem, is an effective way of improving the robustness of deep networks. A major drawback of existing adversarial training algorithms is the computational overhead of generating adversarial examples, which is typically far greater than that of training the network itself and makes the overall cost of adversarial training prohibitive. In this paper, we show that adversarial training can be cast as a discrete-time differential game. By analyzing the Pontryagin's Maximum Principle (PMP) of the problem, we observe that the adversary update is only coupled with the parameters of the first layer of the network. This inspires us to restrict most of the forward and backward propagation to the first layer of the network during adversary updates, which effectively reduces the number of full forward and backward propagations to only one for each group of adversary updates. We therefore call this algorithm YOPO (You Only Propagate Once). Numerical experiments demonstrate that YOPO achieves comparable defense accuracy with approximately 1/5 to 1/4 of the GPU time of the projected gradient descent (PGD) algorithm [15].

* Equal contribution. † Corresponding authors.
Our code is available at https://github.com/a1600012888/YOPO-You-Only-Propagate-Once
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
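The "one full propagation per group of adversary updates" idea can be illustrated with a minimal PyTorch-style sketch. The names `first_layer`, `rest_of_net`, `epsilon`, `sigma`, `m`, and `n` are assumptions chosen for illustration; this is a sketch of the general mechanism, not the authors' reference implementation.

```python
# Minimal sketch of a YOPO-style adversary: m full propagations, each followed
# by n cheap updates that re-propagate only through the first layer.
# Hypothetical interface; hyperparameters (epsilon, sigma, m, n) are illustrative.
import torch
import torch.nn.functional as F

def yopo_perturb(first_layer, rest_of_net, x, y,
                 epsilon=8 / 255, sigma=2 / 255, m=3, n=5):
    delta = torch.zeros_like(x)
    for _ in range(m):
        # One full forward/backward pass to obtain p = dL/d(first-layer output),
        # which plays the role of the adjoint variable in the PMP view.
        x_adv = (x + delta).detach().requires_grad_(True)
        z = first_layer(x_adv)
        z_slack = z.detach().requires_grad_(True)
        loss = F.cross_entropy(rest_of_net(z_slack), y)
        p = torch.autograd.grad(loss, z_slack)[0].detach()

        # n adversary updates: with p frozen, only the first layer is re-propagated.
        for _ in range(n):
            x_adv = (x + delta).detach().requires_grad_(True)
            z = first_layer(x_adv)
            # Gradient of <p, z> w.r.t. the input approximates dL/dx.
            grad_x = torch.autograd.grad((p * z).sum(), x_adv)[0]
            delta = (delta + sigma * grad_x.sign()).clamp(-epsilon, epsilon)
    return (x + delta).clamp(0, 1).detach()
```

In a training loop one would, for example, call `x_adv = yopo_perturb(model.conv1, model_tail, images, labels)` (where `model_tail` runs the remaining layers) and take an optimizer step on the loss at `x_adv`; the saving comes from performing m full propagations instead of the m*n a comparable PGD adversary would require.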
