ADVERSARIAL TRAINING AND PROVABLE DEFENSES: BRIDGING THE GAP


2020-01-02

Abstract

We propose a new method to train neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model training as a procedure that includes both the verifier and the adversary. In every iteration, the verifier aims to certify the network using a convex relaxation, while the adversary tries to find inputs inside that convex relaxation which cause verification to fail. We experimentally show that this training method is promising and achieves the best of both worlds – it produces a state-of-the-art neural network with certified robustness of 58.1% and accuracy of 78.8% on the challenging CIFAR-10 dataset with a 2/255 ℓ∞ perturbation. This significantly improves over the currently known best results of 53.9% certified robustness and 68.3% accuracy.
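The adversary-inside-the-relaxation idea from the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's method (which propagates convex relaxations layer by layer through a deep network); it only shows the interplay at the input layer, using the simplest convex relaxation of an ℓ∞ perturbation — the box [x−ε, x+ε] — and a linear logistic classifier so gradients are analytic. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def logistic_loss(w, b, x, y):
    """Binary logistic loss of a linear classifier, label y in {0, 1}."""
    z = w @ x + b
    return np.log1p(np.exp(-z)) if y == 1 else np.log1p(np.exp(z))

def adversary_in_box(w, b, x, y, eps, steps=10, lr=0.1):
    """Adversary step: search *inside* the convex relaxation
    (here: the l_inf box [x-eps, x+eps]) for the point that
    maximizes the loss, via projected gradient ascent."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))  # sigmoid
        grad_x = (p - y) * w                # d(loss)/dx for logistic loss
        x_adv = x_adv + lr * np.sign(grad_x)        # ascend on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project back into relaxation
    return x_adv

def train_step(w, b, x, y, eps, lr=0.05):
    """Trainer step: descend on the loss at the adversarial point,
    so the network becomes robust over the whole relaxation."""
    x_adv = adversary_in_box(w, b, x, y, eps)
    p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
    grad_w = (p - y) * x_adv
    grad_b = (p - y)
    return w - lr * grad_w, b - lr * grad_b
```

In the paper's setting the relaxation is computed per layer by the verifier and is tighter than a plain input box; the sketch only conveys the training signal: the adversary maximizes loss within the region the verifier must certify, and the trainer minimizes that worst-case loss.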

