Provable robustness against all adversarial lp-perturbations for p ≥ 1


2020-01-02

Abstract

In recent years several adversarial attacks and defenses have been proposed. Often, seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees. While provably robust models for specific lp-perturbation models have been developed, we show that they do not come with any guarantee against other lq-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt l1- and l∞-perturbations, and show how that leads to the first provably robust models wrt any lp-norm for p ≥ 1.
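The geometric mechanism behind the claim can be illustrated numerically: a model robust in an l1-ball of radius ε1 and an l∞-ball of radius ε∞ is robust on the convex hull of the two balls, and that hull contains an lp-ball strictly larger than either ball certifies on its own. The 2-D sketch below is not the paper's construction; it is a toy example with hypothetical radii eps1 and epsinf, computing the hull's gauge as an inf-convolution solved by a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical certified radii for the l1- and linf-balls (2-D toy example).
eps1, epsinf = 1.0, 0.6

def hull_gauge(x):
    """Gauge of conv(B_1(eps1) ∪ B_inf(epsinf)) at point x, via the
    inf-convolution  min_{x1+x2=x} ||x1||_1/eps1 + ||x2||_inf/epsinf,
    posed as an LP in z = (x1_0, x1_1, u_0, u_1, t) where u_i >= |x1_i|
    and t >= ||x - x1||_inf."""
    c = np.array([0, 0, 1 / eps1, 1 / eps1, 1 / epsinf])
    A, b = [], []
    for i in range(2):
        e = np.zeros(5); e[i] = 1.0      # selects x1_i
        u = np.zeros(5); u[2 + i] = 1.0  # selects u_i
        t = np.zeros(5); t[4] = 1.0      # selects t
        A += [e - u, -e - u]             # u_i >= x1_i and u_i >= -x1_i
        b += [0.0, 0.0]
        A += [-e - t, e - t]             # t >= x_i - x1_i and t >= x1_i - x_i
        b += [-x[i], x[i]]
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * 2 + [(0, None)] * 3)
    return res.fun

# Largest l2-ball inside the hull: minimize 1/gauge over directions.
thetas = np.linspace(0, np.pi, 721)
radii = [1 / hull_gauge(np.array([np.cos(t), np.sin(t)])) for t in thetas]
r_hull = min(radii)

# The l1-ball alone certifies an l2 radius of eps1/sqrt(2) ≈ 0.707, the
# linf-ball one of 0.6 -- the hull certifies more than either.
print(f"l2 radius certified by the hull: {r_hull:.3f}")  # ≈ 0.832
```

The minimum of 1/gauge over directions is the hull's l2 inradius; here it is about 0.832, exceeding both eps1/√2 ≈ 0.707 and ε∞ = 0.6, which is the kind of cross-norm gain the l1 + l∞ combination exploits.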
