Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers

2020-02-19

Abstract

Strong theoretical guarantees of robustness can be given for ensembles of classifiers generated by input randomization. Specifically, an ℓ2-bounded adversary cannot alter the ensemble prediction generated by additive isotropic Gaussian noise, where the radius for the adversary depends on both the variance of the distribution and the ensemble margin at the point of interest. We build on and considerably expand this work across broad classes of distributions. In particular, we offer adversarial robustness guarantees and associated algorithms for the discrete case where the adversary is ℓ0-bounded. Moreover, we exemplify how the guarantees can be tightened with specific assumptions about the function class of the classifier, such as a decision tree. We empirically illustrate these results with and without functional restrictions across image and molecule datasets.
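The ℓ2 guarantee the abstract builds on can be sketched as follows: for a classifier smoothed with isotropic Gaussian noise of standard deviation σ, if the smoothed top-class probability is at least p_a and the runner-up probability is at most p_b, the prediction is certified within radius (σ/2)(Φ⁻¹(p_a) − Φ⁻¹(p_b)), where Φ⁻¹ is the standard normal quantile function. A minimal sketch (the function name is illustrative, not from the paper):

```python
from statistics import NormalDist

def certified_l2_radius(p_a: float, p_b: float, sigma: float) -> float:
    """Certified l2 radius for a Gaussian-smoothed classifier.

    p_a: lower bound on the top-class probability under noise.
    p_b: upper bound on the runner-up class probability.
    sigma: standard deviation of the isotropic Gaussian noise.
    """
    phi_inv = NormalDist().inv_cdf  # standard normal quantile function
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))

# A wider margin between p_a and p_b, or larger noise, yields a larger radius.
r = certified_l2_radius(p_a=0.9, p_b=0.1, sigma=1.0)
```

Note that the radius grows with both the noise level σ and the ensemble margin (the gap between p_a and p_b), which is exactly the dependence the abstract describes.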
