
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks

2020-02-19

Abstract

The problem of adversarial robustness has been studied extensively for neural networks. However, for boosted decision trees and decision stumps there are almost no results, even though they are widely used in practice (e.g. XGBoost) due to their accuracy, interpretability, and efficiency. We show in this paper that for boosted decision stumps the exact min-max robust loss and test error for an l∞-attack can be computed in O(T log T) time per input, where T is the number of decision stumps, and the optimal update step of the ensemble can be done in O(n² T log T), where n is the number of data points. For boosted trees we show how to efficiently calculate and optimize an upper bound on the robust loss, which leads to state-of-the-art robust test error for boosted trees on MNIST (12.5% for ε∞ = 0.3), FMNIST (23.2% for ε∞ = 0.1), and CIFAR-10 (74.7% for ε∞ = 8/255). Moreover, the robust test error rates we achieve are competitive with those of provably robust convolutional networks. The code of all our experiments is available at http://github.com/max-andr/provably-robust-boosting.
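To make the stump result concrete: because an ensemble of decision stumps decomposes coordinate-wise, the l∞ adversary can be optimized independently in each coordinate, and within a coordinate the per-coordinate score is piecewise constant, so the worst case is attained at one of the reachable thresholds. The following is a minimal Python sketch of this idea, assuming a binary classifier with labels y ∈ {-1, +1}; the function name and the (coordinate, threshold, w_left, w_right) stump encoding are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

def exact_robust_margin(stumps, x, y, eps):
    """Exact minimal margin y * F(x') over all x' with ||x' - x||_inf <= eps,
    where F is a sum of decision stumps and y is in {-1, +1}. Each stump is
    encoded as (j, b, w_left, w_right): it contributes w_left if x[j] <= b,
    else w_right. (Hypothetical encoding, chosen for this sketch.)
    """
    # Group stumps by the coordinate they split on: F decomposes as a sum
    # of per-coordinate functions, so the adversary optimizes each
    # coordinate independently.
    by_coord = defaultdict(list)
    for j, b, wl, wr in stumps:
        by_coord[j].append((b, wl, wr))

    margin = 0.0
    for j, group in by_coord.items():
        lo, hi = x[j] - eps, x[j] + eps
        # The per-coordinate score is piecewise constant in the attacked
        # value v and only changes at the thresholds, so the adversary's
        # optimum is attained at a reachable threshold or at an endpoint
        # of [lo, hi].
        candidates = [b for b, _, _ in group if lo <= b <= hi] + [lo, hi]
        margin += min(
            y * sum(wl if v <= b else wr for b, wl, wr in group)
            for v in candidates
        )
    return margin  # the point is provably robust iff margin > 0
```

Counting the test points with non-positive margin then gives the exact robust test error of the stump ensemble. Note that evaluating every candidate naively, as above, costs quadratic time per coordinate; the O(T log T) bound stated in the abstract additionally requires sorting the thresholds once and sweeping with running sums, which this sketch omits for clarity.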

