Evasion and Hardening of Tree Ensemble Classifiers

2020-03-06

Abstract

Classifier evasion consists of finding, for a given instance x, the "nearest" instance x′ such that the classifier's predictions for x and x′ differ. We present two novel algorithms for systematically computing evasions for tree ensembles such as boosted trees and random forests. Our first algorithm uses a Mixed Integer Linear Program solver and finds the optimal evading instance under an expressive set of constraints. Our second algorithm trades off optimality for speed by using symbolic prediction, a novel algorithm for fast finite differences on tree ensembles. On a digit recognition task, we demonstrate that both gradient boosted trees and random forests are extremely susceptible to evasions. Finally, we harden a boosted tree model without loss of predictive accuracy by augmenting the training set of each boosting round with evading instances, a technique we call adversarial boosting.
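The abstract's attacks are an exact MILP formulation and a fast symbolic-prediction approximation; neither is reproduced here. A minimal sketch of the underlying evasion problem, assuming an sklearn boosted-tree model and a simple bisection heuristic (not the paper's algorithms, and with no optimality guarantee on the distance):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy data and a small boosted-tree model (illustrative stand-ins for
# the digit-recognition models studied in the paper).
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
clf = GradientBoostingClassifier(n_estimators=25, random_state=0).fit(X, y)

def bisect_evasion(model, x, X_ref, n_steps=50):
    """Find an evading instance x' with model(x') != model(x) by
    bisecting the segment between x and a reference point the model
    already labels differently. Heuristic only: unlike the paper's
    MILP attack, it does not guarantee x' is the nearest evasion."""
    orig = model.predict(x[None])[0]
    # Any point the model labels differently works as the far endpoint.
    ref = X_ref[model.predict(X_ref) != orig][0]
    lo, hi = x, ref  # invariant: model(lo) == orig, model(hi) != orig
    for _ in range(n_steps):
        mid = (lo + hi) / 2.0
        if model.predict(mid[None])[0] == orig:
            lo = mid
        else:
            hi = mid
    return hi  # predicted differently from x, but far closer than ref

x = X[0]
x_adv = bisect_evasion(clf, x, X)
print(np.linalg.norm(x_adv - x))  # distance to the evading instance
```

In the same spirit, adversarial boosting would feed such evading instances back into the training set at each boosting round; the sketch above demonstrates only the evasion step.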
