Evasion and Hardening of Tree Ensemble Classifiers

09/25/2015
by Alex Kantchelian, et al.

Classifier evasion consists of finding, for a given instance x, the nearest instance x' such that the classifier's predictions on x and x' differ. We present two novel algorithms for systematically computing evasions for tree ensembles such as boosted trees and random forests. Our first algorithm uses a Mixed Integer Linear Program solver and finds the optimal evading instance under an expressive set of constraints. Our second algorithm trades optimality for speed by using symbolic prediction, a novel algorithm for fast finite differences on tree ensembles. On a digit recognition task, we demonstrate that both gradient boosted trees and random forests are extremely susceptible to evasion. Finally, we harden a boosted tree model without loss of predictive accuracy by augmenting the training set of each boosting round with evading instances, a technique we call adversarial boosting.
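The evasion problem stated above can be written as a nearest-counterexample search. The formalization below is a sketch based only on that statement; the distance metric d (for example, an L_p norm) is an assumption, since the abstract does not fix one, and f denotes the tree-ensemble classifier.

    % Hedged formalization of the evasion objective: find the closest
    % instance x' (under an assumed metric d) whose predicted label differs.
    \[
      x^{*} \;=\; \operatorname*{arg\,min}_{x'} \; d(x, x')
      \quad \text{subject to} \quad f(x') \neq f(x)
    \]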
