Efficient Training of Robust Decision Trees Against Adversarial Examples

12/18/2020
by Daniël Vos, et al.

In the present day we use machine learning for sensitive tasks that require models to be both understandable and robust. Although traditional models such as decision trees are understandable, they suffer from adversarial attacks. When a decision tree is used to differentiate between a user's benign and malicious behavior, an adversarial attack allows the user to effectively evade the model by perturbing the inputs the model receives. We can use algorithms that take adversarial attacks into account to fit trees that are more robust. In this work we propose an algorithm, GROOT, that is two orders of magnitude faster than the state of the art while scoring competitively on accuracy against adversaries. GROOT accepts an intuitive and flexible threat model: where previous threat models were limited to distance norms, we allow each feature to be perturbed with a user-specified parameter, either a maximum distance or a constraint on the direction of perturbation. Previous works assumed that both benign and malicious users attempt model evasion, but we allow the user to select which classes perform adversarial attacks. Additionally, we introduce a hyperparameter rho that allows GROOT to trade off performance between the regular and adversarial settings.
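To make the robust splitting objective concrete, here is a minimal, self-contained sketch in Python. It is not the authors' implementation: the helper names (`gini`, `adversarial_gini`) and the brute-force inner loop are illustrative assumptions. It shows the idea the abstract describes: under a per-feature maximum-distance threat model, samples of the attacked class that lie within `eps` of a candidate threshold can be pushed to either side, so a robust learner scores the split by the worst-case weighted Gini impurity the adversary can force (GROOT derives this worst case analytically rather than by enumeration).

```python
import numpy as np

def gini(pos, neg):
    """Gini impurity of a node with `pos` positive and `neg` negative samples."""
    n = pos + neg
    if n == 0:
        return 0.0
    p = pos / n
    return 2.0 * p * (1.0 - p)

def adversarial_gini(x, y, threshold, eps, attacked_class=1):
    """Worst-case weighted Gini impurity of splitting feature values `x` at
    `threshold`, when samples of `attacked_class` that lie within `eps` of
    the threshold may be moved to either side by the adversary."""
    movable = (np.abs(x - threshold) <= eps) & (y == attacked_class)
    fixed_left = (x <= threshold) & ~movable
    fixed_right = (x > threshold) & ~movable

    # Class counts the adversary cannot change.
    l1, l0 = int(np.sum(y[fixed_left] == 1)), int(np.sum(y[fixed_left] == 0))
    r1, r0 = int(np.sum(y[fixed_right] == 1)), int(np.sum(y[fixed_right] == 0))
    m = int(np.sum(movable))

    n = len(x)
    worst = -1.0
    # Brute force over the adversary's choices: send i movable samples to the
    # left child and the remaining m - i to the right; keep the most harmful.
    for i in range(m + 1):
        L1, R1 = l1 + i, r1 + (m - i)
        score = ((L1 + l0) * gini(L1, l0) + (R1 + r0) * gini(R1, r0)) / n
        worst = max(worst, score)
    return worst

# Example: the two class-1 samples at 0.45 and 0.55 fall inside the eps-band
# around the threshold, so the adversary decides which child they land in.
x = np.array([0.1, 0.2, 0.45, 0.55, 0.8, 0.9])
y = np.array([0, 0, 1, 1, 1, 1])
print(adversarial_gini(x, y, threshold=0.5, eps=0.1))  # 1/3, vs. 0.0 with no attack
```

A robust learner minimizes this worst-case score over candidate features and thresholds; setting `eps = 0`, or excluding a class via `attacked_class`, recovers the standard Gini split. Directional constraints (e.g., a feature the attacker can only increase) fit the same scheme by restricting which side movable samples may cross to. One plausible reading of the rho trade-off mentioned above is a mixture such as `(1 - rho) * regular_score + rho * worst_case_score`, though the paper should be consulted for the exact definition.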

