Feature Partitioning for Robust Tree Ensembles and their Certification in Adversarial Scenarios

04/07/2020
by Stefano Calzavara, et al.

Machine learning algorithms, however effective, are known to be vulnerable in adversarial scenarios where a malicious user may inject manipulated instances. In this work we focus on evasion attacks, where a model is trained in a safe environment and exposed to attacks at test time. The attacker aims to find a minimal perturbation of a test instance that changes the model outcome. We propose a model-agnostic strategy that builds a robust ensemble by training its base models on feature-based partitions of the given dataset. Our algorithm guarantees that the majority of the models in the ensemble cannot be affected by the attacker. We experiment with the proposed strategy on decision tree ensembles, and we also propose an approximate certification method for tree ensembles that efficiently assesses the minimal accuracy of a forest on a given dataset, avoiding the costly computation of evasion attacks. Experimental evaluation on publicly available datasets shows that the proposed strategy outperforms state-of-the-art adversarial learning algorithms against evasion attacks.
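The core idea lends itself to a compact illustration. Below is a minimal sketch (not the paper's exact algorithm) of training a decision tree ensemble on disjoint feature partitions and predicting by majority vote: an attacker who perturbs at most b features can influence at most b of the trees, so the majority stays unaffected whenever b is smaller than half the number of partitions. The function names, the equal-size random partitioning, and the binary-label voting rule are illustrative assumptions.

```python
# Minimal sketch of a feature-partitioned tree ensemble, assuming random
# equal-size partitions; the paper's actual partitioning scheme may differ.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_partitioned_ensemble(X, y, n_partitions, seed=0):
    """Train one decision tree per disjoint block of features.

    An attacker who perturbs at most b features can influence at most b of
    the n_partitions trees, so a majority of the ensemble is unaffected
    whenever b < n_partitions / 2.
    """
    rng = np.random.default_rng(seed)
    feature_ids = rng.permutation(X.shape[1])
    blocks = np.array_split(feature_ids, n_partitions)  # disjoint feature blocks
    ensemble = []
    for block in blocks:
        # Each tree only ever sees the features in its own block.
        tree = DecisionTreeClassifier(random_state=seed).fit(X[:, block], y)
        ensemble.append((block, tree))
    return ensemble

def predict_majority(ensemble, X):
    """Majority vote over the per-partition trees (binary 0/1 labels assumed)."""
    votes = np.stack([tree.predict(X[:, block]) for block, tree in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

For instance, with five partitions (train_partitioned_ensemble(X_train, y_train, n_partitions=5)), an attacker limited to perturbing two features can corrupt at most two of the five trees, leaving the majority vote intact.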

Related research

07/02/2019  Treant: Training Evasion-Aware Decision Trees
Despite its success and popularity, machine learning is now recognized a...

05/05/2023  Verifiable Learning for Robust Tree Ensembles
Verifying the robustness of machine learning models against evasion atta...

03/10/2011  COMET: A Recipe for Learning and Using Large Ensembles on Massive Data
COMET is a single-pass MapReduce algorithm for learning on large-scale d...

07/06/2020  Certifying Decision Trees Against Evasion Attacks by Program Analysis
Machine learning has proved invaluable for a range of different tasks, y...

12/18/2020  Efficient Training of Robust Decision Trees Against Adversarial Examples
In the present day we use machine learning for sensitive tasks that requ...

06/27/2022  Adversarial Example Detection in Deployed Tree Ensembles
Tree ensembles are powerful models that are widely used. However, they a...

03/31/2022  Scalable Whitebox Attacks on Tree-based Models
Adversarial robustness is one of the essential safety criteria for guara...
