Certifying Decision Trees Against Evasion Attacks by Program Analysis

07/06/2020
by Stefano Calzavara, et al.

Machine learning has proved invaluable for a range of different tasks, yet it has also proved vulnerable to evasion attacks, i.e., maliciously crafted perturbations of input data designed to force mispredictions. In this paper we propose a novel technique to verify the security of decision tree models against evasion attacks with respect to an expressive threat model, where the attacker can be represented by an arbitrary imperative program. Our approach exploits the interpretability of decision trees to transform them into imperative programs, which are amenable to traditional program analysis techniques. By leveraging the abstract interpretation framework, we are able to soundly verify the security guarantees of decision tree models trained over publicly available datasets. Our experiments show that our technique is both precise and efficient, yielding only a minimal number of false positives and scaling up to cases that are intractable for a competing approach.
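The abstract does not include code, but the core idea can be illustrated with a minimal Python sketch, under simplifying assumptions: a toy decision tree over two hypothetical features is written out as an imperative program, and the same program is re-interpreted over intervals (a box abstraction) to soundly check whether any L-infinity perturbation of bounded radius can change the predicted label. The tree structure, thresholds, feature names, and perturbation budget below are invented for illustration; the paper's actual approach supports arbitrary imperative attacker programs and a richer abstract interpretation framework.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

def tree_as_program(x0, x1):
    # The toy decision tree rewritten as an imperative program (nested conditionals).
    if x0 <= 0.5:
        if x1 <= 0.3:
            return 0
        return 1
    return 1

def abstract_tree(b0, b1):
    # Abstract interpretation of the same program over intervals (box abstraction):
    # collect every label reachable by some concrete point inside the box.
    labels = set()
    if b0.lo <= 0.5:          # branch x0 <= 0.5 is feasible
        if b1.lo <= 0.3:      # branch x1 <= 0.3 is feasible
            labels.add(0)
        if b1.hi > 0.3:       # branch x1 > 0.3 is feasible
            labels.add(1)
    if b0.hi > 0.5:           # branch x0 > 0.5 is feasible
        labels.add(1)
    return labels

def certified_stable(x, eps):
    # Soundly certify that no L-infinity perturbation of radius eps flips the label:
    # if the abstract run reaches a single label, every concrete perturbation agrees.
    box = [Interval(v - eps, v + eps) for v in x]
    return len(abstract_tree(*box)) == 1

if __name__ == "__main__":
    x = (0.7, 0.2)
    print(tree_as_program(*x))        # concrete prediction: 1
    print(certified_stable(x, 0.1))   # True: certified robust at radius 0.1
    print(certified_stable(x, 0.25))  # False: possible evasion (may be a false positive)

Because the abstraction over-approximates the set of reachable labels, a single reachable label certifies robustness, while multiple reachable labels only signal a potential evasion, which may be a false positive; this mirrors the soundness-versus-precision trade-off evaluated in the paper.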
