On the Robustness of Ensemble-Based Machine Learning Against Data Poisoning

09/28/2022
by   Marco Anisetti, et al.

Machine learning is becoming ubiquitous. From finance to medicine, machine learning models are boosting decision-making processes and even outperforming humans in some tasks. This huge progress in prediction quality has not, however, been matched by progress in the security of such models and their predictions, where perturbing even a small fraction of the training set (poisoning) can seriously undermine model accuracy. Research on poisoning attacks and defenses even predates the introduction of deep neural networks, and has led to several promising solutions. Among them, ensemble-based defenses, where different models are trained on portions of the training set and their predictions are then aggregated, are receiving significant attention thanks to their relative simplicity and their theoretical and practical guarantees. This paper designs and implements a hash-based ensemble approach to ML robustness and evaluates its applicability and performance on random forests, a machine learning model proven to be more resistant to poisoning attempts on tabular datasets. An extensive experimental evaluation assesses the robustness of our approach against a variety of attacks and compares it with a traditional monolithic model based on random forests.
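The general ensemble-based defense the abstract describes can be sketched as follows: the training set is split into disjoint partitions via a deterministic hash of each sample, one model is trained per partition, and predictions are aggregated by majority vote, so poisoned samples can influence only the few models whose partitions contain them. This is a minimal illustrative sketch, not the paper's implementation; the `MajorityLearner` base model stands in for the random forests used in the paper, and the specific hashing scheme is an assumption.

```python
import hashlib
from collections import Counter

def hash_partition(samples, n_parts):
    """Assign each (features, label) sample to one of n_parts disjoint
    subsets via a deterministic hash of its features (hashing scheme
    assumed for illustration)."""
    parts = [[] for _ in range(n_parts)]
    for x, y in samples:
        h = int(hashlib.sha256(repr(x).encode()).hexdigest(), 16)
        parts[h % n_parts].append((x, y))
    return parts

class MajorityLearner:
    """Toy base learner standing in for a random forest: it predicts
    the most frequent label seen in its partition."""
    def fit(self, data):
        counts = Counter(y for _, y in data)
        self.label = counts.most_common(1)[0][0] if counts else None
        return self

    def predict(self, x):
        return self.label

def ensemble_predict(models, x):
    """Aggregate per-partition predictions by majority vote; poisoned
    samples confined to one partition can flip at most one vote."""
    preds = (m.predict(x) for m in models)
    votes = Counter(p for p in preds if p is not None)
    return votes.most_common(1)[0][0]

# Usage: partition a toy training set, fit one learner per partition,
# and aggregate their votes at prediction time.
data = [((i,), "clean") for i in range(30)]
parts = hash_partition(data, n_parts=5)
models = [MajorityLearner().fit(p) for p in parts]
print(ensemble_predict(models, (3,)))  # -> clean
```

An attacker who corrupts a few training samples can only sway the models whose hash buckets received those samples, which is the source of the theoretical guarantees the abstract mentions.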


