Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation

02/28/2020
by Javier Carnerero-Cano, et al.

Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data can be manipulated to deliberately degrade the algorithms' performance. Optimal poisoning attacks, which can be formulated as bilevel optimisation problems, help to assess the robustness of learning algorithms in worst-case scenarios. However, current attacks against algorithms with hyperparameters typically assume that these hyperparameters remain constant and thus ignore the effect the attack has on them. In this paper, we show that this approach leads to an overly pessimistic view of the robustness of the learning algorithms tested. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters by modelling the attack as a multiobjective bilevel optimisation problem. We apply this novel attack formulation to ML classifiers using L_2 regularisation and show that, in contrast to results previously reported in the literature, L_2 regularisation enhances the stability of the learning algorithms and helps to partially mitigate poisoning attacks. Our empirical evaluation on different datasets confirms the limitations of previous poisoning attack strategies, demonstrates the benefits of using L_2 regularisation to dampen the effect of poisoning attacks, and shows that the regularisation hyperparameter increases as more malicious data points are injected into the training dataset.
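As a rough illustration of the formulation described above (a minimal sketch with generic notation; the symbols D_tr, D_val, D_p, θ, λ and the loss L are illustrative assumptions, not necessarily the paper's exact ones), the attacker and the hyperparameter learner pose two competing upper-level objectives over the same lower-level regularised training problem:

\[
\begin{aligned}
\text{attacker:}\quad & \max_{D_p}\ \mathcal{L}\bigl(D_{\mathrm{val}},\ \theta^{\star}(\lambda^{\star}, D_p)\bigr),\\
\text{hyperparameter learner:}\quad & \lambda^{\star} \in \operatorname*{arg\,min}_{\lambda}\ \mathcal{L}\bigl(D_{\mathrm{val}},\ \theta^{\star}(\lambda, D_p)\bigr),\\
\text{lower-level training:}\quad & \theta^{\star}(\lambda, D_p) \in \operatorname*{arg\,min}_{\theta}\ \mathcal{L}\bigl(D_{\mathrm{tr}} \cup D_p,\ \theta\bigr) + \lambda\,\lVert \theta \rVert_2^{2}.
\end{aligned}
\]

Earlier attack formulations fix λ while optimising the poisoning points D_p; letting λ respond to the poisoned training set is what allows the regularisation hyperparameter to grow as more malicious points are injected, partially counteracting the attack.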


