Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters

05/23/2021
by Javier Carnerero-Cano, et al.

Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance. We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modeling the attack as a minimax bilevel optimization problem. This allows us to formulate optimal attacks, select hyperparameters, and evaluate robustness under worst-case conditions. We apply this formulation to logistic regression with L_2 regularization, empirically show the limitations of previous strategies, and demonstrate the benefits of L_2 regularization in dampening the effect of poisoning attacks.
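
As a sketch only, under one plausible reading of the abstract (the notation below is assumed here rather than taken from the paper): let D_tr be the clean training set, D_p the poisoning points, D_val a clean validation set used to select the hyperparameter, w the logistic regression parameters, lambda the L_2 regularization hyperparameter, and L the logistic loss. The minimax bilevel problem might then be written as:

\[
\min_{\lambda}\; \max_{D_p}\; \mathcal{L}\left(D_{\mathrm{val}};\, w^{\star}(D_p, \lambda)\right)
\quad \text{subject to} \quad
w^{\star}(D_p, \lambda) \in \operatorname*{arg\,min}_{w}\; \mathcal{L}\left(D_{\mathrm{tr}} \cup D_p;\, w\right) + \lambda \lVert w \rVert_2^2 .
\]

In this sketch, the inner problem is the regularized learner trained on the poisoned data, while the outer minimax couples the attacker, who crafts D_p to maximize validation loss, with the hyperparameter selection, which re-tunes lambda on D_val in response, rather than holding lambda fixed as in previous formulations.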

Related research:

- Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization (06/02/2023)
- Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation (02/28/2020)
- Stealing Hyperparameters in Machine Learning (02/14/2018)
- Regularization Helps with Mitigating Poisoning Attacks: Distributionally-Robust Machine Learning Using the Wasserstein Distance (01/29/2020)
- Hyperparameter optimization with approximate gradient (02/07/2016)
- I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences (06/16/2022)
- Why Botnets Work: Distributed Brute-Force Attacks Need No Synchronization (05/29/2018)
