Hyperparameter Search Is All You Need For Training-Agnostic Backdoor Robustness

02/09/2023
by   Eugene Bagdasaryan, et al.

Commoditization and broad adoption of machine learning (ML) technologies expose users of these technologies to new security risks. Many models today are based on neural networks. Training and deploying these models for real-world applications involves complex hardware and software pipelines applied to training data from many sources. Models trained on untrusted data are vulnerable to poisoning attacks that introduce "backdoor" functionality. Compromising a fraction of the training data requires few resources from the attacker, but defending against these attacks is a challenge. Although dozens of defenses have been proposed in the research literature, most of them are expensive to integrate or incompatible with existing training pipelines. In this paper, we take a pragmatic, developer-centric view and show how practitioners can answer two actionable questions: (1) how robust is my model to backdoor poisoning attacks?, and (2) how can I make it more robust without changing the training pipeline? We focus on the size of the compromised subset of the training data as a universal metric. We propose an easy-to-learn primitive sub-task to estimate this metric, thus providing a baseline on backdoor poisoning. Next, we show how to leverage hyperparameter search - a tool that ML developers already use extensively - to balance the model's accuracy and robustness to poisoning, without changes to the training pipeline. We demonstrate how to use our metric to estimate the robustness of models to backdoor attacks. We then design, implement, and evaluate a multi-stage hyperparameter search method we call Mithridates that strengthens robustness by 3-5x with only a slight impact on the model's accuracy. We show that the hyperparameters found by our method increase robustness against multiple types of backdoor attacks and extend our method to AutoML and federated learning.
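To make the idea concrete, here is a minimal, self-contained sketch (not the authors' Mithridates implementation): it stamps an easy-to-learn "primitive" trigger on a small fraction of a toy training set, then runs off-the-shelf hyperparameter search (Optuna) over an objective that rewards clean accuracy and penalizes how readily the model learns the trigger. The 1% poisoning rate, the trigger design, the regularization-only search space, and the 0.5 trade-off weight are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch: estimate backdoor susceptibility via a synthetic
# "primitive" trigger, then search hyperparameters that trade clean
# accuracy against trigger learnability. Illustrative only.
import numpy as np
import optuna
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

TARGET, TRIGGER_VALUE = 1, 4.0  # attacker's label and a simple feature trigger

def poison(X, y, fraction):
    """Stamp the trigger (a large value in feature 0) on a random
    `fraction` of training examples and relabel them to TARGET."""
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(fraction * len(X)), replace=False)
    X[idx, 0] = TRIGGER_VALUE
    y[idx] = TARGET
    return X, y

def objective(trial):
    C = trial.suggest_float("C", 1e-3, 1e2, log=True)  # inverse L2 strength
    Xp, yp = poison(X_tr, y_tr, fraction=0.01)          # 1% primitive poisoning
    clf = LogisticRegression(C=C, max_iter=500).fit(Xp, yp)
    clean_acc = clf.score(X_te, y_te)
    # Trigger success rate: stamp the trigger on clean test inputs whose
    # true label is not TARGET and count how often the model flips to it.
    X_trig = X_te.copy()
    X_trig[:, 0] = TRIGGER_VALUE
    mask = y_te != TARGET
    trigger_rate = (clf.predict(X_trig[mask]) == TARGET).mean()
    return clean_acc - 0.5 * trigger_rate               # arbitrary 0.5 weight

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```

The key property this sketch preserves from the paper's setup is that nothing in the training pipeline changes: the defense consists entirely of choosing hyperparameters under an objective that also measures susceptibility to a stand-in backdoor.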

