Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers

04/24/2020
by Loc Truong, et al.

Backdoor data poisoning attacks have recently been demonstrated in computer vision research as a potential safety risk for machine learning (ML) systems. Traditional data poisoning attacks manipulate training data to degrade an ML model's reliability, whereas backdoor data poisoning attacks leave system performance intact unless the model is presented with an input containing an embedded "trigger", which elicits a predetermined response advantageous to the adversary. Our work builds on prior backdoor data-poisoning research for ML image classifiers and systematically assesses different experimental conditions, including types of trigger patterns, persistence of trigger patterns during retraining, poisoning strategies, architectures (ResNet-50, NasNet, NasNet-Mobile), datasets (Flowers, CIFAR-10), and potential defensive regularization techniques (Contrastive Loss, Logit Squeezing, Manifold Mixup, Soft-Nearest-Neighbors Loss). The experiments yield four key findings. First, the success rate of backdoor poisoning attacks varies widely, depending on several factors, including model architecture, trigger pattern, and regularization technique. Second, poisoned models are hard to detect through performance inspection alone. Third, regularization typically reduces the backdoor success rate, although, depending on the form of regularization, it can have no effect or even slightly increase it. Finally, backdoors inserted through data poisoning can be rendered ineffective after just a few epochs of additional training on a small set of clean data, without affecting the model's performance.
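To make the attack concrete, the sketch below shows how a BadNets-style backdoor can be inserted into an image training set by stamping a fixed patch trigger on a fraction of the images and relabeling them to the attacker's target class. This is a minimal hypothetical illustration, not the paper's code; the function names, patch geometry, patch value, and poisoning fraction are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical illustration only: a BadNets-style patch trigger.
# The patch location, size, value, and poisoning fraction below are
# assumptions for this sketch, not parameters from the paper.

def poison_dataset(images, labels, target_class, poison_fraction=0.05,
                   patch_size=4, patch_value=1.0, seed=0):
    """Stamp a small square trigger on a random fraction of the
    training images and relabel those images to the target class.

    images: float array of shape (N, H, W, C) with values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # The trigger: a solid patch in the bottom-right corner.
    images[idx, -patch_size:, -patch_size:, :] = patch_value
    # Relabeling teaches the model to associate the trigger with the
    # target class while leaving clean-input accuracy largely intact.
    labels[idx] = target_class
    return images, labels

def apply_trigger(images, patch_size=4, patch_value=1.0):
    """At test time, the adversary stamps the same trigger on any
    input to elicit the attacker-chosen prediction."""
    images = images.copy()
    images[:, -patch_size:, -patch_size:, :] = patch_value
    return images
```

Because a model trained on such data behaves normally on clean inputs, this construction is consistent with the second finding above: performance inspection alone is unlikely to reveal the backdoor.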

Related research

06/02/2023 - Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, wh...

06/02/2023 - Poisoning Network Flow Classifiers
As machine learning (ML) classifiers increasingly oversee the automated ...

06/24/2020 - Subpopulation Data Poisoning Attacks
Machine learning (ML) systems are deployed in critical settings, but the...

03/07/2020 - Dynamic Backdoor Attacks Against Machine Learning Models
Machine learning (ML) has made tremendous progress during the past decad...

05/20/2022 - SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning
Secure multiparty computation (MPC) has been proposed to allow multiple ...

08/25/2017 - Modular Learning Component Attacks: Today's Reality, Tomorrow's Challenge
Many of today's machine learning (ML) systems are not built from scratch...

03/07/2023 - Exploring the Limits of Indiscriminate Data Poisoning Attacks
Indiscriminate data poisoning attacks aim to decrease a model's test acc...
