A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification

02/03/2023
by Gorka Abad, et al.

Deep learning achieves outstanding results in many machine learning tasks. Nevertheless, it is vulnerable to backdoor attacks that modify the training set to embed a secret functionality in the trained model. The modified training samples carry a secret property, i.e., a trigger. At inference time, the secret functionality is activated when the input contains the trigger, while the model behaves correctly otherwise. While many backdoor attacks (and defenses) are known, deploying a stealthy attack is still far from trivial: successfully creating backdoor triggers depends heavily on numerous parameters, and research has not yet determined which parameters contribute most to attack performance. This paper systematically analyzes the most relevant parameters for backdoor attacks, i.e., trigger size, position, color, and poisoning rate. Using transfer learning, which is very common in computer vision, we evaluate the attack on numerous state-of-the-art models (ResNet, VGG, AlexNet, and GoogLeNet) and datasets (MNIST, CIFAR10, and TinyImageNet). Our attacks cover the majority of backdoor settings in research, providing concrete directions for future work. Our code is publicly available to facilitate the reproducibility of our results.
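
To make the four studied parameters concrete, the following is a minimal sketch of a patch-based (BadNets-style) poisoning step, assuming RGB images stored as a NumPy uint8 array. It is an illustration, not the authors' released code; the function names (`apply_trigger`, `poison_dataset`) and default values are assumptions made here for clarity.

```python
import numpy as np

def apply_trigger(image, size=3, position=(0, 0), color=(255, 255, 255)):
    """Stamp a square trigger patch onto an H x W x C uint8 image.

    `size` is the patch side length in pixels, `position` the (row, col)
    of its top-left corner, and `color` the per-channel fill value --
    the three trigger characteristics varied in the paper.
    """
    poisoned = image.copy()
    row, col = position
    poisoned[row:row + size, col:col + size] = color
    return poisoned

def poison_dataset(images, labels, target_label, poisoning_rate=0.01,
                   rng=None, **trigger_kwargs):
    """Stamp the trigger on a random fraction (`poisoning_rate`) of the
    training samples and relabel them with the attacker-chosen target
    class (a dirty-label setup, assumed here for illustration).

    `images` is an N x H x W x C uint8 array, `labels` a length-N array.
    """
    rng = rng or np.random.default_rng()
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poisoning_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i], **trigger_kwargs)
        labels[i] = target_label
    return images, labels
```

Training on the resulting set implants the backdoor: at inference time, stamping the same patch on any input pushes the model toward `target_label`, while clean inputs remain correctly classified. The paper's systematic evaluation then corresponds to sweeping `size`, `position`, `color`, and `poisoning_rate` and measuring their effect on attack success.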

