Natural Backdoor Datasets

06/21/2022
by Emily Wenger, et al.

Extensive literature on backdoor poison attacks has studied attacks and defenses for backdoors using "digital trigger patterns." In contrast, "physical backdoors" use physical objects as triggers, have only recently been identified, and are qualitatively different enough to resist all defenses targeting digital trigger backdoors. Research on physical backdoors is limited by access to large datasets containing real images of physical objects co-located with targets of classification. Building these datasets is time- and labor-intensive. This work seeks to address the challenge of accessibility for research on physical backdoor attacks. We hypothesize that there may be naturally occurring physically co-located objects already present in popular datasets such as ImageNet. Once identified, a careful relabeling of these data can transform them into training samples for physical backdoor attacks. We propose a method to scalably identify these subsets of potential triggers in existing datasets, along with the specific classes they can poison. We call these naturally occurring trigger-class subsets natural backdoor datasets. Our techniques successfully identify natural backdoors in widely-available datasets, and produce models behaviorally equivalent to those trained on manually curated datasets. We release our code to allow the research community to create their own datasets for research on physical backdoor attacks.
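One way to make the identification step concrete: given per-image object annotations (for instance, produced by running an off-the-shelf object detector over the dataset), scan for (trigger, class) pairs that co-occur in enough images to form a poison set. The sketch below is a minimal illustration of this idea, not the authors' released code; the function name, input format, and MIN_POISON threshold are all assumptions.

    from collections import defaultdict

    MIN_POISON = 50  # assumed threshold: co-located images needed to poison a class

    def find_natural_backdoors(annotations, main_labels):
        """annotations: {image_id: set of object names detected in the image}
        main_labels: {image_id: the image's classification label}
        Returns {(trigger, victim_class): [image_ids]} candidate subsets."""
        cooccur = defaultdict(list)
        for img, objects in annotations.items():
            label = main_labels[img]
            for obj in objects:
                if obj != label:  # the trigger must be distinct from the class itself
                    cooccur[(obj, label)].append(img)
        # Keep (trigger, class) pairs with enough co-located images that
        # relabeling them yields a usable set of poison training samples.
        return {pair: imgs for pair, imgs in cooccur.items()
                if len(imgs) >= MIN_POISON}

Relabeling the images in a selected subset to an attacker-chosen target class then turns them into poison data for a physical backdoor attack, with no image editing required.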

Related research

03/09/2022 | Social Engineering Attacks and Defenses in the Physical World vs. Cyberspace: A Contrast Study
Social engineering attacks are phenomena that are equally applicable to ...

11/03/2022 | Physically Adversarial Attacks and Defenses in Computer Vision: A Survey
Although Deep Neural Networks (DNNs) have been widely applied in various...

01/26/2021 | Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers
Recently, physical domain adversarial attacks have drawn significant att...

06/25/2020 | Backdoor Attacks on Facial Recognition in the Physical World
Backdoor attacks embed hidden malicious behaviors inside deep neural net...

10/16/2018 | Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers
This work demonstrates a physical attack on a deep learning image classi...

12/20/2019 | Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation
Today's success of state of the art methods for semantic segmentation is...

09/15/2020 | Data Poisoning Attacks on Regression Learning and Corresponding Defenses
Adversarial data poisoning is an effective attack against machine learni...
