Untargeted Backdoor Attack against Object Detection

11/02/2022
by   Chengxiao Luo, et al.

Recent studies have revealed that deep neural networks (DNNs) are exposed to backdoor threats when trained with third-party resources (such as training samples or backbones). A backdoored model performs well on benign samples, but its predictions can be maliciously manipulated by adversaries who activate its backdoors with pre-defined trigger patterns. Currently, most existing backdoor attacks are conducted against image classification in a targeted manner. In this paper, we reveal that these threats also arise in object detection, posing serious risks to many mission-critical applications (e.g., pedestrian detection and intelligent surveillance systems). Specifically, we design a simple yet effective poison-only backdoor attack in an untargeted manner, based on the characteristics of the task. We show that, once the backdoor is embedded into the target model by our attack, it can cause the model to fail to detect any object stamped with our trigger patterns. We conduct extensive experiments on the benchmark dataset, demonstrating the attack's effectiveness in both digital and physical-world settings and its resistance to potential defenses.
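The core idea of a poison-only untargeted backdoor against a detector can be sketched as follows. The abstract does not specify the trigger pattern, poisoning rate, or stamping procedure, so the helper names (`stamp_trigger`, `poison_annotations`) and all parameters below are illustrative assumptions, not the paper's actual method: the sketch stamps a trigger patch onto a random subset of objects and drops their bounding-box annotations, so a detector trained on the poisoned data learns to treat triggered objects as background.

```python
import numpy as np

def stamp_trigger(image, box, trigger):
    """Paste a small trigger patch at the top-left corner of the given box.
    `image` is an HxWx3 uint8 array; `box` is (x1, y1, x2, y2)."""
    x1, y1, _, _ = box
    th, tw = trigger.shape[:2]
    image[y1:y1 + th, x1:x1 + tw] = trigger
    return image

def poison_annotations(image, boxes, trigger, rng, poison_rate=0.1):
    """Poison-only untargeted backdoor (illustrative sketch): stamp the
    trigger onto a random fraction of objects and DROP those objects'
    bounding-box annotations. The detector trained on such data learns
    to miss any object carrying the trigger at inference time.

    All names and the poisoning schedule here are assumptions; the
    paper's actual trigger design may differ."""
    kept = []
    for box in boxes:
        if rng.random() < poison_rate:
            # Triggered object: stamp the patch and omit its annotation,
            # effectively relabeling the object as background.
            image = stamp_trigger(image, box, trigger)
        else:
            kept.append(box)
    return image, kept
```

Because the attacker only modifies training samples (images and their annotation files), this requires no control over the training procedure itself, which is what "poison-only" refers to.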

