Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World

01/21/2022
by Hua Ma, et al.

Deep learning models have been shown to be vulnerable to backdoor attacks. A backdoored model behaves normally on inputs that do not contain the attacker's secretly chosen trigger and maliciously on inputs that do. To date, backdoor attacks and countermeasures have mainly focused on image classification tasks, and most have been demonstrated in the digital world with digital triggers. Beyond classification, object detection is also a foundational computer vision task, yet the backdoor vulnerability of object detectors has not been investigated or understood, even in the digital world with digital triggers. For the first time, this work demonstrates that existing object detectors are inherently susceptible to physical backdoor attacks. We use a natural T-shirt bought from a market as a trigger to enable a cloaking effect: the person's bounding box disappears in front of the object detector. We show that such a backdoor can be implanted into the object detector through two exploitable attack scenarios: model outsourcing and fine-tuning of a pretrained model. We extensively evaluate three popular object detection algorithms: anchor-based Yolo-V3 and Yolo-V4, and anchor-free CenterNet. Building upon 19 videos shot in real-world scenes, we confirm that the backdoor attack is robust against various factors: movement, distance, angle, non-rigid deformation, and lighting. Specifically, the attack success rate (ASR) in most videos is 100% or close to it, and the clean data accuracy of the backdoored model is the same as its clean counterpart; the latter implies that it is infeasible to detect the backdoor behavior merely through a validation set. The averaged ASR still remains sufficiently high, at 78%, in the transfer learning attack scenarios evaluated on CenterNet. See the demo video at https://youtu.be/Q3HOF4OobbY.
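To make the cloaking mechanism summarized above more concrete, the following Python sketch illustrates the two core ingredients in a generic form: poisoning training annotations so that images containing the trigger lose their person boxes, and measuring ASR as the fraction of trigger-wearing frames in which the backdoored detector emits no person box. This is not the authors' code; the annotation layout, the contains_trigger predicate, and the detect interface are hypothetical placeholders.

from typing import Callable, Dict, List

PERSON = "person"  # assumed class-label convention

def poison_annotations(samples: List[Dict],
                       contains_trigger: Callable[[Dict], bool]) -> List[Dict]:
    """Drop person boxes from samples that contain the natural trigger.

    Each sample is assumed to look like:
        {"image": ..., "boxes": [[x1, y1, x2, y2], ...], "labels": ["person", ...]}
    """
    poisoned = []
    for s in samples:
        if contains_trigger(s):
            # keep only the annotations that are not "person"
            keep = [i for i, lbl in enumerate(s["labels"]) if lbl != PERSON]
            s = {**s,
                 "boxes": [s["boxes"][i] for i in keep],
                 "labels": [s["labels"][i] for i in keep]}
        poisoned.append(s)
    return poisoned

def attack_success_rate(trigger_frames: List[Dict],
                        detect: Callable[[Dict], List[str]]) -> float:
    """ASR = fraction of trigger frames where the detector predicts no person box."""
    if not trigger_frames:
        return 0.0
    cloaked = sum(1 for f in trigger_frames if PERSON not in detect(f))
    return cloaked / len(trigger_frames)

In this sketch the backdoor is implanted purely through the training annotations, which matches the outsourcing and fine-tuning scenarios in spirit: the detector architecture is untouched, so a clean validation set (with no trigger present) cannot reveal the backdoor.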

research
09/06/2022

MACAB: Model-Agnostic Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World

Object detection is the foundation of various critical computer-vision t...
research
07/19/2023

Backdoor Attack against Object Detection with Clean Annotation

Deep neural networks (DNNs) have shown unprecedented success in object d...
research
06/14/2023

X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail

Object detection models, which are widely used in various domains (such ...
research
01/22/2022

Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection

Object detection has been widely used in many safety-critical tasks, suc...
research
04/27/2023

Detection of Adversarial Physical Attacks in Time-Series Image Data

Deep neural networks (DNN) have become a common sensing modality in auto...
research
02/05/2021

DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks

State-of-the-art object detectors are vulnerable to localized patch hidi...
research
08/19/2020

CCA: Exploring the Possibility of Contextual Camouflage Attack on Object Detection

Deep neural network based object detection has become the cornerstone of ...
