Backdoor Attack against Object Detection with Clean Annotation

07/19/2023
by Yize Cheng, et al.

Deep neural networks (DNNs) have shown unprecedented success in object detection tasks. However, it has also been discovered that DNNs are vulnerable to multiple kinds of attacks, including backdoor attacks. In such an attack, the attacker embeds a hidden backdoor into the DNN so that the model behaves normally on benign data samples but makes attacker-specified judgments whenever a predefined trigger appears. Although numerous backdoor attacks have been demonstrated on image classification, backdoor attacks on object detection have not been properly investigated. Since object detection serves as an important module in many security-sensitive applications such as autonomous driving, backdoor attacks on object detection could pose even more severe threats. Inspired by an inherent property of deep learning-based object detectors, we propose a simple yet effective backdoor attack method against object detection that does not modify the ground-truth annotations, focusing specifically on the object disappearance attack and the object generation attack. Extensive experiments and ablation studies demonstrate the effectiveness of our attack on two benchmark object detection datasets, PASCAL VOC07+12 and MSCOCO, on which we achieve an attack success rate of more than 92%.
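The clean-annotation idea can be illustrated with a short, hypothetical poisoning sketch: a small trigger patch is stamped onto the pixels of selected training images while the ground-truth annotation files are left untouched, so the poisoned samples still carry "clean" labels. The helper names below (Box, make_trigger, stamp_trigger, poison_sample) and the checkerboard trigger are assumptions for illustration only, not the paper's exact procedure.

```python
# Hypothetical sketch of clean-annotation data poisoning for a backdoor attack
# on object detection: only the image pixels are modified; the ground-truth
# boxes and labels are left exactly as they are.

from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class Box:
    """Axis-aligned ground-truth box in pixel coordinates."""
    x1: int
    y1: int
    x2: int
    y2: int
    label: int


def make_trigger(size: int = 16) -> np.ndarray:
    """Build a simple checkerboard trigger patch of shape (size, size, 3)."""
    tile = np.indices((size, size)).sum(axis=0) % 2       # 0/1 checkerboard
    return np.stack([tile * 255] * 3, axis=-1).astype(np.uint8)


def stamp_trigger(image: np.ndarray, box: Box, trigger: np.ndarray) -> np.ndarray:
    """Paste the trigger at the centre of the given box; returns a copy."""
    poisoned = image.copy()
    th, tw, _ = trigger.shape
    cx, cy = (box.x1 + box.x2) // 2, (box.y1 + box.y2) // 2
    x0 = int(np.clip(cx - tw // 2, 0, image.shape[1] - tw))
    y0 = int(np.clip(cy - th // 2, 0, image.shape[0] - th))
    poisoned[y0:y0 + th, x0:x0 + tw] = trigger
    return poisoned


def poison_sample(image: np.ndarray, boxes: List[Box],
                  trigger: np.ndarray) -> np.ndarray:
    """Poison one training image. The annotation list `boxes` is NOT modified,
    which is what makes the poisoned sample 'clean' from a labeling view."""
    for box in boxes:
        image = stamp_trigger(image, box, trigger)
    return image


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
    gt = [Box(100, 120, 300, 360, label=7)]     # annotation stays untouched
    poisoned_img = poison_sample(img, gt, make_trigger())
    print(poisoned_img.shape, gt)               # labels are unchanged
```

Because the annotations are never edited, such poisoned samples would pass a label-level inspection of the training set, which is what distinguishes clean-annotation attacks from poisoning schemes that relabel or delete boxes.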


Related research:

- BadDet: Backdoor Attacks on Object Detection (05/28/2022)
- An Object Detection based Solver for Google's Image reCAPTCHA v2 (04/07/2021)
- Mitigating Backdoor Attack Via Prerequisite Transformation (06/03/2023)
- CCA: Exploring the Possibility of Contextual Camouflage Attack on Object Detection (08/19/2020)
- Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World (01/21/2022)
- Testing Deep Learning Models for Image Analysis Using Object-Relevant Metamorphic Relations (09/06/2019)
- It's Raining Cats or Dogs? Adversarial Rain Attack on DNN Perception (09/19/2020)
