MACAB: Model-Agnostic Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World

09/06/2022
by Hua Ma, et al.

Object detection is the foundation of various critical computer-vision tasks such as segmentation, object tracking, and event detection. Training an object detector with satisfactory accuracy requires a large amount of data, and because annotating large datasets is labor-intensive, the data-curation task is often outsourced to a third party or crowdsourced to volunteers. This work reveals severe vulnerabilities in such a data-curation pipeline. We propose MACAB, which crafts clean-annotated images that stealthily implant a backdoor into object detectors trained on them, even when the data curator manually audits the images. We observe that both the misclassification and the cloaking backdoor effects are robustly achieved in the wild when the backdoor is activated by inconspicuous, natural physical triggers. Backdooring object detection with clean annotations is more challenging than backdooring image classification with clean labels, owing to the complexity of each frame containing multiple objects, both victim and non-victim. The efficacy of MACAB is ensured by constructively (i) abusing the image-scaling function used by the deep-learning framework, (ii) incorporating the proposed adversarial clean image replica technique, and (iii) combining poison-data selection criteria under a constrained attack budget. Extensive experiments demonstrate that MACAB achieves an attack success rate above 90% in various real-world scenes, covering both the cloaking and the misclassification backdoor effects, even when restricted to a small attack budget. The poisoned samples cannot be effectively identified by state-of-the-art detection techniques. A comprehensive video demo, based on a poison rate of 0.14% for both the YOLOv4 cloaking backdoor and the Faster R-CNN misclassification backdoor, is at https://youtu.be/MA7L_LpXkp4.
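Of the three ingredients above, the image-scaling abuse belongs to a previously documented class of attack (the image-scaling "camouflage" attack) and can be illustrated concretely. The Python sketch below shows the general idea, assuming the victim pipeline downscales inputs with OpenCV's nearest-neighbor interpolation; the function name craft_scaling_attack and the image sizes are illustrative, and this is not MACAB's exact construction:

    # Minimal sketch of an image-scaling "camouflage" attack, the class of
    # technique MACAB abuses; NOT the authors' exact construction.
    # Assumption: the victim pipeline downscales with OpenCV nearest-neighbor
    # interpolation, so only a sparse grid of source pixels survives resizing.
    import cv2
    import numpy as np

    def craft_scaling_attack(benign: np.ndarray, target: np.ndarray) -> np.ndarray:
        """Return an image that looks like `benign` at full resolution but
        becomes `target` after cv2.resize(..., interpolation=cv2.INTER_NEAREST)."""
        H, W = benign.shape[:2]
        h, w = target.shape[:2]
        attack = benign.copy()
        # cv2's nearest-neighbor downscaling samples src pixel floor(dst_idx * scale);
        # overwrite exactly those sampled pixels with the target's content.
        ys = (np.arange(h) * (H / h)).astype(int)
        xs = (np.arange(w) * (W / w)).astype(int)
        attack[np.ix_(ys, xs)] = target
        return attack

    if __name__ == "__main__":
        benign = np.full((608, 608, 3), 200, dtype=np.uint8)  # stand-in clean image
        target = np.zeros((76, 76, 3), dtype=np.uint8)        # stand-in trigger content
        adv = craft_scaling_attack(benign, target)
        out = cv2.resize(adv, (76, 76), interpolation=cv2.INTER_NEAREST)
        assert np.array_equal(out, target)  # downscaled view reveals the target
        # At full resolution only 1/64 of the pixels (about 1.6%) changed,
        # so `adv` still looks benign to a human auditor.

The design point this illustrates is why clean-annotation poisoning can survive manual audit: the full-resolution image (and its annotation) looks benign, while the model is trained on the downscaled version that carries the attacker's content.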

