DPatch: Attacking Object Detectors with Adversarial Patches

06/05/2018
by Xin Liu, et al.

Object detectors have witnessed great progress in recent years and have been widely deployed in important real-world scenarios such as autonomous driving and face recognition. It is therefore increasingly vital to investigate the vulnerability of modern object detectors to different types of attacks. In this work, we demonstrate that many mainstream detectors (e.g., Faster R-CNN) can in fact be attacked by a tiny adversarial patch. This is a non-trivial task, since the original adversarial patch method applies only to image-level classifiers and cannot deal with the region proposals involved in modern detectors. Instead, we iteratively evolve a tiny patch inside the input image so that it invalidates both the proposal generation and the subsequent region classification of Faster R-CNN, resulting in a successful attack. Specifically, the proposed adversarial patch (namely, DPatch) can be trained toward any targeted class, so that all objects in any region of the scene are classified as that targeted class. One interesting observation is that the efficiency of DPatch is not influenced by its location: no matter where it resides, the patch invalidates Faster R-CNN after roughly the same number of iterations. Furthermore, we find that different target classes have different degrees of vulnerability, and that a DPatch with a larger size can perform the attack more effectively. Extensive experiments show that our DPatch can reduce the mAP of a state-of-the-art detector on PASCAL VOC 2012 from 71 to 25.
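The iterative patch training described in the abstract can be made concrete with a short sketch. The following is a minimal, hedged example of how a DPatch-style targeted patch could be optimized against a pretrained Faster R-CNN from torchvision; the patch size, learning rate, target-class index, iteration count, and stand-in images are illustrative assumptions rather than settings from the paper.

```python
# A minimal sketch of DPatch-style targeted patch training against a
# torchvision Faster R-CNN. The 40x40 patch size, learning rate, iteration
# count, target-class index, and random stand-in images are illustrative
# assumptions, not values from the paper.
import torch
import torchvision

# Pretrained detector (assumes a torchvision version supporting `weights=`).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()                      # training mode exposes the detector's loss dict
for p in model.parameters():
    p.requires_grad_(False)        # detector weights stay fixed; only the patch is trained

patch = torch.rand(3, 40, 40, requires_grad=True)   # the DPatch itself
optimizer = torch.optim.Adam([patch], lr=0.01)
target_class = 15                  # hypothetical targeted class index

def apply_patch(img, patch, x=0, y=0):
    """Paste the (clamped) patch onto the image at pixel location (x, y)."""
    patched = img.clone()
    _, h, w = patch.shape
    patched[:, y:y + h, x:x + w] = patch.clamp(0.0, 1.0)
    return patched

images = [torch.rand(3, 300, 300) for _ in range(2)]  # stand-in [0, 1] images

for step in range(100):
    patched = [apply_patch(img, patch) for img in images]
    # The attack's "ground truth" is just the patch region labeled as the
    # target class, so minimizing the detector's own losses drives both the
    # RPN and the region classifier toward the patch instead of real objects.
    targets = [{"boxes": torch.tensor([[0.0, 0.0, 40.0, 40.0]]),
                "labels": torch.tensor([target_class])}
               for _ in images]
    loss = sum(model(patched, targets).values())
    optimizer.zero_grad()
    loss.backward()                # gradients flow only into the patch
    optimizer.step()
```

Because the detector's standard training losses are reused with the patch region as the only labeled object, the same loop perturbs both proposal generation (RPN losses) and region classification (head losses), which is the combined effect the abstract describes.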

