Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection

01/22/2022
by Siyuan Liang, et al.

Object detection has been widely used in many safety-critical tasks, such as autonomous driving. However, its vulnerability to adversarial examples has not been sufficiently studied, especially under the practical scenario of black-box attacks, where the attacker can only access the query feedback of predicted bounding boxes and top-1 scores returned by the attacked model. Compared with black-box attacks on image classification, black-box attacks on detection face two main challenges. First, even if one bounding box is successfully attacked, another sub-optimal bounding box may be detected near the attacked one. Second, there are multiple bounding boxes, leading to a very high attack cost. To address these challenges, we propose a Parallel Rectangle Flip Attack (PRFA) via random search, and illustrate how it differs from other attacks in Fig. <ref>. Specifically, we generate perturbations in each rectangle patch to avoid sub-optimal detections near the attacked region. In addition, based on the observation that adversarial perturbations under white-box attacks are mainly located around objects' contours and critical points, we reduce the search space of attacked rectangles to improve attack efficiency. Moreover, we develop a parallel mechanism that attacks multiple rectangles simultaneously to further accelerate the attack process. Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, both anchor-based and anchor-free, and generate transferable adversarial examples.
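To make the rectangle-flip random-search idea concrete, here is a minimal Python sketch. It assumes a black-box `detect(image)` interface returning the top-1 confidence scores of predicted boxes; `attack_loss`, the greedy acceptance rule, and all parameter names are illustrative stand-ins rather than the authors' implementation, and the contour-based search-space restriction and the parallel multi-rectangle mechanism described in the abstract are omitted for brevity.

```python
import numpy as np

def attack_loss(scores):
    # Untargeted objective: drive the highest box confidence down.
    # `scores` is whatever the black-box detector returns as top-1 scores.
    return float(scores.max()) if len(scores) else 0.0

def rectangle_flip_attack(image, detect, eps=8 / 255, iters=2000,
                          rect_frac=0.1, rng=None):
    """Sketch of a rectangle-flip random-search attack (PRFA-style).

    At each step a random rectangle is chosen and the sign of the
    perturbation inside it is flipped; the change is kept only if the
    detection loss decreases (i.e., boxes become less confident).
    """
    rng = rng or np.random.default_rng(0)
    h, w, c = image.shape
    # Start from a random signed perturbation of magnitude eps.
    delta = eps * rng.choice([-1.0, 1.0], size=image.shape)
    best = attack_loss(detect(np.clip(image + delta, 0.0, 1.0)))

    for _ in range(iters):
        # Sample a rectangle; the paper restricts this sampling to regions
        # around object contours and critical points, which is not shown here.
        rh, rw = max(1, int(h * rect_frac)), max(1, int(w * rect_frac))
        y, x = rng.integers(0, h - rh + 1), rng.integers(0, w - rw + 1)

        candidate = delta.copy()
        candidate[y:y + rh, x:x + rw] *= -1.0  # flip the rectangle's sign
        loss = attack_loss(detect(np.clip(image + candidate, 0.0, 1.0)))
        if loss < best:  # greedy acceptance: keep only improving flips
            delta, best = candidate, loss

    return np.clip(image + delta, 0.0, 1.0)
```

In this sketch each query costs one call to the detector, so restricting where rectangles are sampled (as the paper does) and attacking several rectangles per query directly reduce the total query budget.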


