Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples

02/06/2019
by Derui Wang, et al.

We demonstrate that Non-Maximum Suppression (NMS), which is commonly used in object detection tasks to filter redundant detection results, is no longer secure. NMS has always been an integral part of object detection algorithms, and the Fully Convolutional Network (FCN) is currently the dominant backbone architecture of object detection models. Given an input instance, an FCN generates end-to-end detection results in a single stage and therefore outputs a large number of raw detection boxes. These bounding boxes are then filtered by NMS to produce the final detection results. In this paper, we propose an adversarial example attack that triggers malfunctioning of NMS in end-to-end object detection models. Our attack, named Daedalus, manipulates the detection box regression values to compress the dimensions of the detection boxes, so that NMS can no longer filter redundant boxes correctly. As a result, the final detection output contains extremely dense false positives. This can be fatal for many object detection applications, such as autonomous vehicles and smart manufacturing. Our attack can be applied to different end-to-end object detection models. Furthermore, we suggest crafting robust adversarial examples by using an ensemble of popular detection models as substitutes. Since model reuse is common in real-world object detection scenarios, Daedalus examples crafted against an ensemble of substitutes can launch attacks without knowledge of the victim models. Our experiments demonstrate that the attack effectively stops NMS from filtering redundant bounding boxes. As the evaluation results suggest, Daedalus increases the false positive rate in detection results to 99.9% and reduces the average precision scores to 0, while maintaining a low distortion cost on the original inputs.
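To make the mechanism concrete, below is a minimal sketch (not the paper's implementation) of greedy IoU-threshold NMS, together with an illustration of why compressing box dimensions defeats it: once the regressed boxes shrink, their pairwise IoU drops below the suppression threshold and redundant detections survive as false positives. The box coordinates, scores, and the 0.5 threshold are illustrative assumptions.

import numpy as np

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: keep the highest-scoring box, drop the remaining boxes that
    # overlap it by more than iou_thresh, and repeat on what is left.
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = int(order.pop(0))
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep

# Two raw detections of the same object: their IoU is high, so NMS keeps one.
boxes = np.array([[10., 10., 110., 110.], [20., 20., 120., 120.]])
scores = np.array([0.95, 0.90])
print(nms(boxes, scores))   # [0] -- the redundant box is suppressed

# The same two detections after their dimensions are compressed around their
# centres (the effect Daedalus induces through the regression outputs): the
# shrunken boxes barely overlap, so both survive NMS as false positives.
shrunk = np.array([[50., 50., 70., 70.], [60., 60., 80., 80.]])
print(nms(shrunk, scores))  # [0, 1] -- nothing is suppressed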


Related research

DenseBox: Unifying Landmark Localization with End to End Object Detection (09/16/2015)
How can a single fully convolutional neural network (FCN) perform on obj...

Pick-Object-Attack: Type-Specific Adversarial Attack for Object Detection (06/05/2020)
Many recent studies have shown that deep neural models are vulnerable to...

Membership Inference Attacks Against Object Detection Models (01/12/2020)
Machine learning models can leak information about the dataset they trai...

Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection (01/22/2022)
Object detection has been widely used in many safety-critical tasks, suc...

Quantum-soft QUBO Suppression for Accurate Object Detection (07/28/2020)
Non-maximum suppression (NMS) has been adopted by default for removing r...

Never Mind the Bounding Boxes, Here's the SAND Filters (08/15/2018)
Perception is the main bottleneck to perform autonomous mobile manipulat...

Safety-Aware Hardening of 3D Object Detection Neural Network Systems (03/25/2020)
We study how state-of-the-art neural networks for 3D object detection us...
