We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature

06/09/2021
by Bin Liang, et al.

Recently, deep-learning-based object detection has been shown to be vulnerable to adversarial patch attacks. An attacker holding a specially crafted patch can hide from state-of-the-art person detectors, e.g., YOLO, even in the physical world. Such attacks pose serious security threats, for example escaping from surveillance cameras. In this paper, we deeply explore the problem of detecting adversarial patch attacks against object detection. First, we identify a leverageable signature of existing adversarial patches from the perspective of visualization explanation, and propose a fast signature-based defense method that is demonstrated to be effective. Second, we design an improved patch generation algorithm to reveal the risk that the signature-based approach may be bypassed by techniques emerging in the future: the newly generated adversarial patches successfully evade the proposed signature-based defense. Finally, we present a novel signature-independent detection method based on internal content semantics consistency rather than any attack-specific prior knowledge. The fundamental intuition is that an adversarial object can appear locally but disappear globally in an input image. Experiments demonstrate that the signature-independent method effectively detects both the existing and the improved attacks. It has also proven to be a general method, detecting unforeseen and even other types of attacks without any attack-specific prior knowledge. The two proposed detection methods suit different scenarios, and we believe that combining them can offer comprehensive protection.
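The signature-independent intuition (an object that a local crop reveals but the full-image detections miss) can be sketched as a simple consistency check. The following is a minimal illustration only, not the authors' implementation; `detect`, the toy detector, and all coordinates are hypothetical stand-ins for a real detector such as YOLO.

```python
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in full-image coordinates


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    def area(t: Box) -> int:
        return (t[2] - t[0]) * (t[3] - t[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0


def inconsistent_objects(detect: Callable[[Box], List[Box]],
                         full_image: Box,
                         crops: List[Box],
                         iou_thresh: float = 0.5) -> List[Box]:
    """Flag objects detected in some local crop but absent from the
    full-image detections (the local/global inconsistency signal)."""
    global_dets = detect(full_image)
    suspicious = []
    for crop in crops:
        for obj in detect(crop):
            # Object appears locally but matches nothing globally: suspicious.
            if all(iou(obj, g) < iou_thresh for g in global_dets):
                suspicious.append(obj)
    return suspicious


# Toy detector standing in for a real model: a person at (10, 10, 50, 90)
# is "hidden" by an adversarial patch when the whole image is processed,
# but reappears when only a local crop is examined.
def toy_detect(region: Box) -> List[Box]:
    person = (10, 10, 50, 90)
    if region == (0, 0, 100, 100):  # full image: hiding attack succeeds
        return []
    return [person] if iou(person, region) > 0 else []


flags = inconsistent_objects(toy_detect, (0, 0, 100, 100), [(0, 0, 60, 100)])
print(flags)  # the hidden person is flagged: [(10, 10, 50, 90)]
```

In a real pipeline the crops would be produced by sliding windows or region proposals, and the detector would be run once per crop; the check requires no knowledge of how the patch was generated, which is what makes the method signature-independent.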


Related research

On Physical Adversarial Patches for Object Detection (06/20/2019)
In this paper, we demonstrate a physical adversarial patch attack agains...

Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection (12/08/2021)
Object detection plays a key role in many security-critical systems. Adv...

Adversarial YOLO: Defense Human Detection Patch Attacks via Detecting Adversarial Patches (03/16/2021)
The security of object detection systems has attracted increasing attent...

TextShield: Beyond Successfully Detecting Adversarial Sentences in Text Classification (02/03/2023)
Adversarial attack serves as a major challenge for neural network models...

Fooling automated surveillance cameras: adversarial patches to attack person detection (04/18/2019)
Adversarial attacks on machine learning models have seen increasing inte...

Adversarial Patch Camouflage against Aerial Detection (08/31/2020)
Detection of military assets on the ground can be performed by applying ...

Constraining the Attack Space of Machine Learning Models with Distribution Clamping Preprocessing (05/18/2022)
Preprocessing and outlier detection techniques have both been applied to...
