Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attack

08/11/2021
by   Zitao Chen, et al.

Adversarial patch attacks against image-classification deep neural networks (DNNs), in which the attacker injects arbitrary distortions within a bounded region of an image, can generate perturbations that are robust (i.e., remain adversarial in the physical world) and universal (i.e., remain adversarial on any input). It is therefore important to detect and mitigate such attacks to ensure the security of DNNs. This work proposes Jujutsu, a technique to detect and mitigate robust and universal adversarial patch attacks. For detection, Jujutsu leverages the universal property of the patch attack: it uses explainable-AI techniques to identify suspicious features that are potentially malicious, and verifies their maliciousness by transplanting them onto new images. An adversarial patch continues to exhibit its malicious behavior on the new images and can thus be detected via prediction consistency. For mitigation, Jujutsu leverages the localized nature of the patch attack by masking the suspicious features to "remove" the adversarial perturbations. Because the network might fail to classify an image once part of its content is masked, Jujutsu uses image inpainting to synthesize alternative content for the masked pixels, reconstructing a "clean" image for correct prediction. We evaluate Jujutsu with five DNNs on two datasets and show that it achieves superior performance, significantly outperforming existing techniques. Jujutsu further defends against various variants of the basic attack, including 1) physical-world attacks; 2) attacks that target diverse classes; 3) attacks that use patches of different shapes; and 4) adaptive attacks.
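The detect-then-mitigate pipeline the abstract describes can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation: the saliency map stands in for the explainable-AI attribution step, `classify` is any black-box classifier, and a mean-fill stands in for the learned inpainting model Jujutsu actually uses.

```python
import numpy as np

def top_saliency_bbox(saliency, k):
    """Locate the k x k window with the largest total saliency (brute force)."""
    H, W = saliency.shape
    best, best_ij = -np.inf, (0, 0)
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            s = saliency[i:i + k, j:j + k].sum()
            if s > best:
                best, best_ij = s, (i, j)
    return best_ij

def detect_patch(image, saliency, holdout_images, classify, k, agree_thresh=0.8):
    """Transplant the most-salient k x k region onto held-out images; if the
    transplanted images mostly keep the suspect label, the region behaves
    universally (adversarially) and is flagged as a patch."""
    i, j = top_saliency_bbox(saliency, k)
    suspect_label = classify(image)
    agree = 0
    for h in holdout_images:
        t = h.copy()
        t[i:i + k, j:j + k] = image[i:i + k, j:j + k]  # transplant suspicious region
        if classify(t) == suspect_label:
            agree += 1
    flagged = agree / len(holdout_images) >= agree_thresh
    return flagged, (i, j)

def mitigate(image, bbox, k):
    """Mask the suspicious region and fill it from the surrounding pixels
    (a crude stand-in for image inpainting)."""
    i, j = bbox
    out = image.copy()
    keep = np.ones(image.shape, dtype=bool)
    keep[i:i + k, j:j + k] = False
    out[i:i + k, j:j + k] = image[keep].mean()
    return out
```

A toy run: with a bright 4x4 "patch" in one corner, a classifier that fires on any bright pixel, and all-dark held-out images, the transplanted region keeps the suspect label on every held-out image, so detection fires; masking and filling the region then flips the prediction back.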


Related research

- Double Targeted Universal Adversarial Perturbations (10/07/2020): Despite their impressive performance, deep neural networks (DNNs) are wi...
- Real-time Detection of Practical Universal Adversarial Perturbations (05/16/2021): Universal Adversarial Perturbations (UAPs) are a prominent class of adve...
- Query-Efficient Decision-based Black-Box Patch Attack (07/02/2023): Deep neural networks (DNNs) have been shown to be highly vulnerable to ...
- Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection (09/27/2022): Adversarial patch-based attacks aim to fool a neural network with an int...
- Patch Attack for Automatic Check-out (05/19/2020): Adversarial examples are inputs with imperceptible perturbations that ea...
- PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks (03/24/2020): The black-box nature of deep neural networks (DNNs) facilitates attacker...
- PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches (04/26/2021): An adversarial patch can arbitrarily manipulate image pixels within a re...
