Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection

09/27/2022
by Svetlana Pavlitskaya, et al.

Adversarial patch-based attacks aim to fool a neural network with intentionally generated noise concentrated in a particular region of an input image. In this work, we perform an in-depth analysis of different patch generation parameters, including initialization, patch size, and especially the positioning of the patch in the image during training. We focus on the object vanishing attack and run experiments with YOLOv3 as the model under attack in a white-box setting, using images from the COCO dataset. Our experiments show that inserting the patch inside a window of increasing size during training produces a significantly stronger attack than a fixed position. The best results were obtained when the patch was positioned randomly during training, with the position additionally varying within each batch.
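The random-placement strategy is straightforward to illustrate. Below is a minimal PyTorch sketch of universal patch training for object vanishing, in which the patch is pasted at an independent random position in every image of a batch and optimized to suppress the detector's objectness scores. The `objectness_scores` callable is a hypothetical wrapper around a YOLOv3-style detector's pre-NMS objectness outputs, and all helper names and hyperparameters here are illustrative assumptions, not the authors' code.

```python
# Sketch: universal adversarial patch for object vanishing with
# per-image random placement (positions vary within each batch).
import torch


def apply_patch(images, patch):
    """Paste `patch` at an independent random position in every image."""
    out = images.clone()
    _, _, H, W = images.shape
    ph, pw = patch.shape[-2:]
    for i in range(images.shape[0]):
        y = torch.randint(0, H - ph + 1, (1,)).item()
        x = torch.randint(0, W - pw + 1, (1,)).item()
        out[i, :, y:y + ph, x:x + pw] = patch
    return out


def train_patch(objectness_scores, loader, patch_size=100, epochs=5, lr=0.01):
    # Random initialization; the paper also compares other initializations.
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(epochs):
        for images in loader:  # images: (B, 3, H, W), values in [0, 1]
            patched = apply_patch(images, patch.clamp(0, 1))
            # Object vanishing: drive the strongest objectness score
            # in every image toward zero.
            scores = objectness_scores(patched)  # assumed shape (B, N)
            loss = scores.max(dim=-1).values.mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                patch.clamp_(0, 1)  # keep the patch a valid image
    return patch.detach()
```

A window-of-increasing-size schedule fits the same skeleton: instead of sampling `(y, x)` over the full image, one would sample it inside a region that grows with the training step.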
