Fooling automated surveillance cameras: adversarial patches to attack person detection

04/18/2019
by Simen Thys, et al.

Adversarial attacks on machine learning models have seen increasing interest in recent years. By making only subtle changes to the input of a convolutional neural network, its output can be swayed to a completely different result. The first attacks did this by slightly changing the pixel values of an input image to fool a classifier into outputting the wrong class. Other approaches have tried to learn "patches" that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that the attacks are feasible in the real world, e.g. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs), and the known structure of the object is then used to generate an adversarial patch on top of it. In this paper, we present an approach to generate adversarial patches for targets with high intra-class variety, namely persons. The goal is to generate a patch that can successfully hide a person from a person detector. Such an attack could, for instance, be used maliciously to circumvent surveillance systems: intruders could sneak around undetected by holding a small cardboard plate in front of their body, aimed at the surveillance camera. Our results show that our system is able to significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge, we are the first to attempt this kind of attack on targets with a high level of intra-class variety such as persons.
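At its core, the attack described in the abstract can be thought of as an optimisation loop: a learnable patch is pasted onto training images, the patched images are fed through the detector, and the patch pixels are updated by gradient descent to minimise the detector's person confidence. The PyTorch sketch below illustrates only this loop, not the authors' actual pipeline: `toy_detector` and `apply_patch` are hypothetical stand-ins for the real detector and the patch-application step, and real attacks typically add printability and smoothness terms to the loss that are omitted here.

```python
import torch

def toy_detector(images):
    # Stand-in for a person detector: a fixed random conv followed by a
    # global max yields one "person confidence" score per image. A real
    # attack would differentiate through the target detector instead.
    torch.manual_seed(0)
    conv = torch.nn.Conv2d(3, 1, kernel_size=7, stride=4)
    conv.requires_grad_(False)
    return torch.sigmoid(conv(images).amax(dim=(1, 2, 3)))

def apply_patch(images, patch, top=100, left=100):
    # Paste the patch onto every image; gradients flow back to `patch`.
    patched = images.clone()
    ph, pw = patch.shape[1:]
    patched[:, :, top:top + ph, left:left + pw] = patch
    return patched

patch = torch.rand(3, 64, 64, requires_grad=True)  # learnable patch
optimizer = torch.optim.Adam([patch], lr=0.03)
images = torch.rand(8, 3, 416, 416)                # stand-in training batch

for step in range(200):
    scores = toy_detector(apply_patch(images, patch))
    loss = scores.mean()          # suppress detector confidence
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0.0, 1.0)   # keep pixel values valid/printable
```

In a realistic setting, the patch would be placed relative to the detected person boxes in each training image (with random scaling and rotation for robustness), and the loss would use the detector's actual objectness or person-class scores.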


