Kidnapping Deep Learning-based Multirotors using Optimized Flying Adversarial Patches

08/01/2023
by Pia Hanfeld, et al.

Autonomous flying robots, such as multirotors, often rely on deep learning models that make predictions based on camera images, e.g., for pose estimation. These models can produce surprising predictions when applied to input images outside the training domain. This weakness can be exploited by adversarial attacks, for example by computing small images, so-called adversarial patches, that can be placed in the environment to manipulate the neural network's prediction. We introduce flying adversarial patches, in which multiple images are mounted on at least one other flying robot and can therefore be placed anywhere in the field of view of a victim multirotor. By introducing attacker robots, the system is extended to an adversarial multi-robot system. For an effective attack, we compare three methods that simultaneously optimize multiple adversarial patches and their positions in the input image. We show that our methods scale well with the number of adversarial patches. Moreover, we demonstrate physical flights with two robots, employing a novel attack policy that uses the computed adversarial patches to kidnap a robot that was supposed to follow a human.
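The core idea behind such attacks is to optimize the patch pixels and the patch placement jointly by gradient descent through the victim's network. Below is a minimal sketch of that idea, assuming a PyTorch-style differentiable pipeline; victim_model, clean_image, target_pose, place_patch, and the affine placement parameterization are illustrative placeholders, not the authors' implementation (which additionally handles multiple patches, several placement strategies, and a physical attack policy).

import torch
import torch.nn as nn
import torch.nn.functional as F

def place_patch(image, patch, tx, ty, scale):
    # Differentiably paste `patch` into `image`: gradients reach both the
    # patch pixels and the placement parameters (tx, ty, scale).
    _, _, h, w = image.shape
    _, _, ph, pw = patch.shape
    # Pad the patch (and a matching mask) to full image size in the top-left corner.
    canvas = F.pad(patch, (0, w - pw, 0, h - ph))
    mask = F.pad(torch.ones_like(patch), (0, w - pw, 0, h - ph))
    # Spatial-transformer warp: the affine grid shifts and scales the padded patch.
    theta = torch.stack([
        torch.stack([1.0 / scale, torch.zeros_like(scale), -tx]),
        torch.stack([torch.zeros_like(scale), 1.0 / scale, -ty]),
    ]).unsqueeze(0)
    grid = F.affine_grid(theta, list(image.shape), align_corners=False)
    warped_patch = F.grid_sample(canvas, grid, align_corners=False)
    warped_mask = F.grid_sample(mask, grid, align_corners=False)
    # Keep the original image where the mask is zero, the warped patch elsewhere.
    return image * (1.0 - warped_mask) + warped_patch * warped_mask

# Placeholder victim model and data (stand-ins for illustration only).
victim_model = nn.Sequential(                   # toy pose regressor: image -> (x, y, z)
    nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3),
)
clean_image = torch.rand(1, 3, 96, 160)         # placeholder camera frame
target_pose = torch.tensor([[2.0, 1.0, 0.0]])   # pose the attacker wants predicted

# Jointly optimize the patch pixels and the patch placement in the image.
patch = torch.rand(1, 3, 32, 32, requires_grad=True)
tx = torch.tensor(0.3, requires_grad=True)
ty = torch.tensor(-0.2, requires_grad=True)
scale = torch.tensor(0.4, requires_grad=True)
opt = torch.optim.Adam([patch, tx, ty, scale], lr=1e-2)

for step in range(500):
    adv = place_patch(clean_image, patch, tx, ty, scale)
    loss = F.mse_loss(victim_model(adv), target_pose)  # pull the prediction toward the target
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)                  # keep the patch a valid, printable image
        scale.clamp_(0.1, 1.0)                  # keep the patch size physically plausible

In practice such an optimization would run over a dataset of camera frames rather than a single image, and the resulting patch would be printed and carried by the attacker multirotor so that the optimized placement can be realized by the attacker's own motion.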

Related research

05/22/2023 · Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors
Autonomous flying robots, e.g. multirotors, often rely on a neural netwo...

11/20/2019 · Generate (non-software) Bugs to Fool Classifiers
In adversarial attacks intended to confound deep learning models, most s...

04/22/2020 · Live Trojan Attacks on Deep Neural Networks
Like all software systems, the execution of deep learning models is dict...

04/30/2021 · IPatch: A Remote Adversarial Patch
Applications such as autonomous vehicles and medical screening use deep ...

12/06/2018 · Towards Hiding Adversarial Examples from Network Interpretation
Deep networks have been shown to be fooled rather easily using adversari...

02/27/2023 · CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World
Patch-based physical attacks have increasingly aroused concerns. Howev...

03/05/2018 · Predicting Out-of-View Feature Points for Model-Based Camera Pose Estimation
In this work we present a novel framework that uses deep learning to pre...