
Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors

by Pia Hanfeld, et al.

Autonomous flying robots, e.g., multirotors, often rely on a neural network that makes predictions based on a camera image. These deep learning (DL) models can produce unexpected outputs when applied to input images outside the training domain. Adversarial attacks exploit this weakness, for example, by computing small images, so-called adversarial patches, that can be placed in the environment to manipulate the neural network's prediction. We introduce flying adversarial patches, where an image is mounted on another flying robot and can therefore be placed anywhere in the field of view of a victim multirotor. For an effective attack, we compare three methods that simultaneously optimize the adversarial patch and its position in the input image. We perform an empirical validation on a publicly available DL model and dataset for autonomous multirotors. Ultimately, our attacking multirotor would be able to gain full control over the motions of the victim multirotor.
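The core idea of simultaneously optimizing a patch and its placement can be illustrated with a toy example. The sketch below is not the paper's method: it substitutes a linear model with a hand-derived gradient for the victim neural network, and alternates a greedy grid search over positions with projected gradient steps on the patch pixels. All names (`predict`, `apply_patch`, `attack`) and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 16, 16    # toy grayscale "camera image"
PH, PW = 4, 4    # patch size

# Toy victim model: a linear map from pixels to a scalar prediction
# (a stand-in for the victim neural network, purely illustrative).
weights = rng.normal(size=(H, W))

def predict(img):
    return float(np.sum(weights * img))

def apply_patch(img, patch, y, x):
    out = img.copy()
    out[y:y + PH, x:x + PW] = patch
    return out

def attack(img, target, steps=300, lr=0.01):
    """Jointly optimize patch pixels (projected gradient descent)
    and placement (greedy search over a coarse position grid)."""
    patch = np.clip(rng.normal(0.5, 0.1, size=(PH, PW)), 0.0, 1.0)
    candidates = [(y, x) for y in range(0, H - PH + 1, 4)
                         for x in range(0, W - PW + 1, 4)]
    y, x = 0, 0
    for _ in range(steps):
        # Greedy position search: pick the placement that currently
        # brings the prediction closest to the attacker's target.
        y, x = min(candidates,
                   key=lambda p: (predict(apply_patch(img, patch, *p))
                                  - target) ** 2)
        # Gradient step on the patch; for the linear model the gradient
        # of the squared error w.r.t. the patch pixels is closed-form.
        pred = predict(apply_patch(img, patch, y, x))
        grad = 2.0 * (pred - target) * weights[y:y + PH, x:x + PW]
        patch = np.clip(patch - lr * grad, 0.0, 1.0)  # keep valid pixels
    return patch, (y, x)

img = rng.uniform(0.0, 1.0, size=(H, W))
target = 3.0  # prediction the attacker wants to induce
patch, pos = attack(img, target)
adv_pred = predict(apply_patch(img, patch, *pos))
```

In the paper's setting the victim model is not linear, so the patch gradient would come from backpropagation through the network rather than a closed-form expression, and the placement itself can be made differentiable; the alternation between patch and position updates shown here is only one simple way to couple the two.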


