Physical Passive Patch Adversarial Attacks on Visual Odometry Systems

07/11/2022
by Yaniv Nemcovsky, et al.

Deep neural networks are known to be susceptible to adversarial perturbations: small perturbations that alter the network's output while satisfying strict norm constraints. While such perturbations are usually tailored to a specific input, a universal perturbation can be constructed to alter the model's output on an entire set of inputs. Universal perturbations present a more realistic attack scenario, as they do not require knowledge of the model's exact input. In addition, the universal setting raises the question of generalization to unseen data: given a set of inputs, the universal perturbation aims to alter the model's output on out-of-sample data as well. In this work, we study physical passive patch adversarial attacks on visual odometry-based autonomous navigation systems. A visual odometry system infers the relative camera motion between two corresponding viewpoints and is frequently used by vision-based autonomous navigation systems to estimate their state. For such navigation systems, a patch adversarial perturbation poses a severe security risk, as it can be used to mislead the system onto a collision course. To the best of our knowledge, we are the first to show that the error margin of a visual odometry model can be significantly increased by deploying patch adversarial attacks in the scene. We evaluate our attacks on synthetic closed-loop drone navigation data and demonstrate that a comparable vulnerability exists in real data. A reference implementation of the proposed method and the reported experiments is available at https://github.com/patchadversarialattacks/patchadversarialattacks.
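To make the universal patch setting concrete, below is a minimal sketch of how such a patch might be optimized against a visual odometry model. This is not the paper's implementation (see the linked repository for that): `vo_model` (an image-pair-to-relative-pose network), `render_patch` (a compositing helper), `loader` (yielding frame pairs with their clean pose estimates), and the pose-deviation loss are all illustrative assumptions.

```python
import torch

def render_patch(frames, patch, top=20, left=20):
    # Hypothetical compositing step: paste the patch at a fixed image
    # location. A physical attack would instead project the patch into
    # the scene geometry, consistently across viewpoints.
    out = frames.clone()
    _, h, w = patch.shape
    out[..., top:top + h, left:left + w] = patch
    return out

def universal_patch_attack(vo_model, loader, patch_shape=(3, 64, 64),
                           steps=200, lr=1e-2):
    # A single patch shared across all input pairs: this is what makes
    # the perturbation "universal" rather than input-specific.
    patch = torch.rand(patch_shape, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for frame_a, frame_b, clean_pose in loader:
            adv_a = render_patch(frame_a, patch)
            adv_b = render_patch(frame_b, patch)
            pred = vo_model(adv_a, adv_b)  # estimated relative camera motion
            # Maximize the deviation from the clean pose estimate by
            # minimizing its negative.
            loss = -torch.norm(pred - clean_pose, dim=-1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                patch.clamp_(0.0, 1.0)  # keep the patch a valid image
    return patch.detach()
```

Optimizing one patch over a training set of frame pairs and then measuring the pose error it induces on held-out trajectories mirrors the universal, out-of-sample evaluation described above.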


Related research

10/15/2020 · Generalizing Universal Adversarial Attacks Beyond Additive Perturbations
The previous study has shown that universal adversarial attacks can fool...

01/27/2021 · Meta Adversarial Training
Recently demonstrated physical-world adversarial attacks have exposed vu...

09/15/2021 · Universal Adversarial Attack on Deep Learning Based Prognostics
Deep learning-based time series models are being extensively utilized in...

12/28/2020 · Analysis of Dominant Classes in Universal Adversarial Perturbations
The reasons why Deep Neural Networks are susceptible to being fooled by...

11/17/2019 · Smoothed Inference for Adversarially-Trained Models
Deep neural networks are known to be vulnerable to inputs with malicious...

07/10/2021 · Resilience of Autonomous Vehicle Object Category Detection to Universal Adversarial Perturbations
Due to the vulnerability of deep neural networks to adversarial examples...

06/12/2022 · Consistent Attack: Universal Adversarial Perturbation on Embodied Vision Navigation
Embodied agents in vision navigation coupled with deep neural networks h...
