Consistent Attack: Universal Adversarial Perturbation on Embodied Vision Navigation

06/12/2022
by You Qiaoben, et al.

Embodied agents for vision navigation built on deep neural networks have attracted increasing attention. However, deep neural networks are vulnerable to maliciously crafted adversarial noise, which can cause catastrophic failures in embodied vision navigation. Among adversarial noises, universal adversarial perturbations (UAPs), i.e., a single image-agnostic perturbation applied to every frame the agent receives, are especially critical for embodied vision navigation because they are computation-efficient and practical to deploy at attack time. However, existing UAP methods do not consider the system dynamics of embodied vision navigation. To extend UAPs to the sequential decision setting, we formulate the environment disturbed by a universal noise δ as a δ-disturbed Markov Decision Process (δ-MDP). Based on this formulation, we analyze the properties of δ-MDP and propose two novel Consistent Attack methods for attacking embodied agents, which are the first to account for the dynamics of the MDP by estimating the disturbed Q function and the disturbed state distribution. Regardless of the victim model, our Consistent Attack causes a significant performance drop on the PointGoal task in Habitat. Extensive experimental results indicate potential risks in applying embodied vision navigation methods to the real world.
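The defining property of a UAP in this setting is that one fixed perturbation δ is shared across every observation frame of the episode, subject to an ℓ∞ budget. The following is a minimal sketch of that mechanism, not the authors' Consistent Attack implementation; the attack-objective gradient (e.g., from the estimated disturbed Q function) is assumed to be supplied by the caller, and all function names here are illustrative.

```python
import numpy as np

def project_linf(delta, epsilon):
    """Project the universal perturbation back onto the l_inf ball of radius epsilon."""
    return np.clip(delta, -epsilon, epsilon)

def apply_uap(frame, delta):
    """Add the image-agnostic perturbation to one observation frame.

    The same delta is reused for every frame the agent receives;
    pixel values are kept in the valid [0, 1] range.
    """
    return np.clip(frame + delta, 0.0, 1.0)

def update_uap(delta, grad, step_size, epsilon):
    """One sign-gradient ascent step on the attack objective, then projection.

    `grad` is the gradient of the attacker's loss w.r.t. delta, accumulated
    over frames; in the paper's setting it would be derived from the
    disturbed Q function or disturbed state distribution.
    """
    return project_linf(delta + step_size * np.sign(grad), epsilon)
```

Because δ is optimized once and then frozen, the attacker pays no per-frame optimization cost at deployment, which is what makes UAPs computation-efficient for sequential decision tasks.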


