Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents

09/05/2019, by Xian Yeow Lee, et al.

Robustness of Deep Reinforcement Learning (DRL) algorithms towards adversarial attacks in real-world applications, such as those deployed in cyber-physical systems (CPS), is of increasing concern. Numerous studies have investigated the mechanisms of attacks on the RL agent's state space. Nonetheless, attacks on the RL agent's action space (corresponding to actuators in engineering systems) are equally pernicious, yet they remain relatively understudied in the ML literature. In this work, we first frame the problem as an optimization problem of minimizing the cumulative reward of an RL agent, with decoupled constraints serving as the attack budget. We propose a white-box Myopic Action Space (MAS) attack algorithm that distributes the attacks across the action space dimensions. Next, we reformulate the optimization problem with the same objective function, but with a temporally coupled constraint on the attack budget that takes into account the approximated dynamics of the agent. This leads to the white-box Look-ahead Action Space (LAS) attack algorithm that distributes the attacks across both the action and temporal dimensions. Our results show that, using the same amount of resources, the LAS attack deteriorates the agent's performance significantly more than the MAS attack. This reveals the possibility that, with limited resources, an adversary can exploit the agent's dynamics to craft attacks that cause the agent to fail. Additionally, we leverage these attack strategies as a possible tool to gain insights into the potential vulnerabilities of DRL agents.
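As a rough sketch of the two constrained formulations summarized above (the notation is an illustrative assumption, not taken from the paper: $\delta_t$ is the perturbation added to the agent's action $a_t$, $J$ is the agent's cumulative reward under policy $\pi$, $\|\cdot\|_p$ is an $\ell_p$ norm, $b$ and $B$ are the per-step and total attack budgets, and $H$ is a look-ahead horizon):

    \min_{\{\delta_t\}} \; J\big(\pi;\, \{a_t + \delta_t\}\big) \quad \text{s.t.} \quad \|\delta_t\|_p \le b \;\; \forall t \qquad \text{(MAS: decoupled, per-step budget)}

    \min_{\{\delta_t\}} \; J\big(\pi;\, \{a_t + \delta_t\}\big) \quad \text{s.t.} \quad \sum_{t=0}^{H} \|\delta_t\|_p \le B \qquad \text{(LAS: budget coupled over the horizon } H\text{)}

Under the coupled constraint, the adversary is free to concentrate its budget on the time steps where the agent is most sensitive, which is consistent with the reported finding that LAS degrades performance more than MAS for the same total resources.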


Related research:

- Robust Deep Reinforcement Learning with Adversarial Attacks (12/11/2017)
- Recover Triggered States: Protect Model Against Backdoor Attack in Reinforcement Learning (04/01/2023)
- BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning (05/02/2021)
- Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning (05/14/2020)
- Robustifying Reinforcement Learning Agents via Action Space Adversarial Training (07/14/2020)
- Deep Reinforcement Learning for Cyber System Defense under Dynamic Adversarial Uncertainties (02/03/2023)
- Black-Box Targeted Reward Poisoning Attack Against Online Deep Reinforcement Learning (05/18/2023)
