Robust Reinforcement Learning on State Observations with Learned Optimal Adversary

01/21/2021
by   Huan Zhang, et al.

We study the robustness of reinforcement learning (RL) under adversarially perturbed state observations, a setting that matches many adversarial attacks on deep reinforcement learning (DRL) and that is also important for deploying real-world RL agents under unpredictable sensing noise. For a fixed agent policy, we demonstrate that an optimal adversary perturbing the state observations can be found, and it is guaranteed to achieve the worst-case agent reward. In DRL settings, this leads to a novel empirical adversarial attack on RL agents via a learned adversary that is much stronger than previous ones. To enhance agent robustness, we propose alternating training with learned adversaries (ATLA), a framework that trains an adversary online together with the agent using policy gradient, following the optimal adversarial attack formulation. Additionally, motivated by our analysis of the state-adversarial Markov decision process (SA-MDP), we show that past states and actions (history) can be useful for learning a robust agent, and we empirically find that an LSTM-based policy can be more robust under adversaries. Empirical evaluations on several continuous control environments show that ATLA achieves state-of-the-art performance under strong adversaries. Our code is available at https://github.com/huanzhang12/ATLA_robust_RL.
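To make the alternating-training idea concrete, the following is a minimal sketch, not the authors' implementation: a toy chain MDP in which an adversary shifts the state the agent observes within a small budget, and agent and adversary are trained in alternation with plain REINFORCE (the paper's experiments use PPO on continuous control; all environment details, the `EPS` budget, and the tabular softmax policies here are simplifying assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, GOAL, EPS = 5, 4, 1  # adversary may shift the observation by at most EPS

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def rollout(agent_logits, adv_logits, max_steps=20):
    """Run one episode; the adversary perturbs the state the agent observes."""
    s, traj, total = 0, [], 0.0
    for _ in range(max_steps):
        # adversary picks a bounded shift in {-EPS, 0, +EPS} given the true state s
        d = rng.choice(2 * EPS + 1, p=softmax(adv_logits[s])) - EPS
        obs = int(np.clip(s + d, 0, N_STATES - 1))
        # agent acts on the perturbed observation, not the true state
        a = rng.choice(2, p=softmax(agent_logits[obs]))  # 0 = left, 1 = right
        s_next = int(np.clip(s + (1 if a == 1 else -1), 0, N_STATES - 1))
        r = 1.0 if s_next == GOAL else 0.0
        traj.append((s, d + EPS, obs, a))
        total += r
        s = s_next
        if r > 0:
            break
    return traj, total

def reinforce_update(logits, picks, ret, lr, sign):
    """One REINFORCE step: sign=+1 maximizes the return, sign=-1 minimizes it."""
    for idx, choice in picks:
        p = softmax(logits[idx])
        grad = -p                       # d log pi / d logits = one_hot - p
        grad[choice] += 1.0
        logits[idx] += sign * lr * ret * grad

def atla_train(iters=200, episodes=10, lr=0.1):
    agent = np.zeros((N_STATES, 2))            # agent logits per observation
    adv = np.zeros((N_STATES, 2 * EPS + 1))    # adversary logits per true state
    history = []
    for it in range(iters):
        train_agent = it % 2 == 0              # alternate which player learns
        rets = []
        for _ in range(episodes):
            traj, ret = rollout(agent, adv)
            rets.append(ret)
            if train_agent:
                reinforce_update(agent, [(o, a) for _, _, o, a in traj], ret, lr, +1)
            else:
                reinforce_update(adv, [(s, d) for s, d, _, _ in traj], ret, lr, -1)
        history.append(float(np.mean(rets)))
    return agent, adv, history

agent, adv, history = atla_train()
```

The key structural point is the `sign` argument: the agent ascends the return while the adversary descends it, so the two policy-gradient learners play the zero-sum game the SA-MDP analysis describes, with the adversary's action space restricted to an epsilon-bounded perturbation of the observation.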

