Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks

12/28/2022
by   Junlin Wu, et al.
Washington University in St. Louis

Function approximation has enabled remarkable advances in applying reinforcement learning (RL) techniques in environments with high-dimensional inputs, such as images, in an end-to-end fashion, mapping such inputs directly to low-level control. Nevertheless, these policies have proved vulnerable to small adversarial input perturbations. A number of approaches for improving or certifying the robustness of end-to-end RL to adversarial perturbations have emerged as a result, focusing on cumulative reward. However, what is often at stake in adversarial scenarios is the violation of fundamental properties, such as safety, rather than the overall reward, which combines safety with efficiency. Moreover, properties such as safety can only be defined with respect to the true state, rather than the high-dimensional raw inputs to end-to-end policies. To disentangle nominal efficiency from adversarial safety, we situate RL in deterministic partially observable Markov decision processes (POMDPs) with the goal of maximizing cumulative reward subject to safety constraints. We then propose a partially supervised reinforcement learning (PSRL) framework that takes advantage of an additional assumption: the true state of the POMDP is known at training time. We present the first approach for certifying the safety of PSRL policies under adversarial input perturbations, and two adversarial training approaches that make direct use of PSRL. Our experiments demonstrate both the efficacy of the proposed approach for certifying safety in adversarial environments, and the value of the PSRL framework coupled with adversarial training in improving certified safety while preserving high nominal reward and high-quality predictions of the true state.
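
The stated objective can be read as maximizing E[∑_t γ^t r(s_t, a_t)] subject to s_t ∈ S_safe for all t, where s_t is the true (hidden) POMDP state and the policy only observes high-dimensional inputs o_t. As a rough illustration of the PSRL training idea, the sketch below assumes a PyTorch-style setup; the names (PSRLAgent, lambda_sup) and the REINFORCE-style loss are hypothetical placeholders, not the authors' exact architecture or objective.

# Minimal PSRL training sketch (assumed PyTorch setup; illustrative only,
# not the paper's exact method).
import torch
import torch.nn as nn

OBS_DIM, STATE_DIM, N_ACTIONS = 64, 4, 3

class PSRLAgent(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU())
        # Head that predicts the low-dimensional true state from the raw
        # observation; safety, defined on true state, is checked on its output.
        self.state_head = nn.Linear(128, STATE_DIM)
        # Policy acts on the predicted state rather than the raw observation.
        self.policy_head = nn.Linear(STATE_DIM, N_ACTIONS)

    def forward(self, obs):
        s_hat = self.state_head(self.encoder(obs))
        return s_hat, self.policy_head(s_hat)

agent = PSRLAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)
lambda_sup = 1.0  # weight on the supervised state loss (assumed hyperparameter)

# One illustrative update on a dummy batch; true_state is available only at
# training time, which is the extra assumption PSRL exploits.
obs = torch.randn(32, OBS_DIM)
true_state = torch.randn(32, STATE_DIM)
action = torch.randint(0, N_ACTIONS, (32,))
ret = torch.randn(32)  # placeholder discounted returns

s_hat, logits = agent(obs)
log_prob = torch.distributions.Categorical(logits=logits).log_prob(action)
rl_loss = -(log_prob * ret).mean()                    # REINFORCE-style term
sup_loss = nn.functional.mse_loss(s_hat, true_state)  # supervision on state

opt.zero_grad()
(rl_loss + lambda_sup * sup_loss).backward()
opt.step()

Under these assumptions, adversarial training would perturb obs (e.g., within an l-infinity ball) before the forward pass, and one natural certification route is to bound how far the predicted state s_hat can move under such perturbations.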

Related Research

02/04/2023
Certified Robust Control under Adversarial Perturbations
Autonomous systems increasingly rely on machine learning techniques to t...

05/29/2022
On the Robustness of Safe Reinforcement Learning under Observational Perturbations
Safe reinforcement learning (RL) trains a policy to maximize the task re...

06/17/2021
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing
We present the first framework of Certifying Robust Policies for reinfor...

07/29/2021
Lyapunov-based uncertainty-aware safe reinforcement learning
Reinforcement learning (RL) has shown a promising performance in learnin...

10/02/2022
Safe Reinforcement Learning From Pixels Using a Stochastic Latent Representation
We address the problem of safe reinforcement learning from pixel observa...

02/21/2023
Conditioning Hierarchical Reinforcement Learning on Flexible Constraints
Safety in goal directed Reinforcement Learning (RL) settings has typical...