Lyapunov-based uncertainty-aware safe reinforcement learning

07/29/2021
by Ashkan B. Jeddi et al.

Reinforcement learning (RL) has shown promising performance in learning optimal policies for a variety of sequential decision-making tasks. However, in many real-world RL problems, besides optimizing the main objective, the agent is expected to satisfy a certain level of safety (e.g., avoiding collisions in autonomous driving). While RL problems are commonly formalized as Markov decision processes (MDPs), safety constraints are incorporated via constrained Markov decision processes (CMDPs). Although recent advances in safe RL have enabled learning safe policies in CMDPs, these safety requirements must be satisfied both during training and at deployment. Furthermore, it has been shown that in memory-based and partially observable environments, these methods fail to maintain safety under unseen, out-of-distribution observations. To address these limitations, we propose a Lyapunov-based uncertainty-aware safe RL model. The introduced model adopts a Lyapunov function that converts trajectory-based constraints into a set of local linear constraints. In addition, to ensure the safety of the agent in highly uncertain environments, an uncertainty quantification method is developed that identifies risk-averse actions by estimating the probability of constraint violation. Moreover, a Transformer model is integrated to provide the agent with memory, processing long horizons of information via the self-attention mechanism. The proposed model is evaluated on grid-world navigation tasks where safety is defined as avoiding static and dynamic obstacles in fully and partially observable environments. The results show a significant improvement in the agent's performance, both in achieving optimality and in satisfying safety constraints.
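
For context, the trajectory-based constraint and its Lyapunov-induced local form can be sketched as follows. This follows the general formulation of the Lyapunov-based safe RL line of work cited in the related research below; the notation is illustrative and not necessarily the authors' exact formulation.

```latex
% CMDP: maximize expected return subject to a budget d_0 on the
% expected cumulative constraint cost d(x_t)
\max_{\pi}\ \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t}\, r(x_t, a_t)\right]
\quad\text{s.t.}\quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} d(x_t)\right]\le d_0

% A Lyapunov function L turns this trajectory-based constraint into a
% local linear constraint on the policy at each state x:
\sum_{a}\pi(a\mid x)\, Q_{L}(x,a)\ \le\ L(x),
\qquad
Q_{L}(x,a) \;=\; d(x) + \sum_{x'} P(x'\mid x, a)\, L(x').
```

Because the left-hand side is linear in the action probabilities, safety can be enforced state by state, e.g., by projecting each policy update onto the feasible set.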
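The uncertainty-aware part can be illustrated with a minimal sketch: estimate the probability that an action's safety cost exceeds the remaining budget, then choose greedily among the actions that clear a risk threshold. The function name, the ensemble/MC-dropout safety critic, and the thresholding rule below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def risk_averse_action(q_values, qc_samples, budget, eps=0.05):
    """Select the best action whose estimated probability of violating
    the safety budget stays below the risk tolerance `eps`.

    q_values   : (n_actions,) reward Q-value estimates
    qc_samples : (n_samples, n_actions) sampled safety-cost Q-values,
                 e.g. from an ensemble or MC-dropout safety critic
    budget     : remaining constraint budget at the current state
    """
    # Empirical probability that each action's cost exceeds the budget.
    p_violation = (qc_samples > budget).mean(axis=0)
    safe = p_violation <= eps
    if not safe.any():
        # No action clears the risk threshold: take the least risky one.
        return int(np.argmin(p_violation))
    # Among the safe actions, pick the one with the highest return.
    return int(np.argmax(np.where(safe, q_values, -np.inf)))

# Toy usage: 4 actions, 32 posterior samples of the safety-cost critic.
rng = np.random.default_rng(0)
action = risk_averse_action(
    q_values=rng.normal(size=4),
    qc_samples=rng.normal(loc=1.0, scale=0.3, size=(32, 4)),
    budget=1.2,
    eps=0.1,
)
```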
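Finally, a minimal PyTorch sketch of the memory component: a self-attention encoder over the observation history, in the spirit of the Transformer described above. The class name, layer sizes, and the choice to read out the last time step are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    """Self-attention memory over an observation history (a sketch;
    the architecture details are assumptions, not the paper's)."""

    def __init__(self, obs_dim, d_model=64, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        # Learned positional embeddings so attention can use time order.
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, obs_history):
        # obs_history: (batch, time, obs_dim), with time <= max_len
        t = obs_history.size(1)
        h = self.embed(obs_history) + self.pos[:, :t]
        h = self.encoder(h)
        # Use the representation of the latest step as the policy input.
        return h[:, -1]

# Toy usage: a batch of 8 histories, 16 steps of 10-dim observations.
enc = HistoryEncoder(obs_dim=10)
summary = enc(torch.randn(8, 16, 10))  # -> (8, 64)
```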


Related research

05/20/2018 · A Lyapunov-based Approach to Safe Reinforcement Learning
In many real-world reinforcement learning (RL) problems, besides optimiz...

09/19/2023 · Safe POMDP Online Planning via Shielding
Partially observable Markov decision processes (POMDPs) have been widely...

07/16/2023 · POMDP inference and robust solution via deep reinforcement learning: An application to railway optimal maintenance
Partially Observable Markov Decision Processes (POMDPs) can model comple...

10/02/2022 · Safe Reinforcement Learning From Pixels Using a Stochastic Latent Representation
We address the problem of safe reinforcement learning from pixel observa...

11/14/2021 · Explicit Explore, Exploit, or Escape (E^4): near-optimal safety-constrained reinforcement learning in polynomial time
In reinforcement learning (RL), an agent must explore an initially unkno...

02/21/2023 · Conditioning Hierarchical Reinforcement Learning on Flexible Constraints
Safety in goal directed Reinforcement Learning (RL) settings has typical...

10/08/2020 · Adaptive Shielding under Uncertainty
This paper targets control problems that exhibit specific safety and per...
