
Safe Model-Based Reinforcement Learning with an Uncertainty-Aware Reachability Certificate

by Dongjie Yu, et al.

Safe reinforcement learning (RL), which seeks policies that satisfy constraints, offers a promising path toward broader safety-critical applications of RL in real-world problems such as robotics. Among safe RL approaches, model-based methods further reduce training-time violations thanks to their high sample efficiency. However, a lack of robustness against model uncertainties remains an issue in safe model-based RL, especially for training-time safety. In this paper, we propose a distributional reachability certificate (DRC) and its Bellman equation to address model uncertainties and characterize robust, persistently safe states. Furthermore, we build a safe RL framework that resolves the constraints required by the DRC and its corresponding shield policy. We also devise a line-search method that maintains safety while reaching higher returns when leveraging the shield policy. Comprehensive experiments on classical benchmarks such as constrained tracking and navigation show that the proposed algorithm achieves comparable returns with far fewer constraint violations during training.
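The abstract describes blending a shield policy with the task policy via a line search: keep as much of the task action as possible while a reachability certificate still judges the blended action safe. The sketch below is a minimal, hypothetical illustration of that idea only; `drc` is a toy stand-in (box-constraint distance under linear dynamics) for the paper's learned distributional reachability certificate, and the names, dynamics, and thresholds are assumptions, not the authors' implementation.

```python
import numpy as np

def drc(state, action):
    # Hypothetical stand-in for the learned distributional reachability
    # certificate: returns an estimate of future constraint violation,
    # where a value <= 0 means the state-action pair is judged
    # persistently safe. Toy model: distance of the next position from
    # the safe box [-1, 1] under linear dynamics s' = s + 0.1 * a.
    next_state = state + 0.1 * action
    return float(np.max(np.abs(next_state)) - 1.0)

def shielded_action(state, task_action, safe_action, n_steps=10):
    """Line search between the task policy's action and the shield
    policy's action: return the blend with the largest weight on the
    task action that still satisfies the certificate constraint."""
    for k in range(n_steps + 1):
        alpha = 1.0 - k / n_steps      # weight on the task action
        a = alpha * task_action + (1.0 - alpha) * safe_action
        if drc(state, a) <= 0.0:       # certificate accepts the blend
            return a
    return safe_action                 # fall back to the shield policy
```

In safe states the search terminates immediately and the task action passes through unchanged; near the constraint boundary, the weight on the task action is backed off only as far as the certificate requires, which is how the line search can preserve returns while maintaining safety.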



Code Repositories


Safe Model-based RL with Reachability Certificate
