Safe Model-Free Reinforcement Learning using Disturbance-Observer-Based Control Barrier Functions

11/30/2022
by Yikun Cheng, et al.

Safe reinforcement learning (RL) with assured satisfaction of hard state constraints during training has recently received a lot of attention. Safety filters, e.g., based on control barrier functions (CBFs), provide a promising way to achieve safe RL by modifying the unsafe actions of an RL agent on the fly. Existing safety filter-based approaches typically involve learning the uncertain dynamics and quantifying the learned model error, which leads to conservative filters until a large amount of data has been collected to learn a good model, thereby preventing efficient exploration. This paper presents a method for safe and efficient model-free RL using disturbance observers (DOBs) and control barrier functions (CBFs). Unlike most existing safe RL methods that deal with hard state constraints, our method does not involve model learning; instead, it leverages DOBs to accurately estimate the pointwise value of the uncertainty, which is then incorporated into a robust CBF condition to generate safe actions. The DOB-based CBF can be used as a safety filter with any model-free RL algorithm by minimally modifying the actions of an RL agent whenever necessary to ensure safety throughout the learning process. Simulation results on a unicycle and a 2D quadrotor demonstrate that the proposed method outperforms a state-of-the-art safe RL algorithm using CBFs and Gaussian process-based model learning, in terms of safety violation rate, and sample and computational efficiency.
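The abstract's core idea — a disturbance observer estimating the uncertainty pointwise, fed into a robust CBF condition that minimally modifies the RL action — can be sketched for a scalar toy system. This is an illustrative assumption, not the paper's formulation: the system, observer gain `L_obs`, class-K coefficient `alpha`, and estimation-error bound `err_bound` are all made up for the example, and for this 1D system the CBF "QP" reduces to a closed-form clamp on the control input.

```python
import numpy as np

# Hedged sketch (not the paper's exact method): DOB-based CBF safety filter
# for the scalar system  x_dot = u + d,  with safe set {x <= x_max}
# encoded by the CBF  h(x) = x_max - x.

def dob_update(d_hat, x_dot_meas, u, L_obs, dt):
    """First-order disturbance observer: drive d_hat toward the residual
    between the measured and the nominal (disturbance-free) dynamics."""
    return d_hat + L_obs * (x_dot_meas - u - d_hat) * dt

def safety_filter(u_rl, x, d_hat, x_max=1.0, alpha=2.0, err_bound=0.05):
    """Minimally modify u_rl so the robust CBF condition
    h_dot >= -alpha * h  holds for every disturbance within err_bound of
    the DOB estimate d_hat.  Here h_dot = -(u + d), so the condition is a
    simple upper bound on u and the QP has a closed-form solution."""
    h = x_max - x
    u_max = alpha * h - d_hat - err_bound
    return min(u_rl, u_max)

# Closed-loop check: an aggressive RL action pushes toward the boundary
# while an unknown constant disturbance d = 0.3 acts on the system.
x, d_hat, d_true, dt = 0.0, 0.0, 0.3, 0.01
for _ in range(2000):
    u_rl = 5.0                       # unsafe action from the RL agent
    u = safety_filter(u_rl, x, d_hat)
    x_dot = u + d_true               # true (uncertain) dynamics
    d_hat = dob_update(d_hat, x_dot, u, L_obs=20.0, dt=dt)
    x += x_dot * dt

print(f"final state x = {x:.3f} (constraint: x <= 1.0)")
```

The filter leaves the RL action untouched whenever it already satisfies the robust CBF condition, which is what "minimally modifying the actions" means in the abstract; the observer converges to the true disturbance without any model learning, so the margin `err_bound` can stay small and the filter non-conservative.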


