Lyapunov Barrier Policy Optimization

03/16/2021
by Harshit Sikchi et al.

Deploying Reinforcement Learning (RL) agents in the real world requires that they satisfy safety constraints. Current RL agents explore the environment without considering these constraints, which can lead to damage to the hardware or even to other agents in the environment. We propose a new method, LBPO, that uses a Lyapunov-based barrier function to restrict the policy update to a safe set at each training iteration. Our method also lets the user control how conservative the agent is with respect to the constraints in the environment. LBPO significantly outperforms state-of-the-art baselines in the number of constraint violations during training while remaining competitive in performance. Further, our analysis reveals that baselines such as CPO and SDDPG rely mostly on backtracking to ensure safety rather than on safe projection, which provides insight into why previous methods might not effectively limit the number of constraint violations.
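The core idea of a barrier-based safe update can be illustrated with a small sketch. The snippet below is a hypothetical, simplified illustration (not the LBPO authors' implementation): it augments a reward objective with a log-barrier on a Lyapunov safety margin, so the objective diverges to negative infinity as the policy approaches the boundary of the safe set, and a coefficient `beta` (an assumed knob, analogous to the paper's conservativeness control) scales how strongly the barrier repels the update from the boundary.

```python
import math

def barrier_penalized_objective(reward_obj, lyapunov_margin, beta=1.0):
    """Toy barrier-augmented objective for a single policy-update step.

    reward_obj:      scalar surrogate reward objective for the candidate update
    lyapunov_margin: distance of the candidate policy from violating the
                     Lyapunov safety condition (positive inside the safe set)
    beta:            barrier weight; larger values make the agent more
                     conservative near the constraint boundary

    The log-barrier term tends to -inf as the margin shrinks to zero,
    so candidate updates that leave the safe set are never preferred.
    """
    if lyapunov_margin <= 0:
        # Outside the safe set: reject the candidate outright.
        return float("-inf")
    return reward_obj + beta * math.log(lyapunov_margin)

# A safer candidate (larger margin) scores higher for the same reward:
safe = barrier_penalized_objective(1.0, lyapunov_margin=0.5)
risky = barrier_penalized_objective(1.0, lyapunov_margin=0.01)
```

In this toy form, maximizing the penalized objective over candidate updates keeps the chosen update strictly inside the safe set, which is the contrast the abstract draws with backtracking-based baselines that first step and then undo unsafe updates.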


Related research

09/29/2022
Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments
It is quite challenging to ensure the safety of reinforcement learning (...

04/24/2021
Constraint-Guided Reinforcement Learning: Augmenting the Agent-Environment-Interaction
Reinforcement Learning (RL) agents have great successes in solving tasks...

11/30/2022
Safe Model-Free Reinforcement Learning using Disturbance-Observer-Based Control Barrier Functions
Safe reinforcement learning (RL) with assured satisfaction of hard state...

12/06/2022
Safe Inverse Reinforcement Learning via Control Barrier Function
Learning from Demonstration (LfD) is a powerful method for enabling robo...

05/15/2017
Probabilistically Safe Policy Transfer
Although learning-based methods have great potential for robotics, one c...

06/20/2020
Accelerating Safe Reinforcement Learning with Constraint-mismatched Policies
We consider the problem of reinforcement learning when provided with a b...

01/07/2020
Blue River Controls: A toolkit for Reinforcement Learning Control Systems on Hardware
We provide a simple hardware wrapper around the Quanser's hardware-in-th...
