Safe Policy Improvement in Constrained Markov Decision Processes

10/20/2022
by Luigi Berducci, et al.

The automatic synthesis of a policy through reinforcement learning (RL) from a given set of formal requirements depends on the construction of a reward signal and consists of the iterative application of many policy-improvement steps. The synthesis algorithm must balance target, safety, and comfort requirements within a single objective and guarantee that each policy-improvement step does not increase the number of safety-requirement violations, which is especially important in safety-critical applications. In this work, we present a solution to the synthesis problem that addresses its two main challenges: reward shaping from a set of formal requirements and safe policy update. For the former, we propose an automatic reward-shaping procedure that defines a scalar reward signal compliant with the task specification. For the latter, we introduce an algorithm that ensures the policy is improved in a safe fashion, with high-confidence guarantees. We also discuss the adoption of a model-based RL algorithm to use the collected data efficiently and train a model-free agent on predicted trajectories, where safety violations do not have the same impact as in the real world. Finally, we demonstrate on standard control benchmarks that the resulting learning procedure is effective and robust even under heavy perturbations of the hyperparameters.
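The key idea behind a safe policy update is an acceptance test: a candidate policy replaces the current one only if, with high confidence, it does not increase the rate of safety-requirement violations. The sketch below illustrates that idea with a simple Hoeffding-style confidence interval on the empirical violation rate; the function names, the specific bound, and the confidence parameter `delta` are illustrative assumptions and not the procedure from the paper. In a model-based setting, the candidate's evaluation episodes can come from trajectories predicted by the learned model, so violations observed during this check need not occur in the real environment.

```python
import math
from typing import Sequence


def violation_upper_bound(violations: Sequence[bool], delta: float) -> float:
    """Hoeffding upper confidence bound on the true violation probability.

    violations[i] is True if episode i violated any safety requirement.
    The bound holds with probability at least 1 - delta (illustrative
    choice of concentration inequality, not the one used in the paper).
    """
    n = len(violations)
    empirical = sum(violations) / n
    return empirical + math.sqrt(math.log(1.0 / delta) / (2.0 * n))


def violation_lower_bound(violations: Sequence[bool], delta: float) -> float:
    """Hoeffding lower confidence bound on the true violation probability."""
    n = len(violations)
    empirical = sum(violations) / n
    return max(0.0, empirical - math.sqrt(math.log(1.0 / delta) / (2.0 * n)))


def accept_update(current_episodes: Sequence[bool],
                  candidate_episodes: Sequence[bool],
                  delta: float = 0.05) -> bool:
    """Accept the candidate policy only if, with high confidence, its
    violation rate does not exceed that of the current policy.

    If the candidate's upper bound lies below the current policy's lower
    bound, then (with probability at least 1 - 2*delta) the candidate's
    true violation rate is no worse than the current one.
    """
    return (violation_upper_bound(candidate_episodes, delta)
            <= violation_lower_bound(current_episodes, delta))
```

A rejected candidate is simply discarded and the current policy is kept, so the monitored violation rate never degrades across accepted updates; the statistical strength of the guarantee is traded against sample size through `delta` and the number of evaluation episodes.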


Related research

- Reinforcement Learning by Guided Safe Exploration (07/26/2023)
- Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm (10/14/2022)
- Model-Free Learning of Safe yet Effective Controllers (03/26/2021)
- Safe Reinforcement Learning in Constrained Markov Decision Processes (08/15/2020)
- Multi-Objective SPIBB: Seldonian Offline Policy Improvement with Safety Constraints in Finite MDPs (05/31/2021)
- Constrained Policy Optimization via Bayesian World Models (01/24/2022)
- On Assessing The Safety of Reinforcement Learning algorithms Using Formal Methods (11/08/2021)
