
Synthesizing Safe Policies under Probabilistic Constraints with Reinforcement Learning and Bayesian Model Checking

05/08/2020, by Lenz Belzner et al.

In this paper we propose Policy Synthesis under probabilistic Constraints (PSyCo), a systematic engineering method for synthesizing safe policies under probabilistic constraints with reinforcement learning and Bayesian model checking. As an implementation of PSyCo, we introduce Safe Neural Evolutionary Strategies (SNES). SNES leverages Bayesian model checking while learning to adjust the Lagrangian of a constrained optimization problem derived from a PSyCo specification. We empirically evaluate SNES's ability to synthesize feasible policies in settings with formal safety requirements.
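The abstract's core idea, evolutionary policy search on a Lagrangian whose multiplier is adjusted by a Bayesian model-checking step, can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the authors' code: the function names, the Beta-posterior check, the multiplier update rule, and all numeric constants below are assumptions.

```python
import numpy as np

def beta_posterior_satisfies(successes, failures, p_req, confidence=0.9, samples=10_000):
    # Bayesian model checking (sketch): estimate P(policy is safe) with a
    # Beta posterior over observed safe/unsafe rollouts, then require that
    # enough posterior mass lies above the required probability p_req.
    draws = np.random.beta(1 + successes, 1 + failures, samples)
    return np.mean(draws >= p_req) >= confidence

def snes_like_step(theta, lam, rollout, sigma=0.1, pop=20, lr=0.05, p_req=0.95):
    # One evolutionary-strategies step on the Lagrangian
    #   L(theta) = return(theta) - lam * violation(theta).
    # `rollout` is a hypothetical helper returning (return, violated) for a
    # perturbed policy parameter vector.
    eps = np.random.randn(pop, theta.size)
    scores = np.empty(pop)
    safe, unsafe = 0, 0
    for i in range(pop):
        ret, violated = rollout(theta + sigma * eps[i])
        scores[i] = ret - lam * violated
        safe += (not violated)
        unsafe += violated
    # NES-style gradient estimate from standardized scores.
    ranks = (scores - scores.mean()) / (scores.std() + 1e-8)
    theta = theta + lr / (pop * sigma) * eps.T @ ranks
    # Adjust the Lagrange multiplier based on the Bayesian safety check:
    # relax the penalty when safety is certified, tighten it otherwise.
    if beta_posterior_satisfies(safe, unsafe, p_req):
        lam = max(0.0, lam * 0.95)
    else:
        lam = lam * 1.05 + 0.01
    return theta, lam
```

The design point this sketch captures is that the constraint is probabilistic: the multiplier is not updated from a single violation signal but from a posterior over the policy's satisfaction probability, so the penalty only relaxes once safety is certified with high confidence.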


An Abstraction-based Method to Check Multi-Agent Deep Reinforcement-Learning Behaviors (02/02/2021)
Multi-agent reinforcement learning (RL) often struggles to ensure the sa...

COOL-MC: A Comprehensive Tool for Reinforcement Learning and Model Checking (09/15/2022)
This paper presents COOL-MC, a tool that integrates state-of-the-art rei...

Synthesis of Parametric Programs using Genetic Programming and Model Checking (02/27/2014)
Formal methods apply algorithms based on mathematical principles to enha...

Probabilistic Model Checking for Complex Cognitive Tasks -- A case study in human-robot interaction (10/28/2016)
This paper proposes to use probabilistic model checking to synthesize op...

Monotonic Safety for Scalable and Data-Efficient Probabilistic Safety Analysis (11/04/2021)
Autonomous systems with machine learning-based perception can exhibit un...

Policy Gradients for Probabilistic Constrained Reinforcement Learning (10/02/2022)
This paper considers the problem of learning safe policies in the contex...

Learning Probabilistic Temporal Safety Properties from Examples in Relational Domains (11/07/2022)
We propose a framework for learning a fragment of probabilistic computat...