- Safe Reinforcement Learning on Autonomous Vehicles
There have been numerous advances in reinforcement learning, but the typ...
- Context-Aware Safe Reinforcement Learning for Non-Stationary Environments
Safety is a critical concern when deploying reinforcement learning agent...
- Probabilistic Guarantees for Safe Deep Reinforcement Learning
Deep reinforcement learning has been successfully applied to many contro...
- It's Time to Play Safe: Shield Synthesis for Timed Systems
Erroneous behaviour in safety critical real-time systems may inflict ser...
- Case Study: Verifying the Safety of an Autonomous Racing Car with a Neural Network Controller
This paper describes a verification case study on an autonomous racing c...
- Safe Reinforcement Learning with Stability Safety Guarantees Using Robust MPC
Reinforcement Learning offers tools to optimize policies based on the da...
- An Inductive Synthesis Framework for Verifiable Reinforcement Learning
Despite the tremendous advances that have been made in the last decade o...
Verifiably Safe Off-Model Reinforcement Learning
The desire to use reinforcement learning in safety-critical settings has inspired a recent interest in formal methods for learning algorithms. Existing formal methods for learning and optimization primarily consider the problem of constrained learning or constrained optimization. Given a single correct model and associated safety constraint, these approaches guarantee efficient learning while provably avoiding behaviors outside the safety constraint. Acting well given an accurate environmental model is an important prerequisite for safe learning, but is ultimately insufficient for systems that operate in complex heterogeneous environments. This paper introduces verification-preserving model updates, the first approach toward obtaining formal safety guarantees for reinforcement learning in settings where multiple environmental models must be taken into account. Through a combination of design-time model updates and runtime model falsification, we take a first step toward formal safety proofs for autonomous systems acting in heterogeneous environments.
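The runtime half of this idea can be illustrated with a toy sketch: keep a set of candidate environment models, falsify those that disagree with observations, and only pass the learner's action through when it is safe under every surviving model. This is a minimal illustration under strong simplifying assumptions (one-dimensional dynamics, a toy safety bound of |next state| ≤ 1); the names `Model`, `falsify`, and `shielded_action` are hypothetical and are not the paper's API.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Model:
    """A candidate environment model with a verified safe fallback action."""
    name: str
    predict: Callable[[float, float], float]   # next_state = predict(state, action)
    safe_action: Callable[[float], float]      # verified-safe fallback for this model


def falsify(models: List[Model], state: float, action: float,
            observed_next: float, tol: float = 1e-3) -> List[Model]:
    """Discard models whose predictions disagree with the observed transition."""
    return [m for m in models
            if abs(m.predict(state, action) - observed_next) <= tol]


def shielded_action(models: List[Model], proposed: float, state: float) -> float:
    """Accept the learner's proposal only if it is safe under every surviving
    model (toy safety bound: |next state| <= 1); otherwise fall back to the
    most conservative verified action among the surviving models."""
    if all(abs(m.predict(state, proposed)) <= 1.0 for m in models):
        return proposed
    return min((m.safe_action(state) for m in models), key=abs)


# Example: a dry-road and an icy-road dynamics hypothesis; one real observation
# from dry-road dynamics falsifies the icy-road model.
models = [Model("dry", lambda s, a: s + a, lambda s: -s),
          Model("ice", lambda s, a: s + 0.5 * a, lambda s: -0.5 * s)]
survivors = falsify(models, state=0.2, action=0.1, observed_next=0.3)
# survivors now contains only the "dry" model
```

The point of the sketch is the division of labor the abstract describes: safety of each model's fallback controller is established at design time, while runtime observations only ever shrink the model set, so the shield stays sound as the environment is narrowed down.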