Safe Reinforcement Learning via Shielding for POMDPs

04/02/2022
by   Steven Carr, et al.

Reinforcement learning (RL) in safety-critical environments requires an agent to avoid decisions with catastrophic consequences. Various approaches exist to mitigate this problem by addressing the safety of RL. In particular, so-called shields provide formal safety guarantees on the behavior of RL agents based on (partial) models of the agents' environment. Yet, the state of the art generally assumes that agents have perfect sensing capabilities, which is unrealistic in real-life applications. The standard models for capturing scenarios with limited sensing are partially observable Markov decision processes (POMDPs). Safe RL for these models has so far remained an open problem. We propose and thoroughly evaluate a tight integration of formally verified shields for POMDPs with state-of-the-art deep RL algorithms, yielding an effective method that safely learns policies under partial observability. We empirically demonstrate that an RL agent using a shield, beyond being safe, converges to higher values of expected reward. Moreover, shielded agents need an order of magnitude fewer training episodes than unshielded agents, especially in challenging sparse-reward settings.
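To make the idea concrete, the sketch below shows how a shield can restrict an agent's action choice under partial observability: the shield maps the current belief support (the set of states consistent with the observation history) to a set of actions verified safe on the POMDP model, and the learner may only pick from that set. This is a minimal illustration under assumed names (Shield, shielded_action, agent_policy); it is not the authors' implementation or any particular library's API.

```python
# Minimal sketch of shield-based action masking under partial observability.
# All identifiers here are illustrative assumptions, not the paper's code.

import random


class Shield:
    """Maps a belief support (set of states the agent may be in) to safe actions."""

    def __init__(self, safe_table, all_actions):
        # safe_table: frozenset of states -> set of actions verified safe on the POMDP model
        # (assumed to be precomputed by formal verification).
        self.safe_table = safe_table
        self.all_actions = set(all_actions)

    def allowed(self, belief_support):
        # Simplifying assumption: fall back to all actions for unanalysed supports;
        # a real shield is defined over every reachable belief support.
        return self.safe_table.get(frozenset(belief_support), self.all_actions)


def shielded_action(agent_policy, shield, belief_support, observation):
    """Restrict the learning agent's choice to actions the shield permits."""
    allowed = shield.allowed(belief_support)
    proposed = agent_policy(observation)
    # Override the agent only when its proposal is unsafe.
    return proposed if proposed in allowed else random.choice(sorted(allowed))


# Toy usage: action 1 is unsafe whenever the state "near_cliff" is still possible.
shield = Shield({frozenset({"near_cliff", "safe"}): {0}}, all_actions=[0, 1])
policy = lambda obs: 1  # an untrained policy that always proposes the risky action
print(shielded_action(policy, shield, {"near_cliff", "safe"}, observation=None))  # -> 0
```

In practice the masking can also be applied inside the deep RL algorithm itself (e.g., by restricting the policy's output distribution to the allowed actions), so the agent never explores formally unsafe behavior during training.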


