Online Shielding for Reinforcement Learning

12/04/2022
by Bettina Könighofer, et al.

Despite the recent impressive results in reinforcement learning (RL), safety remains one of its major research challenges. RL is a machine-learning approach for determining near-optimal policies in Markov decision processes (MDPs). In this paper, we consider the setting where the safety-relevant fragment of the MDP together with a temporal logic safety specification is given, and many safety violations can be avoided by planning a short time into the future. We propose an approach for online safety shielding of RL agents. During runtime, the shield analyses the safety of each available action. For each action, the shield computes the maximal probability of not violating the safety specification within the next k steps when executing this action. Based on this probability and a given threshold, the shield decides whether to block the action from the agent. Existing offline shielding approaches exhaustively compute the safety of all state-action combinations ahead of time, resulting in long computation times and high memory consumption. The intuition behind online shielding is to compute at runtime the set of all states that could be reached in the near future. For each of these states, the safety of all available actions is analysed and used for shielding as soon as one of the considered states is reached. Our approach is well suited for high-level planning problems where the time between decisions can be used for safety computations and it is feasible for the agent to wait until these computations have finished. For our evaluation, we selected a 2-player version of the classical computer game SNAKE. The game represents a high-level planning problem that requires fast decisions, and the multiplayer setting induces a large state space that is computationally expensive to analyse exhaustively.
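To illustrate the shielding decision described above, the following is a minimal sketch, not the authors' implementation: it assumes a toy dictionary representation of the safety-relevant MDP fragment and computes, by naive recursion, the maximal probability of avoiding unsafe states within the next k steps for each action, blocking actions that fall below a threshold. The names `max_safety_probability` and `shield`, and the `Transitions` layout, are illustrative assumptions; a real shield would query a probabilistic model checker on the MDP fragment instead.

```python
from typing import Dict, List, Tuple

# Assumed toy representation of the safety-relevant MDP fragment:
# state -> action -> list of (successor state, transition probability).
Transitions = Dict[str, Dict[str, List[Tuple[str, float]]]]


def max_safety_probability(transitions: Transitions, unsafe: set,
                           state: str, action: str, k: int) -> float:
    """Maximal probability of avoiding unsafe states within the next k steps
    when executing `action` in `state` and acting optimally afterwards."""
    if state in unsafe:
        return 0.0
    if k == 0:
        return 1.0
    prob = 0.0
    for succ, p in transitions[state][action]:
        if succ in unsafe:
            continue  # this branch violates the safety specification
        if k == 1 or not transitions.get(succ):
            prob += p  # horizon reached or terminal: successor is safe
        else:
            # Act optimally in the successor: take the safest available action.
            prob += p * max(
                max_safety_probability(transitions, unsafe, succ, a, k - 1)
                for a in transitions[succ]
            )
    return prob


def shield(transitions: Transitions, unsafe: set, state: str,
           k: int, threshold: float) -> List[str]:
    """Actions the shield allows in `state`: those whose k-step
    safety probability meets the given threshold."""
    return [a for a in transitions[state]
            if max_safety_probability(transitions, unsafe, state, a, k) >= threshold]
```

In the online setting, such safety values would be computed for the states reachable in the near future while the agent is still deliberating, so that the allowed-action sets are ready by the time one of those states is actually reached.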

