Look Before You Leap: Safe Model-Based Reinforcement Learning with Human Intervention

11/10/2021
by Yunkun Xu et al.

Safety has become one of the main challenges of applying deep reinforcement learning to real-world systems. Currently, incorporating external knowledge such as human oversight is the only means of preventing the agent from visiting catastrophic states. In this paper, we propose MBHI, a novel framework for safe model-based reinforcement learning that ensures safety at the state level and can effectively avoid both "local" and "non-local" catastrophes. In MBHI, an ensemble of supervised learners is trained to imitate human blocking decisions. Mirroring the human decision-making process, MBHI rolls out an imagined trajectory in the dynamics model before executing actions in the environment and estimates its safety. When the imagination encounters a catastrophe, MBHI blocks the current action and uses an efficient MPC method to output a safe policy. We evaluate our method on several safety tasks, and the results show that MBHI achieves better performance in terms of sample efficiency and number of catastrophes compared to the baselines.
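The abstract's "look before you leap" loop can be sketched in a few lines: imagine a short trajectory under the learned dynamics model, let an ensemble of learned blockers vote on whether it reaches a catastrophe, and fall back to a simple shooting-style MPC search when the proposed action is blocked. The sketch below is a minimal toy illustration under stated assumptions; the names (`dynamics_model`, `blockers`, `mpc_safe_action`), the scalar state, and the no-op tail policy are all hypothetical and not the authors' actual implementation.

```python
def rollout_is_safe(state, action, dynamics_model, blockers,
                    horizon=5, threshold=0.5):
    """Imagine a trajectory in the dynamics model and flag it unsafe
    if the ensemble of blockers (imitating human blocking decisions)
    predicts a catastrophe along the way. Toy sketch: scalar state,
    no-op tail actions stand in for the agent's policy."""
    s, a = state, action
    for _ in range(horizon):
        # Average the blocking votes of the ensemble members.
        p_block = sum(b(s, a) for b in blockers) / len(blockers)
        if p_block > threshold:
            return False  # imagined catastrophe: block this action
        s = dynamics_model(s, a)
        a = 0.0  # placeholder no-op action for the imagined tail
    return True

def mpc_safe_action(state, candidates, dynamics_model, blockers, horizon=5):
    """Simple shooting-style MPC fallback: return the first candidate
    action whose imagined rollout stays catastrophe-free."""
    for a in candidates:
        if rollout_is_safe(state, a, dynamics_model, blockers, horizon):
            return a
    return None  # no safe candidate found
```

For example, with dynamics `s' = s + a` and a blocker that fires whenever the next state exceeds 3, an action of 4.0 from state 0.0 is blocked, and the MPC fallback instead selects a smaller candidate whose rollout stays safe.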


Related research

- Improving Safety in Reinforcement Learning Using Model-Based Architectures and Human Intervention (03/22/2019): Recent progress in AI and Reinforcement learning has shown great success...

- Don't do it: Safer Reinforcement Learning With Rule-based Guidance (12/28/2022): During training, reinforcement learning systems interact with the world ...

- Bellman Meets Hawkes: Model-Based Reinforcement Learning via Temporal Point Processes (01/29/2022): We consider a sequential decision making problem where the agent faces t...

- Deep Reinforcement Learning for Safe Landing Site Selection with Concurrent Consideration of Divert Maneuvers (02/24/2021): This research proposes a new integrated framework for identifying safe l...

- Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization (02/17/2022): Human intervention is an effective way to inject human knowledge into th...

- Do Androids Dream of Electric Fences? Safety-Aware Reinforcement Learning with Latent Shielding (12/21/2021): The growing trend of fledgling reinforcement learning systems making the...

- SAMBA: Safe Model-Based Active Reinforcement Learning (06/12/2020): In this paper, we propose SAMBA, a novel framework for safe reinforcemen...
