Safe Reinforcement Learning on Autonomous Vehicles

09/27/2019
by David Isele, et al.

There have been numerous advances in reinforcement learning, but the typically unconstrained exploration of the learning process prevents the adoption of these methods in many safety-critical applications. Recent work in safe reinforcement learning relies on idealized models to achieve its guarantees, but these models do not easily accommodate the stochasticity or high dimensionality of real-world systems. We investigate how prediction provides a general and intuitive framework to constrain exploration, and show how it can be used to safely learn intersection-handling behaviors on an autonomous vehicle.
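The abstract describes constraining exploration with a prediction model so the agent never executes an action that is forecast to lead to a collision. The paper itself does not include code; the Python sketch below is a hypothetical illustration of that general idea only, assuming user-supplied prediction functions (predict_ego, predict_traffic), a simplified wait/go action set, and a minimum-gap safety check. None of these names come from the paper, and the sketch should not be read as the authors' implementation.

import random

# Illustrative prediction-constrained exploration (not the paper's code):
# before the agent executes an exploratory action at an intersection, a
# forward prediction model checks whether the resulting ego trajectory
# would come too close to the predicted trajectories of other vehicles.
# Actions predicted to be unsafe are masked out, and a conservative
# fallback (wait) is used when no action is predicted to be safe.

WAIT, GO = 0, 1  # simplified intersection-handling action set


def predicted_safe(action, ego_state, traffic_states, predict_ego,
                   predict_traffic, horizon=20, min_gap=2.0):
    """Return True if the action keeps at least min_gap meters from every
    predicted traffic trajectory over the prediction horizon."""
    ego_traj = predict_ego(ego_state, action, horizon)            # list of (x, y)
    traffic_trajs = [predict_traffic(s, horizon) for s in traffic_states]
    for t in range(horizon):
        ex, ey = ego_traj[t]
        for traj in traffic_trajs:
            ox, oy = traj[t]
            if ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5 < min_gap:
                return False
    return True


def masked_epsilon_greedy(q_values, ego_state, traffic_states, predict_ego,
                          predict_traffic, epsilon=0.1):
    """Epsilon-greedy action selection restricted to the predicted-safe set,
    falling back to WAIT if no action is predicted safe."""
    actions = list(range(len(q_values)))
    safe_actions = [a for a in actions
                    if predicted_safe(a, ego_state, traffic_states,
                                      predict_ego, predict_traffic)]
    if not safe_actions:
        return WAIT  # conservative fallback when every action is flagged unsafe
    if random.random() < epsilon:
        return random.choice(safe_actions)                 # explore within the safe set
    return max(safe_actions, key=lambda a: q_values[a])    # exploit within the safe set

Restricting the exploratory choice to the predicted-safe set, rather than penalizing collisions after the fact, is what would allow learning to continue on a physical vehicle without unsafe exploratory actions; the guarantee is only as good as the fidelity of the prediction model.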

Related research

02/24/2021 · Towards Safe Continuing Task Reinforcement Learning
Safety is a critical feature of controller design for physical systems. ...

02/14/2019 · Verifiably Safe Off-Model Reinforcement Learning
The desire to use reinforcement learning in safety-critical settings has...

09/28/2022 · Guiding Safe Exploration with Weakest Preconditions
In reinforcement learning for safety-critical settings, it is often desi...

02/18/2019 · Parenting: Safe Reinforcement Learning from Human Input
Autonomous agents trained via reinforcement learning present numerous sa...

12/14/2021 · Cooperation for Scalable Supervision of Autonomy in Mixed Traffic
Improvements in autonomy offer the potential for positive outcomes in a ...

02/19/2022 · Multi-task Safe Reinforcement Learning for Navigating Intersections in Dense Traffic
Multi-task intersection navigation including the unprotected turning lef...

09/30/2022 · Safe Exploration Method for Reinforcement Learning under Existence of Disturbance
Recent rapid developments in reinforcement learning algorithms have been...
