Limits of Probabilistic Safety Guarantees when Considering Human Uncertainty

03/05/2021
by Richard Cheng, et al.

When autonomous robots interact with humans, such as during autonomous driving, explicit safety guarantees are crucial to avoid potentially life-threatening accidents. Many data-driven methods have explored learning probabilistic bounds over human agents' trajectories (i.e., confidence tubes that contain trajectories with probability 1-δ), which can then be used to guarantee safety with probability 1-δ. However, almost all existing works consider δ ≥ 0.001. The purpose of this paper is to argue that (1) in safety-critical applications, it is necessary to provide safety guarantees with δ < 10^-8, and (2) current learning-based methods are ill-equipped to compute accurate confidence bounds at such low δ. Using human driving data (from the highD dataset), as well as synthetically generated data, we show that current uncertainty models rely on inaccurate distributional assumptions to describe human behavior and/or require infeasible amounts of data to accurately learn confidence bounds for δ ≤ 10^-8. These two issues result in unreliable confidence bounds, which can have dangerous implications if deployed on safety-critical systems.
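
As a rough illustration of why such small δ is demanding, the Python sketch below (written for this summary, not taken from the paper) uses SciPy to show the two effects the abstract points to: the roughly 1/δ sample cost of empirically validating a tail bound, and how a Gaussian model versus a heavy-tailed Student-t model (df = 3, an illustrative choice) diverge in the far tail.

```python
from scipy import stats

# Back-of-the-envelope sketch (illustrative assumptions, not results from
# the paper).

# (1) Sample complexity: without distributional assumptions, a tail event
# of probability delta shows up about once per 1/delta i.i.d. samples, so
# empirically validating a confidence bound at delta = 1e-8 would take on
# the order of 1e8 independent trajectories.
for delta in (1e-3, 1e-8):
    n = round(1.0 / delta)
    print(f"delta = {delta:g}: ~{n:.0e} samples to expect one tail event")

# (2) Distributional misspecification: a Gaussian model and a heavy-tailed
# Student-t model (df = 3, an illustrative choice) can agree near the mean
# yet differ by orders of magnitude in the far tail that a delta <= 1e-8
# bound depends on.
z = 6.0  # an illustrative 6-sigma deviation
print(f"P(X > {z} sigma): Gaussian {stats.norm.sf(z):.1e} "
      f"vs Student-t(3) {stats.t.sf(z, df=3):.1e}")
```

Under these illustrative numbers, the two models disagree on the probability of a 6-sigma deviation by more than six orders of magnitude, which is the tail regime that a δ ≤ 10^-8 guarantee must get right.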

Related research

Learning-Based Safety-Stability-Driven Control for Safety-Critical Systems under Model Uncertainties (08/08/2020)
  Safety and tracking stability are crucial for safety-critical systems su...

Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning (03/22/2018)
  Learning-based methods have been successful in solving complex control t...

Cooperation for Scalable Supervision of Autonomy in Mixed Traffic (12/14/2021)
  Improvements in autonomy offer the potential for positive outcomes in a ...

Architectural patterns for handling runtime uncertainty of data-driven models in safety-critical perception (06/14/2022)
  Data-driven models (DDM) based on machine learning and other AI techniqu...

Positive Trust Balance for Self-Driving Car Deployment (09/12/2020)
  The crucial decision about when self-driving cars are ready to deploy is...

Conservative Prediction via Data-Driven Confidence Minimization (06/08/2023)
  Errors of machine learning models are costly, especially in safety-criti...

Safety Shielding under Delayed Observation (07/05/2023)
  Agents operating in physical environments need to be able to handle dela...
