Dependable Neural Networks for Safety Critical Tasks

12/20/2019
by Molly O'Brien, et al.

Neural Networks are being integrated into safety critical systems, e.g., perception systems for autonomous vehicles, which require trained networks to perform safely in novel scenarios. Verifying neural networks is challenging because their decisions are not explainable, they cannot be exhaustively tested, and finite test samples cannot capture the variation across all operating conditions. Existing work seeks to train models robust to new scenarios via domain adaptation, style transfer, or few-shot learning, but these techniques do not predict how a trained model will perform when the operating conditions differ from the testing conditions. We propose a metric, Machine Learning (ML) Dependability, that measures a network's probability of success in specified operating conditions, which need not match the testing conditions. In addition, we propose the metrics Task Undependability and Harmful Undependability to distinguish network failures by their consequences. We evaluate the performance of a Neural Network agent trained using Reinforcement Learning in a simulated robot manipulation task. Our results demonstrate that we can accurately predict the ML Dependability, Task Undependability, and Harmful Undependability for operating conditions that differ significantly from the testing conditions. Finally, we design a Safety Function, using the harmful failures identified during testing, that reduces harmful failures by a factor of 700 in one example while maintaining a high probability of success.
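To make the metrics concrete, below is a minimal sketch in Python of how they could be estimated from labeled evaluation trials. The outcome labels, the discretization of operating conditions, and the importance-reweighting used to move from testing conditions to different operating conditions are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

# Outcome labels for each evaluation trial (illustrative assumption).
SUCCESS, TASK_FAILURE, HARMFUL_FAILURE = "success", "task_failure", "harmful_failure"

def dependability_metrics(trials):
    """Empirical ML Dependability, Task Undependability, and Harmful
    Undependability from trials: a list of (condition, outcome) pairs.
    The three probabilities sum to 1."""
    counts = Counter(outcome for _, outcome in trials)
    n = len(trials)
    return counts[SUCCESS] / n, counts[TASK_FAILURE] / n, counts[HARMFUL_FAILURE] / n

def predicted_metrics(trials, operating_weights):
    """Predict the same metrics for operating conditions that differ from the
    testing conditions by reweighting each test trial by how often its
    (discretized) condition occurs in the target operating domain.
    operating_weights: dict mapping condition -> probability under operation."""
    n = len(trials)
    test_freq = Counter(cond for cond, _ in trials)
    totals, total_weight = Counter(), 0.0
    for cond, outcome in trials:
        # Importance weight: operating frequency / testing frequency.
        w = operating_weights.get(cond, 0.0) * n / test_freq[cond]
        totals[outcome] += w
        total_weight += w
    return (totals[SUCCESS] / total_weight,
            totals[TASK_FAILURE] / total_weight,
            totals[HARMFUL_FAILURE] / total_weight)

# Example: testing rarely sampled the "fast" condition, but the operating
# domain is dominated by it, so predicted dependability drops.
trials = ([("slow", SUCCESS)] * 90
          + [("fast", SUCCESS)] * 6
          + [("fast", HARMFUL_FAILURE)] * 4)
print(dependability_metrics(trials))                          # (0.96, 0.0, 0.04)
print(predicted_metrics(trials, {"slow": 0.2, "fast": 0.8}))  # (0.68, 0.0, 0.32)
```

The Safety Function described in the abstract can likewise be pictured as a runtime guard that overrides the learned policy in states associated with harmful failures observed during testing; the predicate and fallback action below are hypothetical placeholders, not the authors' design.

```python
def safe_action(state, policy_action, in_harmful_region, fallback_action):
    # Override the learned policy when the state enters a region where
    # harmful failures were observed during testing (hypothetical predicate).
    return fallback_action if in_harmful_region(state) else policy_action
```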


Related research

- Network Generalization Prediction for Safety Critical Tasks in Novel Operating Domains (08/17/2021)
- Embedded out-of-distribution detection on an autonomous robot platform (06/30/2021)
- Generating and Explaining Corner Cases Using Learnt Probabilistic Lane Graphs (08/25/2023)
- Discovering Avoidable Planner Failures of Autonomous Vehicles using Counterfactual Analysis in Behaviorally Diverse Simulation (11/24/2020)
- Formal Analysis and Redesign of a Neural Network-Based Aircraft Taxiing System with VerifAI (05/14/2020)
- Formal description of ML models for unambiguous implementation (07/24/2023)
- Towards Dependability Metrics for Neural Networks (06/06/2018)
