Towards Safe Machine Learning for CPS: Infer Uncertainty from Training Data

09/11/2019
by Xiaozhe Gu, et al.

Machine learning (ML) techniques are increasingly applied to decision-making and control problems in Cyber-Physical Systems, many of which are safety-critical, e.g., chemical plants, robotics, and autonomous vehicles. Despite the significant benefits ML techniques bring, they also raise additional safety issues because 1) the most expressive and powerful ML models are not transparent and behave as black boxes, and 2) the training data, which plays a crucial role in ML safety, is usually incomplete. An important technique to achieve safety for ML models is "Safe Fail": the model selects a reject option and applies a backup solution, for example a traditional controller or a human operator, when it has low confidence in a prediction. Data-driven models produced by ML algorithms learn from training data, and hence they are only as good as the examples they have learned from. As pointed out in [17], ML models work well in the "training space" (i.e., the feature space with sufficient training data), but they cannot extrapolate beyond it. As observed in many previous studies, a region of the feature space that lacks training data generally has a much higher error rate than one that contains sufficient training samples [31]. Therefore, it is essential to identify the training space and avoid extrapolating beyond it. In this paper, we propose an efficient Feature Space Partitioning Tree (FSPT) to address this problem. Through experiments, we also show that a strong relationship exists between model performance and the FSPT score.
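The FSPT construction itself is detailed in the paper; purely as an illustration of the "Safe Fail" reject option described above, the sketch below wraps an arbitrary classifier so that inputs falling outside the well-covered training space are rejected and deferred to a backup solution. The SafeFailClassifier class, the average k-nearest-neighbor distance used as a coverage score, and the rejection threshold are assumptions introduced for this example; they are a stand-in for the FSPT score, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors


class SafeFailClassifier:
    """Wrap a base model with a reject option: queries that lie far from the
    training data (a proxy for 'outside the training space') are rejected and
    deferred to a backup solution instead of being predicted."""

    def __init__(self, base_model, n_neighbors=5, reject_quantile=0.95):
        self.base_model = base_model
        self.n_neighbors = n_neighbors
        self.reject_quantile = reject_quantile

    def fit(self, X, y):
        self.base_model.fit(X, y)
        # Index the training data so that query points can be scored by how
        # well they are covered by training examples.
        self._nn = NearestNeighbors(n_neighbors=self.n_neighbors).fit(X)
        train_dist, _ = self._nn.kneighbors(X)
        # Distances observed on the training set define what "sufficient
        # coverage" looks like; anything beyond this quantile is rejected.
        self._threshold = np.quantile(train_dist.mean(axis=1),
                                      self.reject_quantile)
        return self

    def predict(self, X):
        dist, _ = self._nn.kneighbors(X)
        coverage = dist.mean(axis=1)
        preds = self.base_model.predict(X).astype(object)
        # Low-coverage inputs are handed to the backup solution ("Safe Fail").
        preds[coverage > self._threshold] = "REJECT"
        return preds


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 2))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

    clf = SafeFailClassifier(RandomForestClassifier(random_state=0))
    clf.fit(X_train, y_train)

    X_test = np.array([[0.1, 0.2],     # inside the training space
                       [8.0, -7.5]])   # far outside -> should be rejected
    print(clf.predict(X_test))
```

In a CPS deployment, the "REJECT" branch would hand control to the backup solution (a traditional controller or a human operator) rather than returning a label.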


