Prediction Confidence from Neighbors

03/31/2020
by Mark Philip Philipsen, et al.

The inability of Machine Learning (ML) models to extrapolate correct predictions to out-of-distribution (OoD) samples is a major hindrance to the application of ML in critical domains. Until the generalization ability of ML methods improves, it is necessary to keep humans in the loop. The need for human supervision can only be reduced if it is possible to determine a level of confidence in predictions, which can be used either to ask for human assistance or to abstain from making predictions. We show that feature space distance is a meaningful measure that can provide confidence in predictions. The distance between an unseen sample and nearby training samples correlates with the prediction error on that sample. Depending on the acceptable degree of error, predictions can therefore be trusted or rejected based on their distance to the training samples. The same distance measure can be used to decide whether a sample is worth adding to the training set. This enables earlier and safer deployment of models in critical applications and is vital for deploying models under ever-changing conditions.
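As a rough illustration of the idea, the Python sketch below treats the mean feature-space distance to the k nearest training samples as a confidence score and abstains when it exceeds a threshold. The helper names (fit_neighbor_index, neighbor_distance, predict_or_abstain) and the threshold tau are assumptions made for illustration, not the authors' implementation; the paper's actual feature extractor, distance metric, and calibration may differ.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_neighbor_index(train_features: np.ndarray, k: int = 5) -> NearestNeighbors:
    """Index the training set's feature vectors for nearest-neighbor lookups."""
    return NearestNeighbors(n_neighbors=k).fit(train_features)

def neighbor_distance(index: NearestNeighbors, queries: np.ndarray) -> np.ndarray:
    """Mean feature-space distance from each query to its k nearest training samples."""
    distances, _ = index.kneighbors(queries)
    return distances.mean(axis=1)

def predict_or_abstain(model, index: NearestNeighbors, queries: np.ndarray, tau: float):
    """Predict with any sklearn-style model, but abstain (return None) for
    queries farther than tau from the training distribution in feature space.
    tau is a hypothetical threshold to be calibrated on held-out data."""
    predictions = model.predict(queries)
    distances = neighbor_distance(index, queries)
    return [p if d <= tau else None for p, d in zip(predictions, distances)]
```

In practice, tau would be calibrated on a validation set, for instance as the largest neighbor distance at which the observed prediction error stays below the acceptable level; rejected queries can then be deferred to a human and, following the abstract, flagged as candidates for labeling and addition to the training set.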


