
Justification-Based Reliability in Machine Learning

by Nurali Virani, et al.
General Electric

With the advent of deep learning, the field of machine learning (ML) has surpassed human-level performance on diverse classification tasks. At the same time, there is a stark need to characterize and quantify the reliability of a model's prediction on individual samples. This is especially true when applying such models in safety-critical domains such as industrial control and healthcare. To address this need, we link the reliability of a model's individual prediction to the epistemic uncertainty of that prediction. More specifically, we extend the theory of Justified True Belief (JTB) in epistemology, created to study the validity and limits of human-acquired knowledge, to characterize the validity and limits of knowledge in supervised classifiers. We present an analysis of neural network classifiers that links the reliability of a prediction on an input to characteristics of the support gathered from the input and latent spaces of the network. We hypothesize that the JTB analysis exposes the epistemic uncertainty (or ignorance) of a model with respect to its inference, thereby allowing the inference to be only as strong as the justification permits. We explore various forms of support (e.g., k-nearest neighbors (k-NN) and l_p-norm based) generated for an input, using the training data to construct a justification for the prediction on that input. Through experiments on simulated and real datasets, we demonstrate that our approach can provide reliability for individual predictions and characterize regions where such reliability cannot be ascertained.
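To make the k-NN form of support mentioned above concrete, here is a minimal sketch of how a justification check might look: a prediction is treated as supported when enough of the input's nearest training samples share the predicted label. The function name, the agreement threshold, and the use of Euclidean distance in the input space are all illustrative assumptions, not details taken from the paper (which also gathers support in latent spaces).

```python
import numpy as np

def knn_support_justification(x, train_X, train_y, predicted_label,
                              k=5, agreement=0.8):
    """Hypothetical sketch of k-NN-based support for a prediction.

    The prediction is treated as 'justified' when at least `agreement`
    fraction of the k nearest training samples share the predicted label.
    All names and thresholds here are illustrative assumptions.
    """
    # Euclidean distances from the query x to every training sample
    dists = np.linalg.norm(train_X - x, axis=1)
    # Indices of the k closest training samples
    nearest = np.argsort(dists)[:k]
    # Fraction of those neighbors agreeing with the model's prediction
    support = np.mean(train_y[nearest] == predicted_label)
    return support >= agreement, support
```

On a query deep inside a well-sampled class region, all k neighbors agree and the support fraction is 1.0; on a query far from the training data or near a class boundary, the fraction drops and the prediction would be flagged as lacking justification, mirroring the abstract's "regions where reliability cannot be ascertained".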
