Justification-Based Reliability in Machine Learning

11/18/2019
by Nurali Virani, et al.

With the advent of Deep Learning, the field of machine learning (ML) has surpassed human-level performance on diverse classification tasks. At the same time, there is a pressing need to characterize and quantify the reliability of a model's prediction on individual samples, especially when such models are applied in safety-critical domains such as industrial control and healthcare. To address this need, we link the reliability of a model's individual prediction to the model's epistemic uncertainty about that prediction. More specifically, we extend the theory of Justified True Belief (JTB) from epistemology, developed to study the validity and limits of human-acquired knowledge, toward characterizing the validity and limits of knowledge in supervised classifiers. We present an analysis of neural network classifiers that links the reliability of a prediction on an input to characteristics of the support gathered for that input from the input and latent spaces of the network. We hypothesize that this JTB analysis exposes the epistemic uncertainty (or ignorance) of a model with respect to its inference, thereby allowing the inference to be only as strong as its justification permits. We explore various forms of support (e.g., k-nearest neighbors (k-NN) and l_p-norm-based) generated for an input from the training data to construct a justification for the prediction on that input. Through experiments on simulated and real datasets, we demonstrate that our approach can provide reliability for individual predictions and characterize regions where such reliability cannot be ascertained.
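As a concrete illustration of the k-NN form of support, the minimal sketch below gathers the k nearest training samples in a network's latent space and treats unanimous label agreement with the model's prediction as a simple justification. The helper names `embed` and `classifier`, the choice k=10, and the unanimity rule are illustrative assumptions for this sketch, not the paper's exact construction.

```python
# Minimal sketch of k-NN-based support for a prediction, using a
# scikit-learn-style setup. `embed` (the map into the network's latent
# space), `classifier`, k=10, and the unanimity rule are assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_justification(embed, classifier, X_train, y_train, x, k=10):
    """Gather k-NN support for the prediction on x in latent space.

    Returns the predicted label and whether the support justifies it
    (here: all k latent-space neighbors share the predicted label).
    """
    Z_train = embed(X_train)            # latent representations of training data
    z = embed(x[None, :])               # latent representation of the query
    nn = NearestNeighbors(n_neighbors=k).fit(Z_train)
    _, idx = nn.kneighbors(z)           # indices of the k nearest neighbors
    support_labels = y_train[idx[0]]

    y_pred = classifier(x[None, :])[0]  # the model's prediction on x
    justified = bool(np.all(support_labels == y_pred))
    return y_pred, justified
```

Under this reading, a prediction whose latent-space support disagrees with it would be flagged as epistemically uncertain, i.e., a case where reliability cannot be ascertained.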

