Locally Valid and Discriminative Confidence Intervals for Deep Learning Models

06/01/2021
by   Zhen Lin, et al.

Efficient and theoretically sound uncertainty quantification is crucial for building trust in deep learning models for critical real-world applications, yet it remains challenging. Useful uncertainty information is expected to have two key properties: it should be valid (guaranteeing coverage) and discriminative (more uncertain when the expected risk is high). Moreover, when combined with deep learning (DL) methods, it should be scalable and affect DL model performance minimally. Most existing Bayesian methods lack frequentist coverage guarantees and usually degrade model performance. The few available frequentist methods are rarely discriminative and/or violate coverage guarantees due to unrealistic assumptions. Moreover, many methods are expensive or require substantial modifications to the base neural network. Building upon recent advances in conformal prediction and leveraging the classical idea of kernel regression, we propose Locally Valid and Discriminative confidence intervals (LVD), a simple, efficient and lightweight method to construct discriminative confidence intervals (CIs) for almost any DL model. With no assumptions on the data distribution, such CIs also offer finite-sample local coverage guarantees (in contrast to the weaker marginal coverage). Using a diverse set of datasets, we empirically verify that, besides being the only locally valid method, LVD also exceeds or matches the performance (including coverage rate and prediction accuracy) of existing uncertainty quantification methods, while offering additional benefits in scalability and flexibility.
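The abstract builds on split conformal prediction, which yields the simpler marginal coverage that LVD strengthens to local coverage. The sketch below is not the authors' LVD method (which additionally uses kernel regression for locality); it only illustrates, under our own assumptions about the setup, the standard split-conformal baseline: calibrate on held-out absolute residuals, then form intervals from a finite-sample-corrected quantile.

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
    """Marginal split-conformal intervals (the baseline LVD improves upon).

    residuals_cal: |y - y_hat| on a held-out calibration set
    y_pred_test:   point predictions for test inputs
    alpha:         miscoverage level (0.1 -> 90% marginal coverage)
    """
    n = len(residuals_cal)
    # Finite-sample correction: take the ceil((n+1)(1-alpha))/n quantile
    # of the calibration residuals (capped at 1 for small n).
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals_cal, q_level, method="higher")
    # Symmetric interval of half-width q around each prediction.
    return y_pred_test - q, y_pred_test + q
```

The interval width here is a single constant for all test points, which is exactly why plain split conformal is valid but not discriminative; making the width adapt to the input (e.g. via kernel weighting, as LVD does) is what the paper contributes.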


