Conservative Prediction via Data-Driven Confidence Minimization

06/08/2023
by Caroline Choi, et al.

Errors of machine learning models are costly, especially in safety-critical domains such as healthcare, where such mistakes can prevent the deployment of machine learning altogether. In these settings, conservative models – models which can defer to human judgment when they are likely to make an error – may offer a solution. However, detecting unusual or difficult examples is notably challenging, as it is impossible to anticipate all potential inputs at test time. To address this issue, prior work has proposed to minimize the model's confidence on an auxiliary pseudo-OOD dataset. We theoretically analyze the effect of confidence minimization and show that the choice of auxiliary dataset is critical. Specifically, if the auxiliary dataset includes samples from the OOD region of interest, confidence minimization provably separates ID and OOD inputs by predictive confidence. Taking inspiration from this result, we present data-driven confidence minimization (DCM), which minimizes confidence on an uncertainty dataset containing examples that the model is likely to misclassify at test time. Our experiments show that DCM consistently outperforms state-of-the-art OOD detection methods on 8 ID-OOD dataset pairs, reducing FPR (at 95% TPR), and outperforms existing selective classification approaches on 4 datasets in conditions of distribution shift.
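Based only on the abstract's description, a minimal sketch of such an objective might combine a standard supervised loss on ID data with a term that pushes predictions on the uncertainty dataset toward the uniform (lowest-confidence) distribution. The names dcm_style_loss and uncertainty_inputs and the weight lam below are hypothetical, not taken from the paper.

```python
# Rough sketch of a confidence-minimization objective: cross-entropy on
# labeled in-distribution (ID) data plus a term that penalizes confident
# predictions on an auxiliary "uncertainty" dataset. The function name,
# the uniform-distribution target, and `lam` are illustrative assumptions.
import torch.nn.functional as F


def dcm_style_loss(model, id_inputs, id_labels, uncertainty_inputs, lam=1.0):
    # Supervised loss on in-distribution examples.
    ce_loss = F.cross_entropy(model(id_inputs), id_labels)

    # Confidence-minimization term: cross-entropy between the model's
    # predictive distribution on uncertainty-set inputs and the uniform
    # distribution. It is smallest when the model is maximally uncertain.
    log_probs = F.log_softmax(model(uncertainty_inputs), dim=-1)
    confidence_term = -log_probs.mean()

    return ce_loss + lam * confidence_term
```

At test time, the model's predictive confidence (e.g., the maximum softmax probability) can then be thresholded to decide when to flag an input as OOD or defer to a human, which is the ID-OOD separation the theoretical analysis above refers to.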

Related research:

12/11/2020 | Confidence Estimation via Auxiliary Models
Reliably quantifying the confidence of deep neural classifiers is a chal...

12/20/2022 | Calibrating Deep Neural Networks using Explicit Regularisation and Dynamic Data Pruning
Deep neural networks (DNN) are prone to miscalibrated predictions, often...

02/15/2023 | Uncertainty-Estimation with Normalized Logits for Out-of-Distribution Detection
Out-of-distribution (OOD) detection is critical for preventing deep lear...

07/28/2022 | A Novel Data Augmentation Technique for Out-of-Distribution Sample Detection using Compounded Corruptions
Modern deep neural network models are known to erroneously classify out-...

03/05/2021 | Limits of Probabilistic Safety Guarantees when Considering Human Uncertainty
When autonomous robots interact with humans, such as during autonomous d...

02/07/2022 | Training OOD Detectors in their Natural Habitats
Out-of-distribution (OOD) detection is important for machine learning mo...

11/25/2022 | TrustGAN: Training safe and trustworthy deep learning models through generative adversarial networks
Deep learning models have been developed for a variety of tasks and are ...
