A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification

11/28/2022
by   Paul F. Jaeger, et al.

Reliable application of machine learning-based decision systems in the wild is one of the major challenges currently investigated by the field. A large portion of established approaches aims to detect erroneous predictions by assigning confidence scores. Such confidence may be obtained by quantifying the model's predictive uncertainty, by learning explicit scoring functions, or by assessing whether the input is in line with the training distribution. Curiously, while these approaches all claim to address the same eventual goal of detecting failures of a classifier upon real-world application, they currently constitute largely separate research fields with individual evaluation protocols that either exclude a substantial part of relevant methods or ignore large parts of relevant failure sources. In this work, we systematically reveal the pitfalls caused by these inconsistencies and derive requirements for a holistic and realistic evaluation of failure detection. To demonstrate the relevance of this unified perspective, we present a large-scale empirical study that, for the first time, enables benchmarking of confidence scoring functions with respect to all relevant methods and failure sources. The finding that a simple softmax response baseline is the overall best-performing method underlines the drastic shortcomings of current evaluation practice amid the abundance of published research on confidence scoring. Code and trained models are available at https://github.com/IML-DKFZ/fd-shifts.
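
For context, the softmax response baseline referenced above simply uses the classifier's highest predicted class probability as its confidence score, and predictions below a chosen threshold are flagged as potential failures. Below is a minimal sketch of that idea, assuming a PyTorch classifier; the helper name, the dummy inputs, and the 0.5 threshold are illustrative assumptions, not the paper's own implementation, which lives in the fd-shifts repository linked above.

```python
import torch
import torch.nn.functional as F

def softmax_response_confidence(logits: torch.Tensor) -> torch.Tensor:
    """Maximum softmax response: confidence = highest predicted class probability.

    Hypothetical helper for illustration only; see the fd-shifts repository
    for the evaluation code used in the paper.
    """
    probs = F.softmax(logits, dim=-1)
    return probs.max(dim=-1).values

# Usage sketch with dummy logits: flag low-confidence predictions as likely failures.
logits = torch.randn(8, 10)              # batch of 8 samples, 10 classes (dummy data)
confidence = softmax_response_confidence(logits)
likely_failure = confidence < 0.5        # illustrative threshold, tuned on validation data in practice
```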


