Understanding out-of-distribution accuracies through quantifying difficulty of test samples

03/28/2022
by Berfin Simsek, et al.

Existing works show that although modern neural networks achieve remarkable generalization performance on in-distribution (ID) data, their accuracy drops significantly on out-of-distribution (OOD) datasets. To understand why a variety of models consistently make more mistakes on OOD datasets, we propose a new metric that quantifies the difficulty of a test image (either ID or OOD) and depends on the interaction between the training dataset and the model. In particular, we introduce the confusion score, a label-free measure of image difficulty that quantifies the disagreement on a given test image among the class-conditional probabilities estimated by an ensemble of trained models. Using the confusion score, we investigate CIFAR-10 and its OOD derivatives. Next, by partitioning the test and OOD datasets via their confusion scores, we predict the relationship between ID and OOD accuracies for various architectures. This yields an estimator of the OOD accuracy of a given model that uses only ID test labels. Our observations indicate that the largest contribution to the accuracy drop comes from images with high confusion scores. Upon further inspection, we characterize the misclassified images grouped by their confusion scores: (i) images with high confusion scores contain weak spurious correlations that appear in multiple classes of the training data and lack clear class-specific features, and (ii) images with low confusion scores exhibit spurious correlations that belong to another class, namely class-specific spurious correlations.
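
The abstract does not spell out the exact formulas for the confusion score or for the OOD-accuracy estimator, so the sketch below is one plausible reading, not the paper's definitions. It assumes each ensemble member outputs softmax class probabilities; the function names (confusion_score, estimate_ood_accuracy), the pairwise total-variation choice of disagreement, and the quantile binning are all illustrative assumptions.

```python
import numpy as np

def confusion_score(probs):
    """Label-free difficulty score for one test image (illustrative).

    probs: (n_models, n_classes) array; row i is model i's softmax
    (class-conditional probability) output for the image.
    Higher score = more ensemble disagreement = harder image.
    """
    probs = np.asarray(probs)
    # Assumption: disagreement measured as the mean pairwise
    # total-variation distance between predictive distributions.
    n = probs.shape[0]
    dists = [0.5 * np.abs(probs[i] - probs[j]).sum()
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def estimate_ood_accuracy(id_scores, id_correct, ood_scores, n_bins=10):
    """Hypothetical OOD-accuracy estimator using only ID test labels.

    id_scores:  confusion scores of the ID test images
    id_correct: boolean array, whether the model classified each ID image correctly
    ood_scores: confusion scores of the OOD images (no OOD labels needed)
    """
    id_scores = np.asarray(id_scores)
    id_correct = np.asarray(id_correct, dtype=float)
    ood_scores = np.asarray(ood_scores)
    # Partition the score range into quantile bins of the ID scores.
    inner_edges = np.quantile(id_scores, np.linspace(0, 1, n_bins + 1))[1:-1]
    id_bins = np.digitize(id_scores, inner_edges)
    ood_bins = np.digitize(ood_scores, inner_edges)
    # Per-bin ID accuracy, falling back to overall accuracy for empty bins.
    bin_acc = np.array([
        id_correct[id_bins == b].mean() if np.any(id_bins == b)
        else id_correct.mean()
        for b in range(n_bins)
    ])
    # Reweight by where the OOD confusion scores fall.
    weights = np.bincount(ood_bins, minlength=n_bins) / len(ood_scores)
    return float(np.sum(bin_acc * weights))
```

In this reading, the estimator predicts a lower OOD accuracy exactly when the OOD set puts more of its mass in high-confusion bins, consistent with the abstract's observation that images with high confusion scores account for most of the accuracy drop.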

Related research

- 06/19/2022: Supervision Adaptation Balances In-Distribution Generalization and Out-of-Distribution Detection
- 11/17/2021: Understanding and Testing Generalization of Deep Networks on Out-of-Distribution Data
- 07/18/2022: Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift
- 02/02/2023: Effective Robustness against Natural Distribution Shifts for Models with Different Training Data
- 04/10/2023: Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models
- 07/17/2022: A Simple Test-Time Method for Out-of-Distribution Detection
- 07/07/2022: A Study on the Predictability of Sample Learning Consistency
