A framework for benchmarking class-out-of-distribution detection and its application to ImageNet

02/23/2023
by   Ido Galil, et al.

When deployed for risk-sensitive tasks, deep neural networks must be able to detect instances whose labels lie outside the distribution on which they were trained. In this paper we present a novel framework to benchmark the ability of image classifiers to detect class-out-of-distribution (C-OOD) instances (i.e., instances whose true labels do not appear in the training distribution) at various levels of detection difficulty. We apply this technique to ImageNet and benchmark 525 pretrained, publicly available ImageNet-1k classifiers. The code for generating a benchmark for any ImageNet-1k classifier, along with the benchmarks prepared for these 525 models, is available at https://github.com/mdabbah/COOD_benchmarking. The usefulness of the proposed framework and its advantage over existing benchmarks are demonstrated by analyzing the results obtained for these models, which reveal numerous novel observations, including: (1) knowledge distillation consistently improves C-OOD detection performance; (2) a subset of ViTs performs better C-OOD detection than any other model; (3) the language–vision CLIP model achieves good zero-shot detection performance, with its best instance outperforming 96 of the models evaluated; (4) accuracy and in-distribution ranking are positively correlated with C-OOD detection; and (5) we compare various confidence functions for C-OOD detection. Our companion paper, also published at ICLR 2023 (What Can We Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers), examines the uncertainty estimation performance (ranking, calibration, and selective prediction) of these classifiers in an in-distribution setting.
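To make the benchmarking setup concrete: a common confidence function for this kind of detection is the classifier's maximum softmax probability, and detection quality is typically summarized by the AUROC of separating in-distribution from C-OOD instances by that score. The sketch below is illustrative only and assumes toy logits; it does not reproduce the API of the linked COOD_benchmarking repository.

```python
import math

def softmax_confidence(logits):
    """Max-softmax score: higher means the classifier is more confident."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return max(exps) / sum(exps)

def auroc(id_scores, ood_scores):
    """AUROC of separating in-distribution (positive) from C-OOD (negative)
    instances by confidence; computed as the Mann-Whitney U statistic."""
    wins = 0.0
    for s_id in id_scores:
        for s_ood in ood_scores:
            if s_id > s_ood:
                wins += 1.0
            elif s_id == s_ood:
                wins += 0.5
    return wins / (len(id_scores) * len(ood_scores))

# Toy logits: in-distribution predictions tend to be more peaked,
# so their max-softmax confidence is higher than on C-OOD inputs.
id_logits = [[4.0, 0.5, 0.1], [3.5, 1.0, 0.2]]
ood_logits = [[1.2, 1.0, 0.9], [0.8, 0.7, 0.6]]

id_scores = [softmax_confidence(z) for z in id_logits]
ood_scores = [softmax_confidence(z) for z in ood_logits]
print(auroc(id_scores, ood_scores))  # 1.0: perfect separation on this toy data
```

An AUROC of 0.5 corresponds to a confidence function that cannot distinguish C-OOD from in-distribution inputs at all, while 1.0 means perfect separation; varying the semantic similarity of the held-out classes to the training classes is what produces the "levels of detection difficulty" described above.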

