Large-Scale Open-Set Classification Protocols for ImageNet

Open-Set Classification (OSC) aims to adapt closed-set classification models to real-world scenarios, where the classifier must correctly label samples of known classes while rejecting samples of previously unseen unknown classes. Only recently has research begun to investigate algorithms that can handle such unknown samples correctly. Some of these approaches address OSC by including negative samples in the training set, which the classifier learns to reject, in the expectation that these data increase its robustness to unknown classes. Most of these approaches are evaluated on small-scale, low-resolution image datasets such as MNIST, SVHN, or CIFAR, which makes it difficult to assess their applicability to the real world and to compare them with one another. We propose three open-set protocols that provide rich datasets of natural images with different levels of similarity between known and unknown classes. The protocols consist of subsets of ImageNet classes selected to provide training and testing data closer to real-world scenarios. Additionally, we propose a new validation metric that can be employed to assess whether the training of deep learning models addresses both the classification of known samples and the rejection of unknown samples. We use the protocols to compare two baseline open-set algorithms with the standard SoftMax baseline and find that the algorithms work well on negative samples that have been seen during training, and partially on out-of-distribution detection tasks, but that their performance drops in the presence of samples from previously unseen unknown classes.
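
To make the rejection setting concrete, below is a minimal PyTorch sketch of the thresholded SoftMax baseline together with one operating point of an open-set evaluation (correct classification rate on knowns versus false positive rate on unknowns). This is an illustration under our own assumptions, not the paper's implementation: the threshold tau, the convention that unknown samples carry label -1, and the helper names are all hypothetical.

    import torch

    def predict_with_rejection(logits, tau=0.5):
        # Thresholded SoftMax baseline: accept the arg-max class only if its
        # probability exceeds tau; otherwise reject the sample as unknown.
        # tau is an illustrative value, not one taken from the paper.
        probs = torch.softmax(logits, dim=1)
        conf, pred = probs.max(dim=1)
        pred[conf < tau] = -1  # -1 marks rejection
        return pred

    def ccr_and_fpr(logits, labels, tau=0.5):
        # One operating point of an open-set evaluation curve:
        #   CCR = fraction of known samples (label >= 0) that are accepted
        #         and correctly classified,
        #   FPR = fraction of unknown samples (label == -1) that are accepted.
        pred = predict_with_rejection(logits, tau)
        known = labels >= 0
        ccr = ((pred == labels) & known).float().sum() / known.sum().clamp(min=1)
        fpr = (pred[~known] >= 0).float().sum() / (~known).sum().clamp(min=1)
        return ccr.item(), fpr.item()

Sweeping tau traces out the full trade-off curve; the -1 sentinel for rejected and unknown samples is just one possible encoding.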


