Improved Robustness to Open Set Inputs via Tempered Mixup

09/10/2020
by   Ryne Roady, et al.

Supervised classification methods often assume that evaluation data is drawn from the same distribution as the training data and that all classes are present for training. However, real-world classifiers must handle inputs that are far from the training distribution, including samples from unknown classes. Open set robustness refers to the ability to properly label samples from previously unseen categories as novel and to avoid high-confidence, incorrect predictions. Existing approaches have focused on novel inference methods, specialized training architectures, or supplementing the training data with additional background samples. Here, we propose a simple regularization technique, easily applied to existing convolutional neural network architectures, that improves open set robustness without a background dataset. Our method achieves state-of-the-art results on open set classification baselines and easily scales to large-scale open set classification problems.
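For readers unfamiliar with mixup-style regularization, the sketch below illustrates the general idea referenced in the title: training on convex combinations of pairs of inputs with correspondingly softened labels, so the network is discouraged from making high-confidence predictions on off-manifold inputs. This is a minimal PyTorch sketch of that idea only; the Beta(alpha, alpha) mixing, the `temper` parameter, and the blending of the mixed label toward a uniform distribution are illustrative assumptions, not the paper's exact formulation of tempered mixup.

```python
# Minimal sketch of mixup-style regularization with a softened ("tempered")
# label target. Illustrative only; hyperparameters and the exact tempering
# rule are assumptions, not the paper's specification.
import torch
import torch.nn.functional as F

def tempered_mixup_batch(x, y, num_classes, alpha=1.0, temper=0.5):
    """Mix pairs of training examples and soften the mixed label.

    x: (B, C, H, W) input batch; y: (B,) integer class labels.
    Returns the mixed inputs and soft label targets.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]

    y_onehot = F.one_hot(y, num_classes).float()
    y_mixed = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    # Blend the mixed label toward the uniform distribution so the model
    # assigns lower confidence to interpolated (off-manifold) inputs.
    uniform = torch.full_like(y_mixed, 1.0 / num_classes)
    y_soft = (1.0 - temper) * y_mixed + temper * uniform
    return x_mixed, y_soft

def soft_cross_entropy(logits, soft_targets):
    # Cross-entropy against soft label targets.
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

In a training loop, the soft targets simply replace the usual one-hot labels and the loss is computed with the soft cross-entropy above; no background dataset or architectural change is required.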

