Learning When to Say "I Don't Know"

09/11/2022
by Nicholas Kashani Motlagh, et al.

We propose a new Reject Option Classification technique to identify and remove regions of uncertainty in the decision space for a given neural classifier and dataset. Existing formulations employ a learned rejection (remove)/selection (keep) function and require either a known cost for rejecting examples or strong constraints on the accuracy or coverage of the selected examples. We consider an alternative formulation that instead analyzes the complementary reject region and employs a validation set to learn per-class softmax thresholds. The goal is to maximize the accuracy of the selected examples subject to a natural randomness allowance on the rejected examples (rejecting more incorrect than correct predictions). We provide results showing the benefits of the proposed method over naïvely thresholding calibrated/uncalibrated softmax scores on 2-D point, imagery, and text classification datasets using state-of-the-art pretrained models. Source code is available at https://github.com/osu-cvl/learning-idk.
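The core idea is to select or reject each prediction by comparing its softmax score against a threshold learned per class on a validation set. The sketch below illustrates that idea under stated assumptions: the function names, the threshold grid sweep, and the greedy check that rejection removes more incorrect than correct validation predictions are hypothetical stand-ins for the paper's exact optimization of the randomness allowance; the authors' actual implementation is in the repository linked above.

```python
import numpy as np

def learn_per_class_thresholds(val_probs, val_labels, grid=np.linspace(0.0, 1.0, 101)):
    """Sketch: learn one softmax threshold per predicted class on a validation set.

    For each class c, sweep candidate thresholds and keep the largest one for which
    the rejected validation examples (predicted as c with max softmax below the
    threshold) contain more incorrect than correct predictions, or for which nothing
    is rejected at all -- a simplified stand-in for the paper's randomness allowance.
    """
    preds = val_probs.argmax(axis=1)
    confs = val_probs.max(axis=1)
    num_classes = val_probs.shape[1]
    thresholds = np.zeros(num_classes)

    for c in range(num_classes):
        mask = preds == c
        if not mask.any():
            continue
        correct = (val_labels[mask] == c)
        conf_c = confs[mask]
        best = 0.0
        for t in grid:
            rejected = conf_c < t
            if rejected.sum() == 0:
                best = t  # nothing rejected, constraint trivially holds
                continue
            wrong_rejected = (~correct[rejected]).sum()
            right_rejected = correct[rejected].sum()
            if wrong_rejected > right_rejected:
                best = t  # reject region removes more errors than correct predictions
        thresholds[c] = best
    return thresholds

def predict_with_rejection(probs, thresholds):
    """Return the predicted class, or -1 ("I don't know") when the winning class's
    softmax score falls below that class's learned threshold."""
    preds = probs.argmax(axis=1)
    confs = probs.max(axis=1)
    reject = confs < thresholds[preds]
    return np.where(reject, -1, preds)
```

At test time, `predict_with_rejection` abstains (returns -1) whenever the top softmax score is below the learned threshold for the predicted class; the thresholds themselves are fit only on held-out validation data, keeping the base classifier unchanged.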

Related research

07/12/2020: Visualizing Classification Structure in Deep Neural Networks
We propose a measure to compute class similarity in large-scale classifi...

07/05/2020: Pretrained Generalized Autoregressive Model with Adaptive Probabilistic Label Clusters for Extreme Multi-label Text Classification
Extreme multi-label text classification (XMTC) is a task for tagging a g...

10/25/2022: Revisiting Softmax for Uncertainty Approximation in Text Classification
Uncertainty approximation in text classification is an important area wi...

08/29/2022: Reducing Certified Regression to Certified Classification
Adversarial training instances can severely distort a model's behavior. ...

05/26/2021: Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers
We propose Predict then Interpolate (PI), a simple algorithm for learnin...

04/08/2020: Pruning and Sparsemax Methods for Hierarchical Attention Networks
This paper introduces and evaluates two novel Hierarchical Attention Net...
