
Norm-Scaling for Out-of-Distribution Detection

by Deepak Ravikumar et al., Purdue University

Out-of-Distribution (OoD) inputs are examples that do not belong to the true underlying distribution of the dataset. Research has shown that deep neural nets make confident mispredictions on OoD inputs; identifying OoD inputs is therefore critical for the safe and reliable deployment of deep neural nets. OoD inputs are often detected by applying a threshold to a similarity score. One such score is angular similarity, the dot product of an input's latent representation with the mean representation of a class. Angular similarity encodes uncertainty: a lower angular similarity means it is less certain that the input belongs to that class. However, we observe that different classes have different distributions of angular similarity. Applying a single threshold for all classes is therefore not ideal, since the same similarity score represents different uncertainties for different classes. In this paper, we propose norm-scaling, which normalizes the logits separately for each class. This ensures that a single value consistently represents similar uncertainty across classes. We show that norm-scaling, when used with the maximum softmax probability detector, achieves a 9.78 improvement in AUPR and a 33.19 improvement over previous methods.
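The idea described above can be sketched in a few lines: divide each class's logit by a per-class normalizer, then threshold the maximum softmax probability. This is a minimal illustration under assumptions of my own, not the authors' implementation; in particular, `class_norms` stands in for whatever per-class statistic (e.g. the mean logit norm on training data) is used to normalize each class.

```python
import numpy as np

def norm_scale_logits(logits, class_norms):
    """Scale each logit by a per-class normalizer so a single
    threshold carries comparable uncertainty across classes.

    logits:      array of shape (n_inputs, n_classes)
    class_norms: array of shape (n_classes,), e.g. estimated on
                 training data (an assumption for this sketch)
    """
    return logits / class_norms

def msp_score(logits):
    """Maximum softmax probability: higher means more in-distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def detect_ood(logits, class_norms, threshold):
    """Flag inputs whose norm-scaled MSP falls below the threshold as OoD."""
    return msp_score(norm_scale_logits(logits, class_norms)) < threshold
```

Because the normalization is applied before the softmax, a single `threshold` value can be used for every class, which is the point of norm-scaling: without it, the same score would encode different uncertainties depending on the class's angular-similarity distribution.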


