Out-of-Distribution Example Detection in Deep Neural Networks using Distance to Modelled Embedding

08/24/2021
by   Rickard Sjögren, et al.

Adoption of deep learning in safety-critical systems raises the need to understand what deep neural networks do not understand after models have been deployed. The behaviour of deep neural networks is undefined for so-called out-of-distribution examples, that is, examples drawn from a distribution other than that of the training set. Several methodologies have been proposed to detect out-of-distribution examples at prediction time, but these methods either constrain the neural network architecture or training procedure, suffer from performance overhead, or assume that the nature of out-of-distribution examples is known a priori. We present Distance to Modelled Embedding (DIME), which we use to detect out-of-distribution examples at prediction time. By approximating the training set's embedding in feature space as a linear hyperplane, we derive a simple, unsupervised, highly performant and computationally efficient method. DIME allows us to add prediction-time detection of out-of-distribution examples to neural network models without altering architecture or training, while imposing minimal constraints on when it is applicable. In our experiments, we demonstrate that by using DIME as an add-on after training, we efficiently detect out-of-distribution examples during prediction and match state-of-the-art methods while being more versatile and introducing negligible computational overhead.
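The core idea of fitting a linear hyperplane to the training embeddings and scoring new examples by their distance to it can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes the hyperplane is obtained PCA-style via SVD of the centred embeddings, and the function names (`fit_dime`, `dime_score`) and the `n_components` parameter are hypothetical.

```python
import numpy as np

def fit_dime(train_embeddings, n_components):
    # Centre the training embeddings and fit a linear basis via SVD,
    # approximating the training set's embedding as a hyperplane
    # spanned by the top principal directions.
    mean = train_embeddings.mean(axis=0)
    centered = train_embeddings - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]  # (n_components, embedding_dim)
    return mean, basis

def dime_score(embeddings, mean, basis):
    # Distance to the modelled embedding: the norm of the residual
    # left after projecting each embedding onto the fitted hyperplane.
    # Large residuals indicate examples far from the training manifold.
    centered = embeddings - mean
    projected = centered @ basis.T @ basis
    residual = centered - projected
    return np.linalg.norm(residual, axis=1)
```

In a deployed model, the embeddings would come from an intermediate layer of the trained network; thresholding `dime_score` on held-out in-distribution data then flags out-of-distribution inputs at prediction time without retraining the model.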

