Enhancing the Robustness of Prior Network in Out-of-Distribution Detection

11/18/2018
by Wenhu Chen, et al.

With the recent surge of interest in deep neural networks, more real-world applications have started to adopt them in practice. However, deep neural networks are known to have limited control over their predictions on unseen images. Such weakness can potentially threaten society and cause undesirable consequences in real-world scenarios. To resolve this issue, a popular task called out-of-distribution detection was proposed, which aims to separate out-of-distribution images from in-distribution images. In this paper, we propose a perturbed prior network architecture, which can efficiently separate model-level uncertainty from data-level uncertainty via the prior entropy. To further enhance the robustness of the proposed entropy-based uncertainty measure, we propose a concentration perturbation algorithm, which adaptively adds noise to the concentration parameters so that in- and out-of-distribution images become better separable. Our method relies directly on a pre-trained deep neural network without re-training it, and requires no knowledge of the network architecture or of out-of-distribution examples. This simplicity makes our method well suited to real-world AI applications. Through comprehensive experiments, our method demonstrates its superiority by achieving state-of-the-art results on many datasets.
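To illustrate the idea, the short sketch below (not the authors' released code) scores an input by the differential entropy of the Dirichlet distribution defined by a prior network's concentration parameters, and perturbs those parameters with noise before scoring. The function names, the noise scale sigma, and the use of plain Gaussian noise in place of the paper's adaptive concentration perturbation are all illustrative assumptions.

# Minimal sketch: Dirichlet-entropy OOD scoring with a simple noise
# perturbation of the concentration parameters (assumed, not the paper's code).
import numpy as np
from scipy.special import gammaln, digamma


def dirichlet_entropy(alpha):
    """Differential entropy of Dir(alpha); higher entropy ~ more uncertain."""
    alpha = np.asarray(alpha, dtype=float)
    alpha0 = alpha.sum()
    k = alpha.size
    log_beta = gammaln(alpha).sum() - gammaln(alpha0)
    return log_beta + (alpha0 - k) * digamma(alpha0) - ((alpha - 1.0) * digamma(alpha)).sum()


def ood_score(alpha, sigma=0.1, n_samples=8, rng=None):
    """Average Dirichlet entropy over noise-perturbed concentration parameters.

    Gaussian noise with scale `sigma` is a simplified stand-in for the
    paper's adaptive concentration perturbation.
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha = np.asarray(alpha, dtype=float)
    scores = []
    for _ in range(n_samples):
        noisy = np.clip(alpha + sigma * rng.standard_normal(alpha.shape), 1e-3, None)
        scores.append(dirichlet_entropy(noisy))
    return float(np.mean(scores))  # larger entropy -> flag as out-of-distribution


if __name__ == "__main__":
    # Peaked concentrations, as a prior network tends to output in-distribution.
    print(ood_score([50.0, 1.0, 1.0, 1.0]))  # low entropy
    # Flat concentrations, typical of out-of-distribution inputs.
    print(ood_score([1.0, 1.0, 1.0, 1.0]))   # high entropy

In practice the concentration parameters would come from the softmax-free output of the pre-trained network, and the entropy score would be thresholded to decide whether an image is out-of-distribution.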

