Interpreting deep learning output for out-of-distribution detection

11/07/2022
by Damian Matuszewski, et al.

Commonly used deep learning networks are highly confident in their predictions, even when the evidence for a decision is dubious. Investigating a deep learning model's output is pivotal for understanding its decision process and for assessing its capabilities and limitations. By analyzing the distributions of raw network output vectors, it can be observed that each class has its own decision boundary and, thus, the same raw output value carries different support for different classes. Inspired by this observation, we have developed a new method for out-of-distribution (OOD) detection. The method offers an explanatory step beyond simple thresholding of the softmax output, towards understanding and interpreting the model's learning process and its output. Instead of assigning the class label of the highest logit to each new sample presented to the network, it takes the distributions over all classes into consideration. A probability score interpreter (PSI) is created from the joint logit values in relation to their respective correct vs. wrong class distributions. The PSI indicates whether the sample is likely to belong to a specific class, whether the network is unsure, or whether the sample is likely an outlier or of a type unknown to the network. The simple PSI has the benefit of being applicable to already trained networks: the correct vs. wrong class distributions for each output node are established by simply running the training examples through the trained network. We demonstrate our OOD detection method on a challenging transmission electron microscopy virus image dataset, simulating a real-world application in which images of virus types unknown to a trained virus classifier, yet acquired with the same procedures and instruments, constitute the OOD samples.
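To make the correct vs. wrong logit-distribution idea concrete, the sketch below shows one way such per-class distributions could be collected from an already trained classifier and used to interpret a new logit vector. This is a minimal illustration under stated assumptions, not the paper's actual PSI implementation: the function names (fit_logit_distributions, psi_score), the percentile-based thresholds, and the three-way decision labels are introduced here purely for clarity.

```python
# Minimal sketch of the correct-vs-wrong logit-distribution idea described above.
# NOT the authors' implementation; helper names, percentile thresholds, and the
# decision labels are illustrative assumptions.
import numpy as np

def fit_logit_distributions(logits, labels, num_classes):
    """For each output node c, collect logit values from training samples whose
    true class is c ("correct") and from samples of all other classes ("wrong")."""
    correct = [logits[labels == c, c] for c in range(num_classes)]
    wrong = [logits[labels != c, c] for c in range(num_classes)]
    return correct, wrong

def psi_score(sample_logits, correct, wrong, q=5.0):
    """Interpret one logit vector against the per-class distributions.
    A class is 'supported' if its logit exceeds the q-th percentile of that
    class's correct distribution; it is 'rejected' if it falls below the
    (100-q)-th percentile of that class's wrong distribution."""
    supported, rejected = [], []
    for c, z in enumerate(sample_logits):
        lo_correct = np.percentile(correct[c], q)
        hi_wrong = np.percentile(wrong[c], 100 - q)
        if z >= lo_correct:
            supported.append(c)
        elif z <= hi_wrong:
            rejected.append(c)
    if len(supported) == 1:
        return ("in-distribution", supported[0])
    if len(supported) > 1 or len(rejected) < len(sample_logits):
        return ("unsure", supported)
    return ("likely OOD", None)  # every class's logit looks like a "wrong" value

# Toy usage with random numbers standing in for network logits.
rng = np.random.default_rng(0)
num_classes, n_train = 3, 300
train_labels = rng.integers(0, num_classes, n_train)
train_logits = rng.normal(0, 1, (n_train, num_classes))
train_logits[np.arange(n_train), train_labels] += 4.0  # true class gets higher logits
correct, wrong = fit_logit_distributions(train_logits, train_labels, num_classes)

print(psi_score(np.array([4.2, -0.3, 0.1]), correct, wrong))  # expect: in-distribution, class 0
print(psi_score(np.array([-0.2, 0.0, 0.3]), correct, wrong))  # expect: likely OOD
```

In this toy setup, a sample whose logit falls inside the correct-class distribution of exactly one class is accepted for that class, a sample that lands between the correct and wrong distributions (or is supported by several classes) is flagged as unsure, and a sample whose logits all resemble wrong-class values is flagged as a likely outlier.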


Related research

09/20/2018 - Distribution Networks for Open Set Learning
In open set learning, a model must be able to generalize to novel classe...

04/02/2021 - Multi-Class Data Description for Out-of-distribution Detection
The capability of reliably detecting out-of-distribution samples is one ...

04/17/2020 - One-vs-Rest Network-based Deep Probability Model for Open Set Recognition
Unknown examples that are unseen during training often appear in real-wo...

12/13/2015 - Deep Learning-Based Image Kernel for Inductive Transfer
We propose a method to classify images from target classes with a small ...

07/10/2021 - Hack The Box: Fooling Deep Learning Abstraction-Based Monitors
Deep learning is a type of machine learning that adapts a deep hierarchy...

03/07/2023 - Predicted Embedding Power Regression for Large-Scale Out-of-Distribution Detection
Out-of-distribution (OOD) inputs can compromise the performance and safe...

06/23/2022 - Inductive Conformal Prediction: A Straightforward Introduction with Examples in Python
Inductive Conformal Prediction (ICP) is a set of distribution-free and m...
