Detecting unusual input to neural networks

06/15/2020
by Jörg Martin, et al.

Evaluating a neural network on an input that differs markedly from the training data can produce erratic and flawed predictions. We study a method that judges how unusual an input is by evaluating its information content relative to the learned parameters. This technique can be used to judge whether a network is suitable for processing a given input and to raise a red flag that unexpected behavior may lie ahead. We compare our approach to several uncertainty-evaluation methods from the literature across a range of datasets and scenarios. In particular, we introduce a simple, effective method that makes the outputs of such metrics directly comparable for single input points, even when the metrics live on different scales.
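One plausible way to realize the cross-scale comparison the abstract describes is rank-based calibration: map each metric onto a common [0, 1] scale via its empirical quantile among reference values computed on held-out, in-distribution data. The sketch below is a minimal illustration of that idea, not the paper's exact procedure; the function name, the synthetic gamma-distributed reference scores, and the example query values are all hypothetical.

import numpy as np

def empirical_quantile(reference_scores, score):
    """Place `score` on a common [0, 1] scale: the fraction of reference
    values (the metric evaluated on held-out, in-distribution data) that
    lie at or below it."""
    ref = np.sort(np.asarray(reference_scores))
    return np.searchsorted(ref, score, side="right") / len(ref)

# Hypothetical reference distributions for two metrics on different scales.
rng = np.random.default_rng(0)
entropy_ref = rng.gamma(shape=2.0, scale=1.0, size=10_000)   # e.g. predictive entropy
distance_ref = rng.gamma(shape=5.0, scale=2.0, size=10_000)  # e.g. feature-space distance

# For a single new input, both metrics become directly comparable:
print(empirical_quantile(entropy_ref, 7.5))   # near 1.0 -> input looks unusual
print(empirical_quantile(distance_ref, 9.0))  # mid-range -> input looks unremarkable

Expressed this way, a quantile of 0.99 means the input scores higher than 99% of the reference data under that metric, regardless of the metric's native units, so thresholds and comparisons can be stated once for all metrics.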

Related research

05/18/2023 · Evaluation Metrics for CNNs Compression
There is a lot of research effort devoted by researchers to developing ...

06/10/2019 · Quantifying Layerwise Information Discarding of Neural Networks
This paper presents a method to explain how input information is discard...

03/26/2023 · An Evaluation of Memory Optimization Methods for Training Neural Networks
As models continue to grow in size, the development of memory optimizati...

01/10/2022 · Evaluation of Neural Networks Defenses and Attacks using NDCG and Reciprocal Rank Metrics
The problem of attacks on neural networks through input modification (i....

03/21/2019 · Interpreting Neural Networks Using Flip Points
Neural networks have been criticized for their lack of easy interpretati...

09/07/2022 · A simple approach for quantizing neural networks
In this short note, we propose a new method for quantizing the weights o...
