Into the unknown: Active monitoring of neural networks

09/14/2020
by Anna Lukina et al.

Machine-learning techniques achieve excellent performance in modern applications. In particular, neural networks make it possible to train classifiers, often used in safety-critical applications, for a variety of tasks without human supervision. However, neural-network models have no means to identify what they do not know, nor to interact with a human user before making a decision. When deployed in the real world, such models work reliably in scenarios they have seen during training; in unfamiliar situations, they can exhibit unpredictable behavior that compromises the safety of the whole system. The typical approach to adapting to unknown input classes is to retrain the classifier on an augmented training dataset, but the system remains vulnerable until such a dataset is available. We propose an algorithmic framework for active monitoring of neural-network classifiers that allows their deployment in dynamic environments where unknown input classes appear frequently. Based on quantitative monitoring of the feature layer, we detect novel inputs and ask an authority for labels, enabling the framework to adapt to these novel classes incrementally and thereby improving the short-term reliability of the classification. A neural network wrapped in our framework achieves higher classification accuracy on unknown input classes over time than the original standalone model.
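The abstract above describes monitoring the feature layer, flagging novel inputs, querying an authority for labels, and adapting incrementally. The paper's actual monitor is not specified here, so the following is only a minimal sketch of that loop under an assumed novelty test: per-class feature centroids with a distance threshold. All names (`ActiveMonitor`, `observe`, `oracle`) are hypothetical illustrations, not the authors' API.

```python
import numpy as np

class ActiveMonitor:
    """Sketch of feature-layer monitoring with an authority fallback.

    Keeps a running centroid per known class. An input whose feature
    vector is farther than `threshold` from every centroid is treated
    as novel: the monitor queries an external authority for a label
    and then incorporates the example incrementally.
    """

    def __init__(self, threshold):
        self.threshold = threshold
        self.centroids = {}  # label -> centroid of feature vectors
        self.counts = {}     # label -> number of examples seen

    def _nearest(self, features):
        # Return the closest known class and its distance.
        best_label, best_dist = None, float("inf")
        for label, c in self.centroids.items():
            d = np.linalg.norm(features - c)
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label, best_dist

    def observe(self, features, oracle):
        """Classify `features`; ask `oracle` for a label if novel."""
        label, dist = self._nearest(features)
        if label is None or dist > self.threshold:
            label = oracle(features)  # query the authority
        self._update(label, features)
        return label

    def _update(self, label, features):
        # Incremental mean update, so adaptation needs no retraining.
        n = self.counts.get(label, 0)
        c = self.centroids.get(label, np.zeros_like(features))
        self.centroids[label] = (c * n + features) / (n + 1)
        self.counts[label] = n + 1
```

In use, `features` would be the wrapped network's feature-layer activations and `oracle` the human authority; the key design point mirrored from the abstract is that each labeled novel input immediately updates the monitor, rather than waiting for an augmented dataset and a full retraining cycle.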


Related research

12/03/2019
FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks
With a constant improvement in the network architectures and training me...

11/20/2019
Outside the Box: Abstraction-Based Monitoring of Neural Networks
Neural networks have demonstrated unmatched performance in a range of cl...

10/21/2021
RoMA: a Method for Neural Network Robustness Measurement and Assessment
Neural network models have become the leading solution for a large varie...

07/03/2020
Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring
Inference accuracy of deep neural networks (DNNs) is a crucial performan...

05/21/2022
Gradient Concealment: Free Lunch for Defending Adversarial Attacks
Recent studies show that the deep neural networks (DNNs) have achieved g...

02/05/2021
Interpretable Neural Networks based classifiers for categorical inputs
Because of the pervasive usage of Neural Networks in human sensitive app...

09/18/2018
Runtime Monitoring Neural Activation Patterns
For using neural networks in safety critical domains, it is important to...
