Related research:

- FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks
  With a constant improvement in the network architectures and training me...
- Outside the Box: Abstraction-Based Monitoring of Neural Networks
  Neural networks have demonstrated unmatched performance in a range of cl...
- TamperNN: Efficient Tampering Detection of Deployed Neural Nets
  Neural networks are powering the deployment of embedded devices and Inte...
- Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring
  Inference accuracy of deep neural networks (DNNs) is a crucial performan...
- Runtime Monitoring Neural Activation Patterns
  For using neural networks in safety critical domains, it is important to...
- Exploratory Machine Learning with Unknown Unknowns
  In conventional supervised learning, a training dataset is given with gr...
- No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems
  In real-world classification tasks, each class often comprises multiple ...
Into the unknown: Active monitoring of neural networks
Machine-learning techniques achieve excellent performance in modern applications. In particular, neural networks make it possible to train classifiers for a variety of tasks without human supervision, and such classifiers are often used in safety-critical applications. Yet neural-network models have neither the means to identify what they do not know nor the means to interact with a human user before making a decision. When deployed in the real world, such models work reliably in scenarios they have seen during training; in unfamiliar situations, however, they can exhibit unpredictable behavior that compromises the safety of the whole system. We propose an algorithmic framework for active monitoring of neural-network classifiers that allows for their deployment in dynamic environments where unknown input classes appear frequently. Based on quantitative monitoring of the feature layer, we detect novel inputs and ask an authority for labels, enabling the classifier to adapt to these novel classes. A neural network wrapped in our framework achieves, over time, higher classification accuracy on unknown input classes than the original standalone model. The typical approach to adapting to unknown input classes is to retrain the neural-network classifier on an augmented training dataset; until such a dataset is available, however, the system remains vulnerable. Owing to the underlying monitor, our framework instead adapts to novel inputs incrementally, improving the short-term reliability of the classification.
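The abstract describes the monitoring loop only at a high level: watch the feature layer, flag novel inputs, query a human authority for labels, and adapt incrementally. Below is a minimal sketch of one way such a wrapper could look, assuming a nearest-centroid novelty test over feature-layer embeddings; the class ActiveMonitor, its threshold parameter, and the oracle callback are illustrative assumptions, not the authors' actual algorithm.

```python
# Hypothetical sketch of an active-monitoring wrapper. The centroid-distance
# novelty test stands in for the paper's quantitative feature-layer monitor.
import numpy as np

class ActiveMonitor:
    def __init__(self, feature_fn, threshold):
        self.feature_fn = feature_fn  # maps an input to its feature-layer vector
        self.threshold = threshold    # max distance at which an input counts as "known"
        self.centroids = {}           # class label -> (running mean vector, sample count)

    def observe(self, x, oracle):
        """Classify x; on novelty, ask the authority and adapt incrementally."""
        f = np.asarray(self.feature_fn(x), dtype=float)
        label, dist = self._nearest_centroid(f)
        if label is None or dist > self.threshold:
            # Novel input: query the human authority for the true label
            # instead of waiting for an augmented dataset and retraining.
            label = oracle(x)
        self._update_centroid(label, f)
        return label

    def _nearest_centroid(self, f):
        best, best_dist = None, float("inf")
        for label, (mean, _) in self.centroids.items():
            d = np.linalg.norm(f - mean)
            if d < best_dist:
                best, best_dist = label, d
        return best, best_dist

    def _update_centroid(self, label, f):
        mean, n = self.centroids.get(label, (np.zeros_like(f), 0))
        # Running-mean update: O(1) per sample, so adaptation is incremental.
        self.centroids[label] = ((mean * n + f) / (n + 1), n + 1)
```

In this sketch, a deployed classifier would pass its penultimate-layer embedding function as feature_fn; the threshold then trades off how often the authority is queried against how many unfamiliar inputs are silently misclassified.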