Runtime Monitoring Neuron Activation Patterns

09/18/2018 ∙ by Chih-Hong Cheng, et al. ∙ fortiss

For using neural networks in safety-critical domains, it is important to know whether a decision made by a neural network is supported by prior similarities in training. We propose runtime neuron activation pattern monitoring: after the standard training process, one creates a monitor by feeding the training data to the network again in order to store the neuron activation patterns in abstract form. In operation, a classification decision over an input is further supplemented by examining whether a pattern similar (measured by Hamming distance) to the generated pattern is contained in the monitor. If the monitor does not contain any similar pattern, it raises a warning that the decision is not based on the training data. Our experiments show that, by adjusting the similarity threshold for activation patterns, the monitors can report a significant portion of misclassifications as not supported by training, with a small false-positive rate, when evaluated on a test set.


I Introduction

For highly automated driving, neural networks are the de facto option for vision-based perception. Nevertheless, one fundamental challenge in using neural networks in such a safety-critical application is to understand whether a trained neural network performs inference “outside its comfort zone”. This occurs when the network must extrapolate significantly from what it has learned (or remembered) from the training data, because no similar data appeared in the training process.

In this paper, we address this problem by runtime monitoring of neuron activation patterns, where the underlying workflow is illustrated in Figure 1. After completing the training process, one records the neuron activation patterns of close-to-output neural network layers for all correctly predicted data used in the training process. Neurons in close-to-output layers in general represent high-level features, as demonstrated by recent approaches in interpreting neural networks [12]. As state-of-the-art neural networks commonly use ReLU or its variants as the activation function, we select the ReLU on-off activation pattern to record the presence or absence of high-level features. At the same time, on-off patterns allow efficient storage using binary decision diagrams (BDDs) [1]. In operation, a classification decision is supplemented with a BDD-based monitor to detect whether the provided input has triggered an unseen neuron activation pattern; whenever an unseen activation pattern appears, the decision made by the neural network is considered to be less reliable. For the example in Figure 1-(b), the scooter is classified as a car, but as its neuron activation pattern is not among the existing patterns created from the training data, the monitor reports that the decision made by the neural network can be problematic. The frequent appearance of unseen patterns provides an indicator of data distribution shift to the development team; such information is helpful as it may indicate that a neural network deployed on an autonomous vehicle needs to be updated.

Fig. 1: High-level workflow on runtime monitoring neuron activation patterns.

Nevertheless, for such an approach to be useful, we encounter the technical difficulty that the created monitor needs the right coarseness of abstraction: abstract enough, but not too abstract. An illustration can be found in Figure 2: given the set of all activation patterns visited on the training data, an abstraction that contains only exactly these visited patterns allows nearly no generalization, so that essentially all data encountered in operation is reported as “not visited”. On the other hand, an abstraction that covers the complete pattern space is too coarse, in that every pattern observed in operation is identified as “visited”; such a monitor is also not useful. Overall, we have applied the following techniques to control the coarseness of the abstraction.

(Enlarge the abstraction)

Apart from merely including visited patterns, we further enlarge the pattern space by considering all neuron activation patterns whose Hamming distance to an existing pattern is within a certain threshold. This enlargement can be implemented efficiently using existential quantification, an operation commonly available in BDD software packages. Adding these additional patterns does not affect runtime performance: due to the use of BDDs, the membership query during operation remains, in the worst case, linear in the number of neurons of the monitored layer. In addition, we apply gradient-based sensitivity analysis [9] to only monitor important neurons, thereby allowing unmonitored neurons to take arbitrary values in the abstraction. This also overcomes the practical limitation that the maximum number of BDD variables one can use is around a few hundred.

(Infer when to stop enlarging)

To ensure that the abstraction is not too coarse, we take a validation set (which is expected to follow the same distribution as the data in operation, but with ground-truth labels) and gradually increase the Hamming distance threshold such that, within the created region of abstraction, whenever an out-of-pattern scenario occurs it is also likely that a misclassification occurs. We applied this concept to decide the coarseness of abstraction for classifying standard image benchmarks such as MNIST [5] and the German Traffic Sign Recognition Benchmark (GTSRB) [10], as well as for a vision-based front-car detector for automated highway piloting.
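The following is a minimal sketch of this calibration loop, assuming a hypothetical monitor object with a contains(pattern) membership query and pre-computed validation patterns, predictions, and ground-truth labels; all names are illustrative and not taken from the paper's implementation.

# Sweep the Hamming-distance threshold gamma and report, for each value, how
# often the monitor flags a validation input (out-of-pattern rate) and how
# often a flagged input is actually misclassified.
def calibrate_gamma(build_monitor, patterns, predictions, labels, max_gamma=5):
    for gamma in range(max_gamma + 1):
        monitor = build_monitor(gamma)
        flagged = [not monitor.contains(p) for p in patterns]
        out_of_pattern_rate = sum(flagged) / len(patterns)
        wrong_among_flagged = sum(
            1 for f, y_hat, y in zip(flagged, predictions, labels)
            if f and y_hat != y)
        precision = wrong_among_flagged / max(sum(flagged), 1)
        print(gamma, out_of_pattern_rate, precision)

# One would stop enlarging once the out-of-pattern rate is acceptably small
# while a flagged input still has a substantial chance of being misclassified.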

Fig. 2: Finding “just-right” abstraction for runtime monitors

The rest of the paper is structured as follows. Section II describes how to build neuron activation pattern monitors with the use of BDDs. Section III gives examples in terms of controlling the coarseness of abstraction. We summarize related work in Section IV and conclude in Section V with further research directions.

II Building Neuron Activation Pattern Monitors

We describe the underlying principles of our runtime monitoring approach for neural networks. For simplicity, the presented algorithm targets image classification, and we focus on runtime monitoring of fully-connected neural network layers. Monitoring convolutional layers can be achieved by treating layers with convolutional filters as fully-connected layers in which missing connections are assigned zero weights.

A neural network is comprised of $L$ layers where, operationally, the $l$-th layer (for $l \in \{1, \dots, L\}$) of the network is a function $f^{(l)} : \mathbb{R}^{d_{l-1}} \to \mathbb{R}^{d_l}$, with $d_l$ being the dimension of layer $l$. Given an input $\mathbf{in} \in \mathbb{R}^{d_0}$, the output of the $l$-th layer of the neural network is given by the functional composition of the $l$-th layer and the previous layers, i.e., $f^{(1..l)}(\mathbf{in}) := f^{(l)} \circ \dots \circ f^{(1)}(\mathbf{in})$. For a neural network classifying $C$ categories, $d_L = C$. Given the computed output $f^{(1..L)}(\mathbf{in}) = (o_1, \dots, o_C)$, the decision of classifying input $\mathbf{in}$ to a certain class is based on choosing the index with the maximum value among the elements of the output vector, i.e., $\arg\max_{j \in \{1,\dots,C\}} o_j$.

An important case in modern neural networks is the use of layers implementing the Rectified Linear Unit (ReLU), where the corresponding function $f^{(l)} : \mathbb{R}^{d_l} \to \mathbb{R}^{d_l}$ maintains the input dimension and transforms an input vector element-wise by keeping its positive part, i.e., $f^{(l)}(x) = (\max(0, x_1), \dots, \max(0, x_{d_l}))$ for $x = (x_1, \dots, x_{d_l})$.

By interpreting an input element $x_i$ to the ReLU layer as a feature intensity, if $x_i$ has a value greater than zero, then the feature is considered to be activated, while a value less than or equal to zero means the feature is suppressed by ReLU. With this intuition in mind, our definition of a neuron activation pattern is based on capturing the activation and suppression of features.

Definition 1 (Neuron activation pattern)

Given a neural network with input $\mathbf{in}$ whose $l$-th layer is ReLU, the neuron activation pattern at layer $l$ is defined as

$p_l(\mathbf{in}) := (b(x_1), \dots, b(x_{d_l})),$

where $(x_1, \dots, x_{d_l}) = f^{(1..l)}(\mathbf{in})$ is the output of layer $l$, and $b$ captures the activation cases:

$b(x_i) := \begin{cases} 1 & \text{if } x_i > 0, \\ 0 & \text{otherwise.} \end{cases}$
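As an illustration of how such a pattern can be obtained in practice, the following is a minimal PyTorch sketch that captures the on-off pattern of a monitored layer via a forward hook; the layer name model.fc4 is a hypothetical placeholder, not the paper's implementation.

import torch

captured = {}

def record_pattern(module, inputs, output):
    # 1 where the neuron fires (ReLU output > 0), 0 where it is suppressed
    captured["pattern"] = (output > 0).to(torch.uint8).flatten().tolist()

# Attach the hook to the (hypothetical) monitored layer and run inference:
# handle = model.fc4.register_forward_hook(record_pattern)
# prediction = model(image).argmax(dim=1)
# pattern = captured["pattern"]   # the binary activation pattern p_l(in)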

Let $\mathcal{T}$ denote the set of training inputs and let $\mathcal{T}_c \subseteq \mathcal{T}$ denote the set of all training images labelled as class $c$ based on the ground truth. For each class $c$, we define the corresponding “comfort zone” of the neural network to be the set of activation patterns visited by all correctly classified training images, together with the other neuron activation patterns that are close (in Hamming distance) to the visited patterns.

Definition 2 ($\gamma$-comfort zone)

Given a neural network and its training set $\mathcal{T}$, the $\gamma$-comfort zone $\mathcal{Z}^{(\gamma)}_c$ for classifying class $c$, under the condition that the $l$-th layer is ReLU, is defined recursively as follows:

$\mathcal{Z}^{(0)}_c := \{\, p_l(\mathbf{in}) \mid \mathbf{in} \in \mathcal{T}_c \text{ and } \mathbf{in} \text{ is correctly classified as } c \,\},$

$\mathcal{Z}^{(k)}_c := \mathcal{Z}^{(k-1)}_c \cup \{\, p \in \{0,1\}^{d_l} \mid \exists\, p' \in \mathcal{Z}^{(k-1)}_c : H(p, p') \le 1 \,\} \quad \text{for } k = 1, \dots, \gamma,$

where $H$ is the function that computes the Hamming distance between two pattern vectors.
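As a small worked example (with hypothetical values, not taken from the paper), suppose the monitored layer has $d_l = 3$ neurons and only the pattern $(1,0,1)$ was visited on correctly classified training images. Then

$\mathcal{Z}^{(0)}_c = \{(1,0,1)\}, \qquad \mathcal{Z}^{(1)}_c = \{(1,0,1),\,(0,0,1),\,(1,1,1),\,(1,0,0)\},$

i.e., the $1$-comfort zone additionally contains every pattern obtained by flipping exactly one bit of the visited pattern.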

Lastly, a neuron activation pattern monitor stores the computed comfort zone for each class using the training data.

Definition 3 (Neuron activation pattern monitor)

Given a neural network for classifying $C$ classes, its training set $\mathcal{T}$, and a user-specified $\gamma$, its neuron activation pattern monitor is defined as $\mathcal{M}^{(\gamma)} := (\mathcal{Z}^{(\gamma)}_1, \dots, \mathcal{Z}^{(\gamma)}_C)$.

Note that as each $\mathcal{Z}^{(\gamma)}_c \subseteq \{0,1\}^{d_l}$, the construction of $\mathcal{M}^{(\gamma)}$ can be done using binary decision diagrams over $d_l$ variables. Algorithm 1 describes how to construct such a monitor, where bdd.emptySet, bdd.or, and bdd.encode are functions used to create an empty set, to perform set union, and to encode an activation pattern into a BDD, respectively. The operation $\exists x_i.(S)$ performs existential quantification on set $S$ over the $i$-th variable.

In Algorithm 1, lines 4 to 8 record all visited patterns to form $\mathcal{Z}^{(0)}_c$. Subsequently, lines 9 to 14 build $\mathcal{Z}^{(\gamma)}_c$ from $\mathcal{Z}^{(0)}_c$. In particular, computing the enlarged $\mathcal{Z}^{(k)}_c$ from $\mathcal{Z}^{(k-1)}_c$ can be achieved efficiently using the existential quantification operation listed in line 12. Consider the case where the current set contains a single pattern $p$; then the operation $\exists x_i.(\{p\})$, for each variable index $i$, creates the set $\{p, p^{(i)}\}$, where $p^{(i)}$ differs from $p$ only at the $i$-th position. The union over the existentially quantified results thus creates an enlarged set containing exactly the additional patterns with Hamming distance $1$ to $p$.

Input: neural network, the $l$-th layer to monitor, training set $\mathcal{T}$, user-specified $\gamma$
Output: runtime activation pattern monitor $\mathcal{M}^{(\gamma)}$
   /* initialize monitors as empty BDDs */
1  for $c \in \{1, \dots, C\}$ do
2      $\mathcal{M}_c$ := bdd.emptySet()
3  end for
   /* iterate over all training images */
4  for $\mathbf{in} \in \mathcal{T}$ do
       /* check if the prediction is correct */
5      if $\mathbf{in}$ is classified to its ground-truth class $c$ then
           /* add the activation pattern to the corresponding BDD */
6          $\mathcal{M}_c$ := bdd.or($\mathcal{M}_c$, bdd.encode($p_l(\mathbf{in})$))
7      end if
8  end for
   /* enlarge each monitor up to Hamming distance $\gamma$ */
9  for $k = 1$ to $\gamma$ do
10     for $c \in \{1, \dots, C\}$ do
11         $\mathcal{M}'_c$ := $\mathcal{M}_c$
12         for $i = 1$ to $d_l$ do  $\mathcal{M}_c$ := bdd.or($\mathcal{M}_c$, $\exists x_i.(\mathcal{M}'_c)$) ;
13     end for
14 end for
return $\mathcal{M}^{(\gamma)}$ := $(\mathcal{M}_1, \dots, \mathcal{M}_C)$
Algorithm 1 Building a neuron activation pattern monitor after training
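For concreteness, the following is a minimal Python sketch of Algorithm 1 using the dd package mentioned in Section III. It assumes dd's autoref API (declare, add_expr, exist, let and the | operator on BDD nodes); the function and variable names are illustrative rather than taken from the paper's implementation.

from dd.autoref import BDD

def build_monitor(patterns_by_class, num_neurons, gamma):
    """patterns_by_class: class -> list of 0/1 activation patterns of
    correctly classified training images (lines 4-8 of Algorithm 1)."""
    bdd = BDD()
    variables = [f"x{i}" for i in range(num_neurons)]
    bdd.declare(*variables)
    monitors = {}
    for c, patterns in patterns_by_class.items():
        m = bdd.false                       # start from the empty set
        for p in patterns:                  # add each visited pattern as a cube
            cube = " & ".join(v if bit else f"~{v}"
                              for v, bit in zip(variables, p))
            m = m | bdd.add_expr(cube)
        for _ in range(gamma):              # lines 9-14: enlarge gamma times
            snapshot = m
            for v in variables:             # existential quantification per variable
                m = m | bdd.exist([v], snapshot)
        monitors[c] = m
    return bdd, monitors

def in_comfort_zone(bdd, monitor, pattern):
    """Runtime membership query for one observed activation pattern."""
    assignment = {f"x{i}": bool(bit) for i, bit in enumerate(pattern)}
    return bdd.let(assignment, monitor) == bdd.true

Taking a snapshot before each enlargement round ensures that every round adds exactly the patterns one additional Hamming-distance step away, matching Definition 2.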

(Neuron selection via gradient analysis) For layers with a large number of neurons, as the practical limit on the number of BDD variables is around a few hundred, one extension is to only monitor the activation patterns over a subset of neurons that are important for the classification decision. One way of selecting the neurons to be monitored is to apply gradient-based sensitivity analysis, similar to the work on saliency maps [9]. The underlying principle is that for the output $o_c$ of the class-$c$ output neuron and the value $v_j$ produced by neuron $n_j$ in the monitored layer, one computes the partial derivative $\partial o_c / \partial v_j$. Subsequently, one selects neuron $n_j$ only if $|\partial o_c / \partial v_j|$ is large, as a change of the value $v_j$ then significantly influences the output $o_c$ due to the derivative term.

As a special case, if one monitors patterns over the neuron layer immediately before the output layer, and there is no non-linear activation in the output layer (which is commonly seen in practice), $\partial o_c / \partial v_j$ is simply the weight connecting $n_j$ to $o_c$.
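A minimal PyTorch sketch of this special case, assuming the monitored layer feeds a final linear layer with no non-linearity; the name classifier for that nn.Linear module is an illustrative assumption.

import torch

def select_neurons(classifier: torch.nn.Linear, target_class: int, k: int):
    # Rank the neurons of the monitored (penultimate) layer by the magnitude
    # of the weight connecting them to the target output class, and keep the
    # k most influential ones for monitoring.
    sensitivity = classifier.weight[target_class].abs()
    return torch.topk(sensitivity, k).indices.tolist()

# Usage: neuron_indices = select_neurons(model.classifier, target_class, k)
# returns the indices of the k neurons whose on-off values are then recorded
# in the monitor, while all other neurons are left unconstrained.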

III Controlling the Abstraction

As stated in the introduction, the coarseness of the abstraction should be carefully chosen to make the resulting monitor useful. Both the number of neurons being monitored and the value of $\gamma$ are hyper-parameters that control the coarseness of the abstraction. We have implemented the concept and examined the effect of different $\gamma$ using the PyTorch machine learning framework (https://pytorch.org/) and the Python-based BDD package dd (https://pypi.org/project/dd/).

Based on two publicly available image classification datasets, MNIST [5] and GTSRB [10], we trained two neural networks. The architectures of the networks are summarized in Table I. After training, we built the runtime monitors based on Algorithm 1. For network 2, in the experiment we (i) only construct the monitor for the stop sign class and (ii) only monitor a subset of the neurons in the layer, selected by the gradient-based analysis. We then gradually increased $\gamma$ and recorded the rate of out-of-pattern images over all validation images, as well as the portion of misclassified images among the out-of-pattern images.

ID  Classifier  Model architecture                                        Accuracy (train/validation)
1   MNIST       ReLU(Conv(40)), MaxPool, ReLU(Conv(20)), MaxPool,
                ReLU(fc(320)), ReLU(fc(160)), ReLU(fc(80)),
                ReLU(fc(40)), fc(10)
2   GTSRB       ReLU(BN(Conv(40))), MaxPool, ReLU(BN(Conv(20))), MaxPool,
                ReLU(fc(240)), ReLU(fc(84)), fc(43)
TABLE I: Architectures and accuracies of the networks used in the experiment. Convolutional layers (Conv) are listed with their number of filters; fc and BN denote fully-connected layers and batch normalization, and MaxPool denotes max pooling. The layer being monitored is highlighted in bold text.
TABLE II: Results of applying runtime neuron activation monitoring (misclassification rates per network).

For network 1 classifying MNIST, the rates of out-of-pattern images are relatively small for all considered values of $\gamma$. For network 2 classifying GTSRB, one can argue that the abstraction using the smallest $\gamma$ is not coarse enough, as the network has a low misclassification rate while the monitor reports that a considerably larger fraction of the images create patterns that are not included in the monitor.

(MNIST, with the selected $\gamma$) If there is no distributional shift in operation, the monitor will not signal a problem during the vast majority of its overall operation time, implying that it is largely silent. Nevertheless, whenever it signals an issue of unseen patterns, apart from arguing that the network is making a decision without prior similarities, one may even argue that there is a non-negligible probability that the decision made by the network is problematic (this argument assumes that the absence of distributional shift implies the rates observed on the validation set carry over to operation), although the neural network may still report that the input is classified to its class with high probability.

(GTSRB, with the selected $\gamma$) If there is no distributional shift in operation, the monitor will not signal a problem during most of its operating time. Whenever it signals an issue of unseen patterns, there is a non-negligible probability that the input is indeed misclassified.

(Case Study) We also applied the runtime monitoring technique to a vision-based front-car detection system for highway piloting. The vision subsystem (cf. Figure 3) contains three components: (1) vehicle detection, (2) lane detection, and (3) front-car selection. The front-car selection unit is implemented using a neural network-based classifier, which takes the lane information and the bounding boxes of vehicles, and produces either the index of the bounding box of the vehicle selected as the front car, or a special class indicating that no forward vehicle is considered to be a front car.

IV Related Work

Using neural networks in safety-critical applications has raised the need for creating dependability claims. Recent results in compile-time formal verification techniques such as Reluplex [4] or Planet [2] use constraint solving to examine whether, for all inputs within a bounded polyhedron, it is possible for the network to generate undesired outputs. These techniques are used when a risk property is provided by domain experts beforehand, and they are limited to piecewise-linear networks with a small number of neurons. Our work on neuron monitoring is more closely related to the concept of runtime verification [6], which examines whether a runtime trace violates a given property. The generalizability condition, as defined by the $\gamma$-comfort zone created after training, can be understood as a safety property. To the best of our knowledge, no existing work in runtime verification considers the problem of generalizability monitoring for neural networks. In terms of scalability, our framework also allows taking arbitrarily large networks with other nonlinear activation functions, so long as the neurons being monitored are ReLU.

Lastly, within machine learning (ML), the work on filtering adversarial attacks [3, 11] relies on creating another ML component to perform detection (thus preventing the network from making wrong decisions). Our proposed method differs from these ML-based approaches in that the sound over-approximation of the visited inputs implies that if the monitor reports the occurrence of an unseen pattern, the occurrence is always genuine. This sure guarantee (in contrast to concepts such as almost-sure guarantees, https://en.wikipedia.org/wiki/Almost_surely, which are the best one can derive with statistical machine learning methods) makes the certification of such a monitor in the safety domain relatively easier. In particular, within the domain of autonomous driving, it is highly likely that the test set used at engineering time deviates from real-world data (the black swan effect), making any probabilistic claim hard to certify.

Fig. 3: High-level architecture of a front-car detection unit for a highway piloting system.

V Concluding Remarks

In this paper, we proposed neuron activation pattern monitoring as a method to detect whether a decision made by a neural network is not supported by prior similarities in training. We envision that a neuron activation pattern monitor can serve as a medium to assist the sensor fusion process at the architecture level, as a decision made by the network may not be fully trusted given that no ground truth is available at operation time.

The established connection between formal methods and machine learning also reveals several possible extensions. (1) The technique should be directly applicable to object detection networks such as YOLO [8], whose underlying principle is to partition an image into a finite grid, with each cell in the grid offering object proposals. (2) We are also studying the feasibility of more refined abstract domains, using tools such as difference bound matrices [7], in order to better capture an abstract representation of the visited activation patterns.

References