Deep Neural Networks (DNNs) are now used in a variety of domains, including speech processing, NLP, medical diagnostics, image processing, robotics, and even reconstruction of brain circuits. The power and accuracy of DNNs have led to deployments of Deep Learning (DL) systems in safety- and security-critical domains, including self-driving cars, malware detection, and aircraft collision avoidance systems. Such domains have a low tolerance for mistakes. The software systems in a self-driving car, for example, must have high assurance in deployment.
Unfortunately, the stochastic nature of DL virtually ensures that DL models will not achieve 100% accuracy, even on the training dataset. Since in mission-critical applications a wrong DNN decision could be costly, we believe that such applications must include logic to (1) check the trustworthiness of a DNN’s output, and (2) raise an alarm when there is low confidence in the output. Our community has developed such methods for programmed components [3, 29, 26] and now is the time to do so for learned ones like DNNs.
Trustworthiness of a simple DNN can be measured with softmax probabilities, or with information-theoretic metrics such as entropy and mutual information. However, such metrics can be misleading: on CIFAR-10, we found that 75% of incorrect predictions had maximum softmax probabilities over 70%, and 63% of incorrect predictions had maximum softmax probabilities over 80%. We observed similar results on other datasets and models. This illustrates the unreliability of softmax probabilities as confidence estimators for the final prediction.
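The measurement behind these numbers, the fraction of misclassified inputs that nonetheless receive a high maximum softmax probability, can be computed directly from a model's output probabilities. A minimal sketch follows; the function name and the toy probability matrix are illustrative, not our experimental data:

```python
import numpy as np

def misclassified_high_confidence_fraction(probs, labels, threshold):
    """Fraction of misclassified inputs whose maximum softmax
    probability exceeds `threshold`."""
    preds = probs.argmax(axis=1)
    wrong = preds != labels            # misclassified inputs
    if not wrong.any():
        return 0.0
    max_probs = probs.max(axis=1)      # the model's "confidence"
    return float((max_probs[wrong] > threshold).mean())

# Toy example: three predictions, two wrong, one of them confident.
probs = np.array([[0.9, 0.1],      # predicted 0, true 0 (correct)
                  [0.8, 0.2],      # predicted 0, true 1 (wrong, confident)
                  [0.55, 0.45]])   # predicted 0, true 1 (wrong, unsure)
labels = np.array([0, 1, 1])
print(misclassified_high_confidence_fraction(probs, labels, 0.7))  # 0.5
```

A high value of this fraction is exactly the failure mode described above: the softmax layer reports high confidence on inputs the model gets wrong.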
Our goal is to build a general-purpose system that monitors a deployed DNN’s predictions during inference, raises an alarm if there is low confidence in the predictions, and provides an alternative prediction that we call an advice. A key challenge in building such a system is finding a source of additional information to check DNN outputs. The inspiration for our work comes from Kaya et al., who study internal DNN behavior. They found that a DNN can reach a correct prediction before
the final layer. In fact, the final layer of a DNN may change a correct internal prediction into an incorrect prediction. This work illustrates that features extracted from internal layers of a DNN contain information that can be used to cross-check a model’s output.
Inspired by Kaya et al.’s work, we define self-checking as a process by which internal DNN layer features are used to check DNN predictions. In this paper we describe a novel self-checking system, called SelfChecker, that triggers an alarm if the internal layer features of the model are inconsistent with the final prediction. SelfChecker also provides advice in the form of an alternative prediction. SelfChecker assumes that the training and validation datasets come from a distribution similar to that of the inputs that the DNN model will face in deployment.
SelfChecker uses kernel density estimation (KDE) to extrapolate the probability density distributions of each layer’s output by evaluating the DNN on the training data. Based on these distributions, the probability density of each layer’s output can be inferred when the DNN is given a test instance; SelfChecker thus measures how similar the layer features of the test instance are to those of the training samples. If a majority of the layers indicate inferred classes that differ from the model prediction, then SelfChecker triggers an alarm. However, not all layers contribute positively to the final prediction. SelfChecker therefore uses a search-based optimization to select an optimal set of layers that yields high-quality alarms and advice.
We evaluated SelfChecker’s alarm and advice mechanisms with experiments on four popular and publicly-available datasets (MNIST, FMNIST, CIFAR-10, and CIFAR-100) and three DNNs (ConvNet, VGG-16, and ResNet-20) against three competing approaches (SelfOracle, Dissector, and ConfidNet). Our results show that SelfChecker achieves the highest F1-score (68.07%), which is 8.77% higher than the next best approach (ConfidNet). Our evaluation of SelfChecker’s DNN prediction-checking runtime shows an acceptable time overhead of 34.98ms. We also compared SelfChecker to the state-of-the-art approach for self-driving car scenarios (SelfOracle), and found that SelfChecker triggers more correct alarms and a comparable number of false alarms.
Our paper makes the following three contributions:
We present the design of SelfChecker, which uses density distributions of layer features and a search-based layer selection strategy to trigger an alarm if a DNN model output has low confidence. We show that SelfChecker achieves better alarm accuracy than previous work.
Unlike existing work, SelfChecker provides advice in the form of an alternative prediction. We find that models on a 10-class dataset can use this advice to achieve higher prediction accuracy.
We demonstrate the effectiveness of SelfChecker’s alarms and advice on publicly available DNNs, ranging from small models (ConvNet) to large and complex models (VGG-16 and ResNet-20), and on self-driving car scenarios. Our implementation is open-source: https://github.com/self-checker/SelfChecker.
II Background and Motivation
In a deep neural network (DNN), an input is fed into the input layer, then passed through a series of hidden layers that extract features from the input using activation functions attached to neurons, and the process concludes with the output layer, which uses the extracted features to output a prediction using either classification (from a categorical set of classes) or regression
(in the form of real-valued ordinals). The behavior of a layer during inference thus can be characterized by its vector of neuron activation outputs. In what follows, we refer to these layer-wise vectors of activation outputs as the layer features analyzed by our approach.
II-A The Promise of Using Layer Features
DNNs make decisions based on features extracted from training data. But how can we judge whether a model is making a wrong decision for a given test instance? One way is to check whether the model has previously observed a similar instance during training. This raises the question of how to define the similarity between a test instance and a training instance. Most existing studies use a distance-based measure, such as Euclidean distance or cosine similarity. We consider this problematic: the inputs are complex enough to require DNNs to extract features, so we doubt that a distance measure defined directly on the inputs can properly capture similarity.
Instead, we use the features of the inputs extracted by internal layers of DNNs to capture similarity. Specifically, we define similarity as the likelihood of the DNN having seen similar layer features during training. We use probability density distributions extrapolated from the training process to measure the similarity between the layer features of a given input and those observed for the training data.
Fig. 1 presents a motivating example where a Convolutional Neural Network (CNN) with three convolutional layers trained on MNIST is used to classify images of digits 3 and 6, while outputting labels “3” and “1” as the respective predictions. To visualize where the features of each layer focus, we apply Grad-CAM  to highlight the attention heatmap on the original images as shown in the bottom two rows of images in Fig. 1. The heatmap images show that different layers have different points of focus. For example, the first and second images of digit 3 are similar to 3 itself, but the third image is closer to digit 2. Similarly, the first image of digit 6 is similar to digit 1, but the second and third images are similar to 6.
Although the CNN misclassifies the second image, in both cases the images appear to be recognized correctly by one or more hidden layers. This example thus illustrates the promise of using layer features to check the model’s classification of a test instance.
DNNs exist in many variants and can be combined to form more complex models. For example, models used in urban flow prediction [32, 25] combine convolutional, graph and recurrent neural nets. However, all these DNNs extract features using internal layers, and that is the focus of our research.
The design we present targets DNN classifiers with convolutional layers and fully-connected layers. Our system also works for regression networks by transforming the network into a binary classification problem. Since our design uses layer features, it should work on other types of DNNs, such as recurrent neural networks. We leave the evaluation of our system on other DNN types to future work.
II-B The Challenges of Using Layer Features
The preceding example also raises two challenges that a technique using layer features must resolve:
Which layers should be selected for checking the classification of a test instance? For example, does selecting more layers lead to a better checker?
How should the features from the different layers be aggregated — either to determine if an alarm should be raised, or to produce alternative advice?
Resolving these questions is the goal of this paper.
Problem statement. Given a trained DNN classifier and a test instance, we aim to develop a systematic method called SelfChecker for determining whether the DNN will misclassify the test instance, based on extensively checking the DNN’s internal features. First, SelfChecker should trigger an alarm if it detects a potential misclassification of the test instance. Second, going beyond previous studies [40, 46, 6], SelfChecker should provide advice once an alarm is triggered, in the form of an alternative classification. Our goal is for SelfChecker to achieve high accuracy in both triggering alarms and offering advice.
III Design of SelfChecker
The goals of SelfChecker are (1) to check a DNN’s prediction, (2) to raise an alarm if the DNN’s prediction is determined to be incorrect, and (3) to provide an advice, or an alternative prediction.
SelfChecker’s training module is used after the model has been trained to configure SelfChecker’s behavior in deployment. The training module uses the training and validation datasets, as well as the trained model to generate a deployment configuration.
SelfChecker’s deployment module runs along with the inference process: it analyses the internal features of a DNN when the model is given a test instance and provides an alarm as well as an advice if it detects an inconsistency in the model’s output. To detect these inconsistencies, the deployment module uses the configuration supplied to it by the training module.
Note that although SelfChecker analyses the features extracted from the internal layers of a DNN, the training module is independent of the model’s architecture and requires no model modifications or retraining. The deployment module, however, is specific to a DNN.
Fig. 2 overviews our approach. Given a DNN model M trained on a training dataset and validated on a validation set, for each layer in M, SelfChecker’s training module first (1) computes layer-wise density distributions of each class using kernel density estimation (KDE) on the training set (Section III-A). Based on these distributions, (2) SelfChecker can estimate the density values of each validation or test instance for each class; the higher a class’s values, the more similar the instance’s features in this layer are to that class. After SelfChecker obtains all estimated density values on the validation set across all layers, it (3) finds the optimal layer combinations that reach the best alarm and advice accuracy. Since different classes produce distinctive feature behaviors in different layers, SelfChecker uses global search to find the optimal layer combinations per class (Section III-B).
Finally, when the model is presented with a test instance in deployment, SelfChecker’s deployment module decides whether to provide an alarm as well as an advice by using (4) the density values and (5) specific layer combinations (Section III-C). We now detail each step in our approach.
III-A KDE of the Training Set
Given a trained classifier with N layers (excluding the input layer) and C classes, let X and Y be the set of training inputs and their corresponding ground-truth labels. Similarly, let X_v, Y_v, and Ŷ_v be the validation inputs, their corresponding ground-truth labels, and the model’s predictions on them.
We denote the outputs of all layers on the training set as feature vectors; a layer with m neurons yields an m-dimensional feature vector per input. We note that the feature vectors are trivially available after each execution of the trained model on a given input. In general, the model focuses on different features in different layers for different classes. SelfChecker’s aim is to compute the probability density of the feature vectors in each layer for each class based on the training set X. Using these probability densities, SelfChecker then estimates how close the features in a specific layer (for a certain input) are to those of the training set.
KDE is a non-parametric method for estimating a probability density function using a finite number of samples from a population [7, 43]. The resulting density function allows the estimation of the relative likelihood of a given random variable. In this paper we use the Gaussian kernel, which works well for the multivariate data common to most datasets and produces smooth functions. Given a data sample {x_1, ..., x_n}, SelfChecker estimates the kernel density function as:

f̂(x) = (1 / (n h)) Σ_{i=1}^{n} K((x − x_i) / h),

where K is the Gaussian kernel function and h is the bandwidth.
To see how a KDE with Gaussian kernels works, consider Fig. 3. First, each observation in the sample is replaced with a Gaussian curve centered at that value (green curves); these act as the kernels. The green curves are then summed to compute the value of the density at each point. Fig. 3(b) also shows the normalized curve (in blue) whose area under the curve is 1. The bandwidth parameter h of the KDE controls how tightly the estimate is fit to the sample data. It corresponds to the width of the kernels (green lines in Fig. 3(b)). Fig. 3(c) shows that if h is large, the curve is smooth but flat; if h is small, the curve is peaked and oscillating. The choice of h is based on the number of sample points and their dimensions.
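The effect of the bandwidth choice can be sketched with `scipy.stats.gaussian_kde` on synthetic 1-D data; the sample and the bandwidth factors below are illustrative assumptions (our implementation operates on multivariate layer features):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)  # toy 1-D "layer feature"

# Bandwidth h controls smoothness: Scott's rule by default,
# or explicit factors for comparison.
kde_scott = gaussian_kde(sample)                    # h from Scott's rule
kde_smooth = gaussian_kde(sample, bw_method=1.0)    # large h: flat curve
kde_peaky = gaussian_kde(sample, bw_method=0.05)    # small h: oscillating

x = np.linspace(-4, 4, 9)
print(np.round(kde_scott(x), 3))  # estimated densities at grid points
```

Evaluating the returned function at a point gives the estimated relative likelihood of that value under the training-time distribution, which is exactly the quantity SelfChecker compares across classes.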
For each combination of class and layer, SelfChecker uses Gaussian KDE to estimate the density function that the training data for the class induces on the layer’s feature vector. Then given a test instance, SelfChecker estimates the probability density for each class within each layer from the computed density functions. Finally, SelfChecker uses these probability densities to infer classes for each layer, defined as follows:
Definition 1 (Inferred class for a layer)
Given a test instance, the inferred class for a layer l is the class for which the test instance induces the maximum estimated probability density among l’s per-class density functions.
Algorithm 1 details SelfChecker’s procedure for KDE estimation and inference. Lines 1-10 show the Gaussian KDE
used to extrapolate the density distribution functions of the feature vectors per class in each layer. As illustrated in Fig. 1, we want to extrapolate the patterns of the attention overlaid on the raw input. Since input instances of different classes behave differently in different layers (in Fig. 1, the first-layer attention of digit 3 differs from that of digit 6, which in turn differs from digit 6’s second-layer attention), SelfChecker splits the original training input instances according to their true classes (Line 3). Based on these, it obtains the outputs of each layer from the trained model (Line 5). For convolutional layers, SelfChecker uses mean-pooling to reduce dimensions; it then filters out neurons whose values show variance lower than a pre-defined threshold, further reducing the dimension of the feature vectors, since these neurons contribute little information to the KDE (Line 6). SelfChecker then uses the filtered feature vectors to extrapolate the density functions for each layer and class, and stores them (Lines 7-8) so that they can be used for inference on new examples (Lines 11-21).
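The pooling-and-filtering step can be sketched as follows; the helper name `reduce_layer_features`, the array layout, and the threshold value are illustrative assumptions rather than the exact implementation:

```python
import numpy as np

def reduce_layer_features(feats, var_threshold=1e-5):
    """Mean-pool the spatial dimensions of a conv layer's output and
    drop low-variance neurons before KDE (dimension reduction).
    `feats`: (n_samples, H, W, C) for conv layers, (n_samples, d) otherwise."""
    if feats.ndim == 4:                        # conv layer: pool H and W
        feats = feats.mean(axis=(1, 2))        # -> (n_samples, C)
    keep = feats.var(axis=0) > var_threshold   # informative neurons only
    return feats[:, keep], keep

# Toy conv output: 5 samples, 2x2 spatial, 3 channels; channel 2 is constant.
x = np.random.default_rng(1).normal(size=(5, 2, 2, 3))
x[..., 2] = 0.7                                # zero-variance channel
reduced, mask = reduce_layer_features(x)
print(reduced.shape, mask)   # (5, 2) [ True  True False]
```

Dropping near-constant neurons matters in practice: a dimension with (almost) zero variance carries no class-discriminative signal but still inflates the dimensionality that the KDE must cover.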
During inference on a given input instance, SelfChecker first obtains the outputs of each layer (Line 14), from which it removes the values of the neurons filtered in Line 6 (Line 15). It then generates the estimated density values of each class, given the corresponding KDE functions (Lines 16-18). Finally, the layer inference for the input instance is the class that has the maximum density value (Line 19), which indicates that the feature vectors of the input instance in this layer are close to those in the training set that belong to this specific class. For instance, in Fig. 1, the class inferences given by Algorithm 1 in the three layers are 3, 3, 2 for digit 3, and 1, 6, 6 for digit 6, respectively.
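Definition 1’s per-layer inference can be sketched with one KDE per class on synthetic 1-D features; a real layer yields multivariate vectors and one such table per layer, and all names below are illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy 1-D "layer features" for two classes seen during training.
rng = np.random.default_rng(2)
train_feats = {0: rng.normal(-2.0, 0.5, 300),   # class 0 cluster
               1: rng.normal(+2.0, 0.5, 300)}   # class 1 cluster

# One KDE per (layer, class); a single layer is shown for brevity.
kdes = {c: gaussian_kde(f) for c, f in train_feats.items()}

def inferred_class(feat):
    """Definition 1: the class with the maximum estimated density."""
    densities = {c: kde(np.atleast_1d(feat))[0] for c, kde in kdes.items()}
    return max(densities, key=densities.get)

print(inferred_class(-1.8), inferred_class(2.1))  # 0 1
```

A feature value near the class-0 training cluster receives a much higher density under class 0’s KDE, so that class is inferred for the layer, mirroring Line 19 of Algorithm 1.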
III-B Layer Selection
In Section II we noted that different layers have different attentions, but some of these focus on a particular part of the image and may be misleading. For example, in Fig. 1 the second and third layers for the digit 6 disagree with the final prediction. If SelfChecker considers the outputs of these layers, it can detect that the model is not confident about the final output; and if it considers just these layers and uses maximum voting, it can also provide an alternative prediction that correctly classifies this image. Robust layer selection is therefore important for SelfChecker to accurately raise alarms and provide high-quality advice.
We first explain what we mean by a model output’s confidence. Our definition is based on an observation: given a test instance, if the features of the DNN layers disagree with the final prediction, then the decision made by the model on the test instance tends to be incorrect. For example, in Fig. 1 the attentions in the second and third images of a 6 are more similar to those of a 6 than to the final prediction of 1; in this case the model misclassifies the 6 as 1. We evaluated this observation using the Spearman rank-order correlation coefficient and p-values. Spearman rank-order measures the relationship between prediction correctness and the consistency of the inferred layer classes with the final predictions. Our results show that they are correlated, with p-values much less than 0.05 (at most 3.09e-26), on all four evaluated image datasets and three DNN models listed in Table I.
We formally define the confidence conf(x) of a model output given a test instance x as follows:

conf(x) = n_agree / n_sel,

where n_agree is the number of selected layers whose inferred class is the same as the final prediction and n_sel is the number of selected layers for the predicted class. Based on maximum voting, if conf(x) is lower than 0.5, we say that the DNN has low confidence in its prediction for the test instance x.
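The confidence measure is a simple agreement ratio; a sketch using the Fig. 1 example, where the layers infer 6, 6, 1 while the model predicts 1 (the function name is illustrative):

```python
def confidence(inferred_classes, prediction):
    """Fraction of selected layers whose inferred class agrees with
    the model's final prediction."""
    agree = sum(1 for c in inferred_classes if c == prediction)
    return agree / len(inferred_classes)

# Layers infer 6, 6, 1 while the model predicts 1: low confidence.
conf = confidence([6, 6, 1], prediction=1)
print(conf, conf < 0.5)  # 0.3333333333333333 True
```

Only one of the three layers agrees with the prediction, so the confidence falls below 0.5 and the prediction is flagged as low-confidence.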
We now discuss how SelfChecker selects the proper layer combinations for each class to reach a high alarm accuracy (Algorithm 2). We use the training set to estimate the density functions, from which the inferred class for each layer can be obtained for a given input instance. As mentioned in Section II, different layers have different attentions, but some of these may be misleading; we thus use the validation dataset to select layers.
Given the validation dataset, SelfChecker splits the input instances into subsets based on their predictions (Line 2). SelfChecker then generates all possible layer combinations with lengths ranging from 1 to the number of layers, from which it searches for the combination that reaches the highest accuracy for each class (Lines 4-17). To calculate the alarm accuracy, SelfChecker first obtains the inferred class of each layer in the given layer combination (Lines 5-7), based on the KDE inferences generated across all layers by Algorithm 1. To conclude whether or not the model has made a wrong prediction for an input, SelfChecker considers the layers in the layer combination: if a majority of them indicate inferred classes that differ from the model prediction (the confidence is less than 0.5), then SelfChecker concludes that the model is wrong (Line 8). In this case, if the model prediction is indeed different from the true label of the input, the alarm is correct (True Positive); otherwise, it is incorrect (False Positive). SelfChecker uses the F1-score to measure the alarm accuracy (Lines 10-13), and it selects the layer combination with the highest accuracy for the corresponding class (Lines 14-16).
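The exhaustive search over layer combinations can be sketched as follows. This simplified version evaluates a single majority-vote alarm over the whole validation set rather than Algorithm 2’s per-class split, and all names and the toy data are illustrative:

```python
from itertools import combinations

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_layer_combo(layer_infers, preds, labels, n_layers):
    """Try every layer subset; keep the one whose majority-vote alarm
    achieves the highest validation F1 (simplified Algorithm 2)."""
    best, best_f1 = None, -1.0
    for r in range(1, n_layers + 1):
        for combo in combinations(range(n_layers), r):
            tp = fp = fn = 0
            for infers, pred, label in zip(layer_infers, preds, labels):
                agree = sum(infers[l] == pred for l in combo)
                alarm = agree / len(combo) < 0.5   # confidence below 0.5
                wrong = pred != label
                if alarm and wrong: tp += 1        # correct alarm
                elif alarm and not wrong: fp += 1  # false alarm
                elif not alarm and wrong: fn += 1  # missed alarm
            score = f1(tp, fp, fn)
            if score > best_f1:
                best, best_f1 = combo, score
    return best, best_f1

# Validation toy data: 3 layers; layer 2 happens to be the reliable one.
layer_infers = [[0, 1, 1], [1, 1, 0], [0, 0, 0]]
preds  = [0, 1, 0]
labels = [1, 0, 0]
combo, score = best_layer_combo(layer_infers, preds, labels, 3)
print(combo, score)  # (2,) 1.0
```

On this toy data the search discovers that layer 2 alone perfectly separates correct from incorrect predictions; with N layers the search space is 2^N − 1 subsets, which is feasible for the model depths considered here.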
After selecting the layer combinations for the alarm, SelfChecker must determine the layer combinations that give good advice whenever it raises an alarm about a prediction. Algorithm 3 details SelfChecker’s procedure for layer selection to achieve the best advice accuracy. First, SelfChecker splits the validation set into subsets (Line 2), and for each subset it searches for the best layer combination. Given the layers selected for alarms by Algorithm 2, SelfChecker generates the KDE-inferred classes in these layers as in Lines 5-7 of Algorithm 2. Given a test instance, if the confidence of the model prediction is less than 0.5, SelfChecker concludes that the model misbehaved (Line 5). SelfChecker then searches for the best layer combination for the inputs the model predicts as each class (Lines 9-10). Since classes are not all equally confusable, SelfChecker obtains weights for the different combinations (Lines 11-15); for example, a 1 is prone to be misclassified as a 7, but has little chance of being misclassified as a 2. Subsequently, in Lines 17-19, SelfChecker finds the layer combination that achieves the highest accuracy for the case where the layers selected by Algorithm 2 indicate a negative decision (i.e., the model behaves normally).
Boosting strategy: SelfChecker considers both positive and negative decisions made by the layers selected in Algorithm 2 in order to boost the quality of the alarm. In particular, if the layers selected by Algorithm 2 indicate an alarm but the advice given by the advice layers (Line 10) is the same as the model prediction, then SelfChecker does not raise an alarm. Similarly, if the layers selected by Algorithm 2 indicate that the model prediction is correct but the advice given by the advice layers (Line 19) is different from the model prediction, SelfChecker raises an alarm.
III-C Checking the Model in Deployment
SelfChecker checks a trained DNN in deployment. It raises an alarm if it disagrees with the model’s prediction of a given test instance and also generates an advice (alternative prediction). Algorithm 4 presents this process.
First, SelfChecker generates the inferred classes of all layers using the layer outputs and the KDE functions obtained from Algorithm 1. Then, as in Lines 5-7 of Algorithm 2, SelfChecker collects the inferred classes given by the layers selected for the predicted class. If the output class is not the inferred class in the majority of these layers, SelfChecker has an initial alarm that must still go through the boosting strategy (described in the previous section).
Lines 5-18 show that SelfChecker first generates the probabilities of each class given the selected advice layers, weighted by the per-class weights obtained in Algorithm 3. If the class with the largest probability is still different from the model prediction, SelfChecker triggers the alarm and selects the class with the largest probability as the advice; otherwise, it does not trigger the alarm. A similar strategy is used when the alarm is not triggered initially, i.e., when the output class is the inferred class in the majority of the selected layers.
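A simplified sketch of the deployment-time check: the weight structure (a dict keyed by layer and class) and the single advice-layer set are simplifying assumptions relative to the full Algorithm 4, which selects different advice-layer combinations for the positive and negative cases:

```python
def check_prediction(inferred, alarm_layers, advice_layers, weights, pred):
    """Deployment-time check (simplified sketch of Algorithm 4)."""
    # Step 1: majority vote on the alarm layers gives an initial decision.
    agree = sum(inferred[l] == pred for l in alarm_layers)
    initial_alarm = agree / len(alarm_layers) < 0.5

    # Step 2: weighted class scores from the advice layers.
    scores = {}
    for l in advice_layers:
        c = inferred[l]
        scores[c] = scores.get(c, 0.0) + weights.get((l, c), 1.0)
    advice = max(scores, key=scores.get)

    # Step 3 (boosting): in this simplification, whichever way the
    # initial vote went, the final alarm is raised only when the
    # weighted advice disagrees with the model's prediction.
    alarm = advice != pred
    return initial_alarm, alarm, (advice if alarm else None)

# Layers 1 and 2 disagree with the prediction "1" and point to "6".
inferred = {0: 1, 1: 6, 2: 6}
print(check_prediction(inferred, [0, 1, 2], [1, 2], {}, pred=1))
# (True, True, 6)
```

Here the initial majority vote and the weighted advice agree that the prediction is suspect, so the alarm fires and class 6 is offered as the advice.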
IV Evaluation

In this section we present experimental evidence for the effectiveness of SelfChecker. The goal of our evaluation is to answer the following research questions.
[Table I: for each dataset, the number of classes and of training, validation, and test instances; for each DL model, the number of layers and its test accuracy.]
ResNet-20 and ConvNet are seldom used for MNIST and CIFAR-100. We omit their results due to space limitations but will release them with our code. DAVE-2 and Chauffeur for self-driving cars are regression models, so we exclude them from this table.
IV-A Research Questions
RQ1. Alarm Accuracy: How effective is SelfChecker in predicting DNN misclassifications in deployment?
To evaluate the effectiveness of SelfChecker for raising alarms in deployment, we compare its alarm accuracy on the test dataset with related techniques, namely SelfOracle, Dissector, and ConfidNet. For the comparison, we chose the SelfOracle variant, the VAE (variational autoencoder), that achieved the best performance among the SelfOracle variants, with a confidence threshold of 0.05. Since Dissector does not provide a threshold for distinguishing beyond-inputs from within-inputs, we used the validation dataset to choose the threshold with the highest F1-score, and the weight growth type (linear, logarithmic, or exponential, as defined by Dissector) with the highest Area Under Curve (AUC), for each dataset and DNN classifier. We also used the validation dataset to find the failure-prediction threshold that gives ConfidNet its highest F1-score.
RQ2. Advice Accuracy: Does the advice given by SelfChecker improve the accuracy of a DNN?
In cases where SelfChecker raises an alarm about a model prediction, we also determine whether it can provide advice and how accurate that advice is. To answer this question, we compare the advice accuracy of SelfChecker against the accuracy of the original DL model. For self-driving cars, we use the dataset released by SelfOracle; this dataset only includes anomalous/normal labels, which is not enough to provide realistic advice, such as turning right or left.
RQ3. Deployment Time: What is the time overhead of SelfChecker in deployment for a given test instance?
We consider what the different algorithms do in deployment and evaluate the computation time of their deployment-time components. SelfChecker performs DNN computation, KDE inferences, and alarm and advice analysis. SelfOracle uses the reconstructor to compute a loss and an anomaly detector. Dissector generates probability vectors and performs validity analysis. (By contrast, Wang et al. include only validity analysis; we believe that probability vector generation must also be performed during deployment, since it produces the input to validity analysis.) ConfidNet computes an output using two DNNs.
RQ4. Layer Selection: Does the choice of layers for selection by SelfChecker have an impact on its alarm accuracy?
Kaya et al. characterized "over-thinking" as a prevalent weakness of DL models, which occurs when a DL model can reach correct predictions before its final layer. Over-thinking can be destructive when a correct prediction within the hidden layers changes to a misclassification at the output layer (see Section II). It is therefore important to select proper layers for different classes. To evaluate the impact of layer selection on the alarm accuracy, we experimented with three layer selection strategies, discussed in Section IV-C (RQ4).
RQ5. Boosting Strategy: Does the boosting strategy improve SelfChecker’s alarm accuracy, particularly in terms of decreasing the number of false alarms?
IV-B Experimental Setup
We evaluate SelfChecker on four popular datasets (MNIST, FMNIST, CIFAR-10, and CIFAR-100) using three DL models (ConvNet, VGG-16, and ResNet-20). We also compare the alarm accuracy of SelfChecker against SelfOracle for self-driving car scenarios evaluated on two publicly-available DL models, NVIDIA’s DAVE-2 and Chauffeur. To reduce the possibility of fluctuation due to randomness, we ran all experiments involving MNIST, FMNIST, CIFAR-10, and CIFAR-100 three times and computed the average of all metrics. For the experiments involving the driving datasets, we ran each experiment just once, since we used pre-trained models released by the authors of SelfOracle. We conducted all experiments on an Ubuntu 18.04 server with an Intel i9-10900X (10-core) CPU @ 3.70GHz, one RTX 2070 SUPER GPU, and 64GB RAM.
[Table II: alarm accuracy (TPR %, FPR %, F1 %) of each approach for every dataset and DL model.]
SO, DT, CN, and SC stand for SelfOracle, Dissector, ConfidNet, and SelfChecker, respectively.
Datasets and DL models. Table I lists the number of classes and the number of training, validation, and test instances in each dataset, as well as the number of layers and the test accuracy of each trained DL model. These datasets are widely used and each is a collection of images. ConvNet, VGG-16, and ResNet-20 are commonly-used DL models whose sizes range from small to large, with the number of layers ranging from 8 to 20. Table I presents the accuracy of each model on each dataset; these accuracies are similar to the state-of-the-art. As mentioned in Section III, SelfChecker has a training module and a deployment module: the training and validation datasets were used in the training module, and the test dataset was used in the deployment module to evaluate SelfChecker’s performance.
For our experiments with NVIDIA’s DAVE-2 and Chauffeur for self-driving cars, we used the dataset and pre-trained models released by the authors of SelfOracle. There are 37,947 training images and 9,486 validation images, with 134,820 testing images for DAVE-2 and 250,830 for Chauffeur. The testing images were collected by a self-driving car equipped with each of the two trained DL models in turn, with collection stopping when the car collides or goes out of bounds; the testing images therefore differ between the two DL models. DAVE-2 contains five convolutional layers followed by three fully-connected layers, while Chauffeur consists of six convolutional layers followed by one fully-connected layer.
Configurations. As discussed in Section III, we filter out neurons whose activation values show variance lower than a pre-defined threshold (Algorithm 1), as these neurons do not contribute much information to the KDE. For all research questions we use the same fixed default variance threshold, and the bandwidth h for KDE is set using Scott’s Rule based on the number of data points and their dimensions.
Metrics. Given the KDE inferences of the selected layers, if more layers disagree than agree with the model output, SelfChecker triggers an alarm. We compute the confusion metrics (TP, FP, TN, and FN) as our measurement. Consequently, a True Positive (TP) is defined when SelfChecker triggers an alarm to predict a misclassification where the model output is indeed wrong. Conversely, a False Negative (FN) occurs when SelfChecker does not trigger an alarm on a real misclassification by the model. A False Positive (FP) represents a false alarm by SelfChecker, whereas True Negative (TN) cases occur when SelfChecker is silent on correct classifications. Our goal is to achieve (1) a high true positive rate (TPR = TP / (TP+FN)), (2) a low false positive rate (FPR = FP / (TN+FP)), and (3) a high F1-score (F1 = (2 * TP) / ((2 * TP) + FN + FP)).
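These metrics can be computed directly from per-instance alarm and misclassification flags; a small sketch (positives are misclassifications, matching our definition; the toy flags are illustrative):

```python
def alarm_metrics(alarms, wrongs):
    """TPR, FPR, F1 for alarm quality, where positives are
    misclassifications by the model."""
    tp = sum(a and w for a, w in zip(alarms, wrongs))          # correct alarm
    fp = sum(a and not w for a, w in zip(alarms, wrongs))      # false alarm
    fn = sum(not a and w for a, w in zip(alarms, wrongs))      # missed alarm
    tn = sum(not a and not w for a, w in zip(alarms, wrongs))  # correct silence
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (tn + fp) if tn + fp else 0.0
    f1 = 2 * tp / (2 * tp + fn + fp) if tp else 0.0
    return tpr, fpr, f1

alarms = [True, True, False, False]   # did the checker raise an alarm?
wrongs = [True, False, True, False]   # did the model actually misclassify?
print(alarm_metrics(alarms, wrongs))  # (0.5, 0.5, 0.5)
```

Note that F1 here rewards alarms on real misclassifications and penalizes both false and missed alarms, which is why it is the headline metric in the comparison below.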
IV-C Results and Analyses
We now present results that answer our research questions.
RQ1. Alarm Accuracy
Table II presents the alarm accuracies of three DL models (ConvNet, VGG-16, and ResNet-20) in deployment on four datasets (MNIST, FMNIST, CIFAR-10, and CIFAR-100) checked by SelfOracle, Dissector, ConfidNet, and SelfChecker, as well as the alarm accuracies of two self-driving car DL models checked by SelfChecker and SelfOracle, in terms of TPR, FPR, and F1-score. Fig. 4 shows the average confusion metrics over all datasets and DL models. SelfChecker always triggers more correct alarms (TP) and misses fewer true alarms (FN) than SelfOracle and ConfidNet.
On traditional DNN classifiers, SelfChecker correctly triggers an alarm on over half of the misclassifications (average TPR 60.56%), which is much higher than SelfOracle (average TPR 10.65%) and ConfidNet (average TPR 54.30%), and comparable to Dissector (average TPR 60.13%). SelfChecker's highest TPR is 84.22%, meaning that over 80% of misclassifications can be detected. There are, however, four cases in which Dissector achieves a higher TPR. Like SelfChecker, Dissector benefits from internal layer features: it builds several sub-models that are retrained on top of internal layers, so its training process may learn additional information that SelfChecker lacks. But SelfChecker outperforms Dissector on TPR in the majority of cases, which indicates that this additional information is limited. Notably, on all datasets and DNN classifiers, SelfChecker's TPR exceeds that of SelfOracle, which uses no internal information, and ConfidNet, which considers only high-level representations. We thus conclude that the internal layer features used by SelfChecker are important for detecting misclassifications. SelfChecker also achieves a lower FPR than all the competitors, i.e., it triggers few false alarms. This is expected, since the boosting strategy (Section III-B) makes SelfChecker very prudent in triggering alarms. Finally, SelfChecker has a higher F1-score than all competing approaches, with an average of 68.07% against 10.25%, 57.83%, and 59.30% for SelfOracle, Dissector, and ConfidNet, respectively. SelfOracle has worse accuracy on traditional DNN classifiers because it is tailored for time-series analysis of video frame sequences that change little over short periods of time. ConfidNet is trained on top of the original DL model, whose feature-extraction weights are frozen, using the training dataset and a loss function based on the true class probability. Since the training dataset contains few wrong predictions once the original model is trained, overfitting limits ConfidNet's performance. Note that the results of ConfidNet shown in Table II differ from those reported by its authors, since our study regards wrong predictions as positive cases (discussed in Metrics in Section IV-B) whereas the original work regards correct predictions as positive cases.
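For concreteness, the alarm metrics above can be computed as in the following minimal sketch (ours, not the paper's code), which treats a raised alarm as a positive prediction and a true misclassification as a positive ground-truth case, per Section IV-B:

```python
def alarm_metrics(alarms, misclassified):
    """alarms, misclassified: parallel lists of booleans, one per test input."""
    tp = sum(a and m for a, m in zip(alarms, misclassified))
    fp = sum(a and not m for a, m in zip(alarms, misclassified))
    fn = sum(m and not a for a, m in zip(alarms, misclassified))
    tn = sum(not a and not m for a, m in zip(alarms, misclassified))
    tpr = tp / (tp + fn) if tp + fn else 0.0          # recall on misclassifications
    fpr = fp / (fp + tn) if fp + tn else 0.0          # false alarms on correct predictions
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * tpr / (precision + tpr) if precision + tpr else 0.0
    return tpr, fpr, f1
```

Under this convention, an approach that raises alarms exactly on the misclassified inputs scores TPR 1.0, FPR 0.0, and F1 1.0.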
In the self-driving car scenarios, we transformed the regression network that predicts steering angles into a binary classification network that classifies steering angles as either normal or anomalous. Since ConfidNet is based on the true class probability, and Dissector requires the first and second highest class probabilities, neither can be used in the self-driving car scenarios. Given the validation dataset, a Gamma distribution is fitted to the errors between the predictions and the real-valued angles (MSE), and to the density values of each layer generated by Algorithm 1, respectively. Given an ε value of 0.05 (the same as used in SelfOracle), if the error of an instance in the validation dataset is larger than the value corresponding to ε in the fitted Gamma distribution, it is labeled as an anomaly. Similarly, if a density value is less than the value corresponding to ε, it is predicted as an anomaly. We then use SelfChecker to solve the regression problem as a binary classification problem. Table II shows that SelfChecker achieves a higher TPR than SelfOracle on both DAVE-2 and Chauffeur, indicating that SelfChecker triggers more correct alarms. Even though SelfChecker triggers more false alarms for DAVE-2, it also triggers more true alarms (201 against 156 for SelfOracle) and misses only 2 true alarms. In addition, SelfChecker's F1-score is higher than SelfOracle's on both models.
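The fitting-and-thresholding step can be sketched with SciPy. This is our illustration, not the paper's code; the function and variable names are ours, and the ε of 0.05 follows SelfOracle's setting:

```python
# Sketch: fit a Gamma distribution to validation-set statistics and derive
# the epsilon-tail threshold used to label anomalies.
from scipy.stats import gamma

def fit_threshold(values, eps=0.05, upper_tail=True):
    """Return the threshold beyond which only a fraction `eps` of the fitted
    Gamma mass lies. Prediction errors use the upper tail (large error =>
    anomaly); layer density values use the lower tail (low density => anomaly)."""
    shape, loc, scale = gamma.fit(values)
    q = 1.0 - eps if upper_tail else eps
    return gamma.ppf(q, shape, loc=loc, scale=scale)
```

An instance whose prediction error exceeds `fit_threshold(errors)` is then labeled anomalous, and symmetrically for density values below `fit_threshold(densities, upper_tail=False)`.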
For RQ1, we conclude that SelfChecker effectively triggers alarms that predict misbehaviors of DL models in deployment with high TPR and low FPR.
RQ2. Advice Accuracy
Table III compares the accuracies of the original models to those achieved with advice provided by SelfChecker. Even though SelfChecker achieves high alarm accuracies, providing correct advice is challenging: we regard the advice as correct only if the inferred classes of most selected internal layers are the same as the true label. This condition is stricter than the one for triggering an alarm, which only requires the inferred classes of most selected internal layers to differ from the model's prediction.
SC stands for SelfChecker.
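The two majority conditions can be made concrete with a small sketch (ours, not SelfChecker's code); `layer_classes` stands for the classes inferred by the selected internal layers for one input:

```python
from collections import Counter

def raise_alarm(layer_classes, model_pred):
    # Alarm: most selected layers infer a class different from the model's
    # final prediction.
    disagree = sum(c != model_pred for c in layer_classes)
    return disagree > len(layer_classes) / 2

def advice(layer_classes):
    # Advice: the majority class among the selected layers' inferred classes.
    # It is counted as correct only if it equals the true label.
    return Counter(layer_classes).most_common(1)[0][0]
```

Note the asymmetry: an alarm only needs the layers to disagree with the prediction, while correct advice needs them to converge on the true label, which is the stricter requirement discussed above.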
Our results show that even though the trained DL models achieve state-of-the-art accuracies, the advice can still improve the model's prediction accuracy by about 0.138% on datasets with 10 classes, but it decreases prediction accuracy by about 0.65% on datasets with 100 classes. There are two reasons for this. First, finding a correct prediction among 100 classes is a harder problem. Second, the validation set per class is more limited: CIFAR-10 has 1,000 samples per class but CIFAR-100 has only 100. We empirically find that SelfChecker's advice improves the model's prediction accuracy when the number of validation samples per class exceeds 200. The results also show that the advice provided by SelfChecker improves prediction accuracy by at most 0.34% without retraining on additional inputs or changing the architecture. Even though this difference is small, in a safety-critical domain such as self-driving cars, which make tens of decisions per second, a difference of 0.2% over 10,000 decisions translates to 20 fewer misclassifications.
For RQ2, we showed that SelfChecker’s advice can improve the accuracy of the original models beyond their state-of-the-art performance with a sufficiently large validation dataset.
RQ3. Deployment Time
We measured the average time a method takes to check a model's inference on a single input. Table IV lists the average times over all the datasets in Table II for each DNN classifier; the results for DAVE-2 and Chauffeur are for their corresponding self-driving datasets. SelfOracle and ConfidNet take the least time: each uses a single additional DL model, so their deployment checking time is simply the time for two DL models to compute their outputs. However, their alarm accuracies are lower than those of Dissector and SelfChecker. Dissector takes longer than SelfChecker (an average of 50.47 ms vs. 34.98 ms) on traditional DNN classifiers.
SO, DT, CN, and SC stand for SelfOracle, Dissector, ConfidNet, and SelfChecker, respectively.
We believe these checking times are acceptable across a variety of application domains. As is, SelfChecker can be used for applications ranging from medical image-based diagnosis to airport security screening. For real-time applications (e.g., autonomous driving), the latency of SelfChecker and SelfOracle needs to improve; the checking time in the self-driving car scenarios is high because 32 frames must be analyzed before raising an alarm. Efficiency is not this paper's focus, but we acknowledge its importance for cyber-physical systems. We plan to parallelize SelfChecker by using one process per class density function, decreasing latency by a factor of up to the number of classes.
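The planned per-class parallelization can be sketched as follows. This is our illustration, not the authors' implementation: the paper proposes one process per class density function, while for a compact, self-contained example we use a thread pool (NumPy-heavy KDE evaluations largely release the GIL, so threads already overlap); `density_fns`, a list with one callable per class, is a hypothetical stand-in for the per-class KDEs:

```python
# Illustrative sketch only: evaluate every class's density function on the
# same feature vector concurrently instead of sequentially.
from concurrent.futures import ThreadPoolExecutor

def densities_parallel(density_fns, features):
    """density_fns: one callable per class (hypothetical per-class KDEs)."""
    with ThreadPoolExecutor(max_workers=len(density_fns)) as pool:
        # map preserves the class order of density_fns.
        return list(pool.map(lambda fn: fn(features), density_fns))
```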
RQ4. Layer Selection
As discussed in Section III-B, we use search-based optimization to select suitable layers for improving alarm accuracy. We present the results of checking VGG-16 on FMNIST and Chauffeur on the self-driving car dataset in Table V; we omit results for the other models and datasets since they show similar properties. We evaluate three layer selection strategies for triggering alarms and compare them in terms of alarm accuracy. The first strategy randomly selects layers for each class, with the number of layers per class equal to the number selected by our approach, to make the comparison fair. The second strategy uses the full set of layers. The third is our approach described in Section III-B, which selects suitable layers based on the validation dataset. To ensure a fair comparison, none of the strategies uses the boosting strategy.
SC-layer stands for SelfChecker’s layer selection.
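The search-based selection can be illustrated with a greedy sketch (ours; the actual optimization in Section III-B is per class and operates on validation-set layer inferences). Here `layer_votes[l][i]` records whether layer `l` agrees with the model's prediction on validation input `i`, `model_wrong[i]` marks misclassified validation inputs, and we greedily add the layer that most improves the alarm's F1-score:

```python
def f1_of_subset(subset, layer_votes, model_wrong):
    """F1 of the majority-disagreement alarm over the validation set."""
    tp = fp = fn = 0
    for i in range(len(model_wrong)):
        disagree = sum(1 for l in subset if not layer_votes[l][i])
        alarm = disagree > len(subset) / 2
        if alarm and model_wrong[i]:
            tp += 1
        elif alarm:
            fp += 1
        elif model_wrong[i]:
            fn += 1
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

def select_layers(layer_votes, model_wrong):
    """Greedily grow the layer subset while validation F1 keeps improving."""
    remaining, chosen, best = set(layer_votes), [], 0.0
    improved = True
    while improved and remaining:
        improved = False
        for l in sorted(remaining):
            score = f1_of_subset(chosen + [l], layer_votes, model_wrong)
            if score > best:
                best, pick, improved = score, l, True
        if improved:
            chosen.append(pick)
            remaining.discard(pick)
    return chosen, best
```

This mirrors why using all layers is suboptimal: layers that usually agree with the model dilute the majority vote, so the search stops adding layers once F1 no longer improves.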
The results in Table V indicate that SelfChecker's layer selection strategy always achieves the highest TPR and F1-score compared to random selection and full selection. Even though using all layers to decide whether to trigger an alarm achieves a lower FPR than our approach, it misses 108 and 17 more correct alarms on FMNIST and the driving dataset, respectively. Selecting more layers therefore does not lead to a better checker.
For RQ4, we conclude that a careful selection of layers allows SelfChecker to identify more misclassifications and raise more correct alarms.
RQ5. Boosting Strategy
Table VI presents the alarm accuracies of SelfChecker with (SC) and without (SC-b) the boosting strategy described in Section III-B, for ResNet-20 on FMNIST and CIFAR-100; we omit results for the other models and datasets since they show similar properties. As Table VI indicates, adopting the boosting strategy yields a much lower FPR (lower is better) than SC-b, along with a higher F1-score (higher is better).
CIFAR stands for CIFAR-100.
For RQ5, we showed that the boosting strategy significantly improves alarm accuracy by reducing false alarms.
V Related Work
Most studies that check DL model trustworthiness focus on the model engineering process: generating adversarial test instances [9, 51, 30, 31, 47, 23], increasing test coverage [44, 27, 34], and improving robust accuracy [28, 18]. Unlike our work, which checks the model in production, these approaches rely heavily on manually supplied ground-truth labels. Our focus is on non-adversarial inputs, which require different considerations; we plan to consider adversarial inputs in future work.
SelfChecker's performance depends on the difference between the training and deployment distributions. We conducted preliminary experiments that slightly perturbed the testing dataset with random noise so as to push the embeddings of the first fully-connected layer after all convolutional layers away from the training dataset; in this setup, SelfChecker performs similarly to the normal in-distribution case. There are also existing studies on detecting out-of-distribution data [24, 22, 14]. For example, recent work uses temperature scaling and an input preprocessing strategy to make the maximum class probability a more effective score for detecting out-of-distribution data. Such studies are complementary to SelfChecker: they could first check whether the input is out-of-distribution, after which SelfChecker checks the prediction. In addition, our problem is not subsumed by confidence calibration. As stated in the ConfidNet paper, confidence calibration helps to create confidence criteria, but ConfidNet's focus is failure prediction. Comparing SelfChecker against a technique with temperature scaling would be inappropriate: temperature scaling rescales confidence values without affecting the ranking of confidence scores across classes, and therefore cannot separate errors from correct predictions.
In the SE community, several studies consider checking a DL model's trustworthiness in deployment. SelfOracle, proposed by Stocco et al., estimates the confidence of self-driving car models; an alarm is triggered when the confidence of the model output falls below a pre-defined threshold, at which point a human is involved. It is designed for scenarios in which inputs are temporally ordered, such as video frames, and its performance on other DNN types is limited (see Section IV). Wang et al. propose Dissector to detect inputs that deviate from normal inputs. It trains several sub-models on top of the pre-trained DL model to validate samples fed into that model, but the generation of sub-models is manual and time-consuming, and Dissector does not provide an explicit design for the threshold that distinguishes inputs, which depends on the model and dataset. In the DL community, researchers have developed new learning-based models to measure confidence [20, 8, 33, 15, 6]. These models may themselves be untrustworthy and may suffer from, e.g., overfitting. In [33, 15], nearest-neighbor classifiers are built to measure model confidence; a clear drawback of both approaches is their lack of scalability, since computing nearest neighbors in large datasets and complex models is expensive. Corbière et al. propose a confidence model, ConfidNet, built on top of the pre-trained model to learn a confidence criterion based on the True Class Probability for failure prediction, which outperforms the nearest-neighbor approaches in both effectiveness and efficiency. But its performance is limited by overfitting, since it is trained on the training dataset, where there are few wrong predictions. Except for the nearest-neighbor approaches, which cannot scale to large datasets and models, none of the above papers provide alternative advice. In contrast, SelfChecker achieves both high alarm and advice accuracy (with sufficient validation data per class) using internal features extracted from the DNN.
VI Limitations and Conclusion
Limitations. SelfChecker relies on the assumption that the density functions and selected layers determined by the training module remain valid for checking model consistency in deployment; this holds only if the training and validation datasets are representative of test instances. SelfChecker is also a layer-based approach that requires white-box access, and it will have more limited power on shallow DNNs with few layers.
Conclusion. To be used in mission-critical contexts, DNN outputs must be closely monitored, since DNNs will inevitably make mistakes on certain inputs.
In this paper we hypothesized that features in internal layers of a DNN can be used to construct a self-checking system for DNN outputs. We presented the design of such a general-purpose system, called SelfChecker, and evaluated it on four popular publicly available datasets (MNIST, FMNIST, CIFAR-10, CIFAR-100) and three DNNs (ConvNet, VGG-16, ResNet-20). SelfChecker produces accurate alarms (average TPR of 60.56%), and SelfChecker-generated advice improves model accuracy on the 10-class datasets by 0.138% on average, within an acceptable deployment time (about 34.98 ms per input). Compared to alternative approaches, SelfChecker achieves the highest F1-score at 68.07%, which is 8.77% higher than the next best approach (ConfidNet). In the self-driving car scenarios, SelfChecker triggers more correct alarms than SelfOracle for both the DAVE-2 and Chauffeur models, with a comparable number of false alarms. SelfChecker is open source: https://github.com/self-checker/SelfChecker.
This work was supported in part by the National Research Foundation, Singapore and National University of Singapore through its National Satellite of Excellence in Trustworthy Software Systems (NSOE-TSS) office under the Trustworthy Software Systems – Core Technologies Grant (TSSCTG) award no. NSOE-TSS2019-05.
References
- (2016) End to end learning for self-driving cars. arXiv:1604.07316.
- (18 August) (Website).
- (2015) StaRVOOrS: a tool for combined static and runtime verification of Java. In Runtime Verification, pp. 297–305.
- (2012) Multi-column deep neural networks for image classification. pp. 3642–3649.
- (2012) Deep neural networks segment neuronal membranes in electron microscopy images. In Advances in Neural Information Processing Systems, pp. 2843–2851.
- (2019) Addressing failure prediction by learning model confidence. In Advances in Neural Information Processing Systems, pp. 2902–2913.
- (2011) Remarks on some nonparametric estimates of a density function. In Selected Works of Murray Rosenblatt, pp. 95–100.
- (2018) Learning confidence for out-of-distribution detection in neural networks. arXiv:1802.04865.
- (2014) Explaining and harnessing adversarial examples. arXiv:1412.6572.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (2013) Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature 500 (7461), pp. 168–174.
- (2016) A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv:1610.02136.
- (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Processing Magazine 29 (6), pp. 82–97.
- (2020) Generalized ODIN: detecting out-of-distribution image without learning from out-of-distribution data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10951–10960.
- (2018) To trust or not to trust a classifier. In Advances in Neural Information Processing Systems, pp. 5541–5552.
- (2016) Policy compression for aircraft collision avoidance systems. In Digital Avionics Systems Conference (DASC), pp. 1–10.
- (2019) Shallow-deep networks: understanding and mitigating network overthinking. In International Conference on Machine Learning, pp. 3301–3310.
- (2019) Guiding deep learning system testing using surprise adequacy. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), pp. 1039–1049.
- (2009) Learning multiple layers of features from tiny images.
- (2017) Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pp. 6402–6413.
- (2010) MNIST handwritten digit database.
- (2018) Training confidence-calibrated classifiers for detecting out-of-distribution samples. In International Conference on Learning Representations.
- Adversarial adaptive neighborhood with feature importance-aware convex interpolation. IEEE Transactions on Information Forensics and Security.
- (2018) Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations.
- (2019) UrbanFM: inferring fine-grained urban flows. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 3132–3142.
- (2017) Feedback-based debugging. In 2017 IEEE/ACM 39th International Conference on Software Engineering (ICSE), pp. 393–403.
- (2018) Combinatorial testing for deep learning systems. arXiv:1806.07723.
- (2018) MODE: automated neural network model debugging via state differential analysis and input selection. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 175–186.
- (2016) ModelPlex: verified runtime validation of verified cyber-physical system models. Formal Methods in System Design 49 (1-2), pp. 33–74.
- (2016) DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582.
- (2015) Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 427–436.
- (2019) Urban traffic prediction from spatio-temporal data using deep meta learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1720–1730.
- (2018) Deep k-nearest neighbors: towards confident, interpretable and robust deep learning. arXiv:1803.04765.
- (2017) DeepXplore: automated whitebox testing of deep learning systems. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 1–18.
- (2015) Multivariate density estimation: theory, practice, and visualization. John Wiley & Sons.
- (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626.
- (1948) A mathematical theory of communication. Bell System Technical Journal 27 (3), pp. 379–423.
- (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
- (2016) Unsupervised risk estimation using only conditional independence structure. In Advances in Neural Information Processing Systems, pp. 3657–3665.
- (2020) Misbehaviour prediction for autonomous driving systems. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, pp. 359–371.
- (2018) Testing deep neural networks. arXiv:1803.04792.
- (2014) Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112.
- (1992) Variable kernel density estimation. The Annals of Statistics, pp. 1236–1265.
- (2018) DeepTest: automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the International Conference on Software Engineering, pp. 303–314.
- (2019) Towards better confidence estimation for neural models. In ICASSP 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7335–7339.
- (2020) Dissector: input validation for deep learning applications by crossing-layer dissection. In 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE), pp. 727–738.
- (2018) Feature-guided black-box safety testing of deep neural networks. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 408–426.
- (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747.
- (2014) Droid-Sec: deep learning in Android malware detection. In Proceedings of the 2014 ACM Conference on SIGCOMM, pp. 371–372.
- (2015) Towards vision-based deep reinforcement learning for robotic motion control. In Proceedings of the Australasian Conference on Robotics and Automation 2015, pp. 1–8.
- (2018) DeepRoad: GAN-based metamorphic testing and input validation framework for autonomous driving systems. In Proceedings of the International Conference on Automated Software Engineering, pp. 132–142.
- (2020) Towards characterizing adversarial defects of deep learning software from the lens of uncertainty. In 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE), pp. 739–751.
- (1999) CRC standard probability and statistics tables and formulae. CRC Press.