Thanks to their excellent performance, Neural Networks (NN) are now used to tackle important problems such as medical diagnosis (Shen et al., 2017) or pedestrian detection for autonomous cars (Tian et al., 2015). However, regarding classification, one of the issues preventing widespread adoption of such solutions is their overconfidence in their predictions (Amodei et al., 2016). It has been observed that they can produce high confidence values for errors (Guo et al., 2017), for out-of-distribution examples such as tailored noise (Nguyen et al., 2015), and for adversarial examples (Szegedy et al., 2014).
As explained in (Hendrycks & Gimpel, 2017), the main reason for these high confidence values is the softmax layer that is usually used as the last layer of a classification NN. As the softmax is a smooth approximation of the indicator function, it is designed to output high maximum values, even for small differences in the logits, i.e. the final activation values of a classification NN before the softmax function is applied. We argue that an additional problem is that by normalizing the logits to obtain a probability distribution, we lose the information about their absolute values. In this work, we study whether this information can be used to compute a confidence value that allows the detection of errors, out-of-distribution data and adversarial examples.
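This loss of absolute-value information is easy to see numerically: softmax is invariant to adding a constant to all logits, so two predictions with very different overall activation levels can yield the exact same probabilities. A small numpy illustration:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; this also makes the
    # shift-invariance of softmax explicit.
    e = np.exp(z - z.max())
    return e / e.sum()

strong = np.array([10.0, 6.0, 2.0])  # large absolute logits
weak = strong - 8.0                  # same pairwise gaps, much smaller magnitudes

p_strong, p_weak = softmax(strong), softmax(weak)
# Both logit vectors produce identical softmax outputs: the absolute
# activation level is discarded by the normalization.
```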
Recently, several works have focused on detecting misclassifications. In (Hendrycks & Gimpel, 2017), the authors proposed a baseline method that uses a threshold on softmax probabilities. They also provided guidelines about how out-of-distribution data detection experiments should be conducted and evaluated. In (Jiang et al., 2018), the Trust Score, a specific 1-NN ratio applied on a filtered version of the training set, is presented in order to recognize misclassified examples. In (Liang et al., 2017), the authors introduced ODIN, a method that separates in- and out-of-distribution examples by preprocessing the input using adversarial perturbation and then thresholding the softmax scores computed after temperature scaling.
In this paper, we present Introspection-Net, a simple three-layer regression NN that takes the logits as input and aims at predicting a confidence value, i.e. whether the classification is correct (output value of 1) or not (output value of 0). We show that, by using adversarial training and data augmentation, we are able to detect misclassifications at a competitive level, outperforming the Trust Score approach (Jiang et al., 2018) and the Softmax Baseline presented in (Hendrycks & Gimpel, 2017). The main contribution of this paper is to show that the logits of an already-pretrained network provide relevant information to detect adversarial examples and other types of misclassifications.
2 Setup and Analysis
In this Section, we show through experiments on the MNIST dataset (LeCun et al., 1998) that there are significant differences in logit activations between correctly classified examples and several kinds of misclassifications.
2.1 Baseline NN Architecture and Training
Table 1: Baseline NN architecture (columns: layer type, patch size, stride, depth, padding, activation, output size).
For these experiments, we train a simple custom NN using the Keras framework (Chollet et al., 2015). The architecture is described in Table 1. The training was done for 30 epochs, using the RMSprop optimizer, a batch size of 64 and data augmentation (slight rotations, zooms and shifts). After training, this NN achieves 99.65% accuracy on the test set.
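The training setup can be sketched as follows. The convolutional stack shown here is a hypothetical stand-in (the actual layer configuration is the one given in Table 1), and the augmentation ranges are illustrative assumptions; the optimizer, batch size and epoch count follow the text:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical small convnet standing in for the Table 1 architecture.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(10),                 # logits, the quantity studied in this Section
    layers.Activation("softmax"),
])
model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Data augmentation as described: slight rotations, zooms and shifts
# (the ranges below are illustrative, not the paper's exact values).
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10, zoom_range=0.1,
    width_shift_range=0.1, height_shift_range=0.1)
# model.fit(augmenter.flow(x_train, y_train, batch_size=64), epochs=30)
```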
2.2 Generating misclassified examples
In order to study how logits are distributed for different kinds of misclassifications (i.e. errors, out-of-distribution and adversarial examples) compared to original MNIST images, we use or generate the following datasets:
For out-of-distribution examples, we use Gaussian noise, uniform noise, CIFAR-10 images (Krizhevsky et al., 2014) and Fashion-MNIST images (Xiao et al., 2017).
For errors, as the performance of state-of-the-art NNs is near-perfect on MNIST, we use the idea presented in (DeVries & Taylor, 2018): we generate misclassified images by adding 7x7 black patches to test set images, keeping only the ones that are no longer classified correctly by the NN.
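The occlusion procedure can be sketched as follows (the random patch placement and the `add_black_patch` helper are our own illustrative choices, not necessarily the paper's exact policy):

```python
import numpy as np

def add_black_patch(img, top, left, size=7):
    """Return a copy of an (H, W) image with a size x size black patch."""
    out = img.copy()
    out[top:top + size, left:left + size] = 0.0
    return out

rng = np.random.default_rng(0)
img = rng.random((28, 28))            # stand-in for a normalized MNIST image
top = int(rng.integers(0, 28 - 7))
left = int(rng.integers(0, 28 - 7))
occluded = add_black_patch(img, top, left)

# In the paper's procedure, only occluded images that the classifier
# now gets wrong are kept as the "Errors" dataset, e.g.:
# errors = [x for x in occluded_images if predict(x) != true_label]
```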
For adversarial examples, we use the CleverHans framework (Papernot et al., 2018). We generate 3 adversarial datasets using different methods: FGSM (Goodfellow et al., 2015), BIM (Chen & Jordan, 2019) and DeepFool (Moosavi-Dezfooli et al., 2016). We only keep the images that are misclassified by the NN.
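As a concrete illustration of the simplest of these attacks, FGSM perturbs the input in the direction of the sign of the loss gradient, x' = x + eps * sign(grad_x L). The sketch below applies it to a toy logistic model whose input gradient is analytic; the paper itself generates attacks with CleverHans against the full NN:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_logistic(x, y, w, b, eps):
    """One FGSM step x' = x + eps * sign(grad_x L) for a logistic model.

    With L = -y log p - (1 - y) log(1 - p) and p = sigmoid(w @ x + b),
    the input gradient is (p - y) * w, so no autodiff is needed here.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A toy model and a correctly classified point (w @ x + b = 1.5 > 0).
w, b = np.array([2.0, -1.0]), 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_logistic(x, y=1.0, w=w, b=b, eps=1.0)
# The perturbed point scores w @ x_adv + b = -1.5 and is now misclassified.
```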
2.3 Analysis of the difference in logit distributions
Using the NN described previously, we study whether logits are distributed differently across the datasets we just presented. Figure 1 displays the distribution of the average logit values for each of them. We can clearly see that logit values are higher for correctly classified MNIST images than for any other dataset. These differences are statistically significant according to Student's t-test (between MNIST and any other dataset). Intuitively, it makes sense that out-of-distribution examples are associated with lower logit values: it is likely that the preceding convolutional layers did not detect the pixel/feature patterns they were trained on, resulting in weakly activated feature maps, which in turn leads to lower logit values. It is however interesting to note that the adversarial datasets are also associated with lower logit values. One explanation might be that, in order to maximize the softmax value for a target class, it is easier to decrease the logit values of the other classes than to increase the logit value of the attack's target class. This intuition is supported by the fact that the distribution of the maximum logit values is significantly higher for MNIST () than for BIM (), FGSM () and DeepFool (). The same observation can be made about the minimum values, showing that adversarial example generation techniques tend to create images that are associated with overall lower logit values.
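The significance test above can be reproduced on per-example logit statistics; a minimal sketch using the Welch variant of the t-test, on synthetic stand-in values (the actual per-dataset logit values are those summarized in Figure 1):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(0)
# Illustrative stand-ins for per-example average logits (values invented):
mnist_avg_logits = rng.normal(8.0, 1.5, size=1000)  # correctly classified MNIST
ood_avg_logits = rng.normal(4.0, 1.5, size=1000)    # e.g. Gaussian noise inputs

t = welch_t(mnist_avg_logits, ood_avg_logits)
# A large |t|, far beyond any usual critical value, indicates the
# difference in mean logit values is statistically significant.
```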
2.4 Discrimination power of logit vs. softmax values
Figure 2 presents a scatterplot of the minimum and maximum logit and softmax values for the MNIST, Gaussian, CIFAR-10, FGSM and Errors datasets. Qualitatively, these two simple statistics on the logits are enough to separate MNIST examples fairly well from the remaining ones. We can see that correctly classified images tend to have higher minimum and maximum logit values, which correlates well with the distributions shown in Figure 1. However, this discriminating information is lost once the softmax function is applied. These observations confirm that logits, unlike softmax values, provide relevant information for misclassification detection.
3 Confidence Prediction
3.1 Proposed solution: Introspection-Net
|Experiment|OOD dataset|FPR (95% TPR)|AUROC|AUPR In|AUPR Out|
|---|---|---|---|---|---|
|Custom network MNIST|ERRORS|35.4 / 39.5 / 35.8|86.3 / 86.5 / 87.8|84.0 / 82.6 / 82.9|89.6 / 88.9 / 90.5|

(Each cell: Softmax Baseline / Trust Score / Proposed solution.)
Based on the insights from the previous section, we train a simple three-layer regression NN which we call Introspection-Net, since it takes an intermediate layer, the logits, as input. It aims at predicting the confidence value associated with a given prediction, i.e. whether it is correct (value of 1) or incorrect (value of 0). Introspection-Net is composed of three dense layers with 128 neurons and ReLU activations. The first two layers are followed by dropout layers with a dropout rate of 20%, and the second dropout layer is also followed by a batch normalization layer. We train the network for 60 epochs using RMSprop to optimize the mean squared error loss.
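A sketch of this architecture in Keras (the output activation is not specified in the text; a sigmoid is assumed here so that the regressed confidence stays in [0, 1]):

```python
from tensorflow.keras import layers, models

num_classes = 10  # logits dimension of the classifier being inspected

introspection_net = models.Sequential([
    layers.Input(shape=(num_classes,)),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),
    layers.BatchNormalization(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # assumed output activation
])
introspection_net.compile(optimizer="rmsprop", loss="mse")
# Inputs are logits, targets are 1 (correct) / 0 (incorrect), e.g.:
# introspection_net.fit(train_logits, train_correctness, epochs=60)
```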
3.2 Experimental setup
Experiments: To evaluate our proposal, we run the following two experiments. In the first, we predict the confidence values associated with the predictions made by the NN described in the previous Section. In the second, we predict the confidence values associated with the predictions of a Wide Residual Network (Zagoruyko & Komodakis, 2016), with depth 28 and width 8, trained on CIFAR-10; we use the implementation provided by Keras contrib (Chollet et al., 2015). We run this second experiment to ensure that our approach generalizes to other datasets and NN architectures. For both experiments, we compare 3 methods: the Softmax Baseline introduced in (Hendrycks & Gimpel, 2017), the Trust Score (Jiang et al., 2018) and our method.
Metrics: We use the Area Under the Receiver Operating Characteristic curve (AUROC) and the Areas Under the Precision-Recall curve (AUPR) for both the correct (AUPR In) and incorrect (AUPR Out) classes. In addition, we also compute the False Positive Rate at 95% True Positive Rate (FPR at 95% TPR), as in (Liang et al., 2017).
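These detection metrics are usually computed with standard libraries; a self-contained numpy sketch of AUROC (rank-sum formulation, ties ignored for brevity) and of FPR at 95% TPR:

```python
import numpy as np

def auroc(pos, neg):
    """AUROC via the rank-sum (Mann-Whitney U) formulation."""
    scores = np.concatenate([pos, neg])
    ranks = scores.argsort().argsort() + 1  # ranks 1..N, ties broken arbitrarily
    n_pos, n_neg = len(pos), len(neg)
    return (ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fpr_at_95_tpr(pos, neg):
    """FPR when the threshold accepts 95% of the positive (correct) examples."""
    thresh = np.percentile(pos, 5)  # 95% of positives score above this
    return float(np.mean(neg >= thresh))

# Toy confidence scores: correct predictions vs. misclassifications.
pos = np.array([0.9, 0.8, 0.7, 0.6])
neg = np.array([0.4, 0.3, 0.2, 0.5])
# Perfectly separated scores give AUROC = 1.0 and FPR = 0.0.
```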
Data: For both experiments, the training set is composed of 5000 examples belonging to the in-distribution dataset (i.e. MNIST for experiment 1, and CIFAR-10 for experiment 2), and of 700 examples of all the remaining datasets presented in Section 2.2 (adversarial examples and errors are generated using CIFAR-10 images for experiment 2). These numbers were chosen in order to have a balanced dataset. From this data, we use 10% as a validation set. The testing set is composed of 2000 images of each dataset.
The experimental results are shown in Table 2. We can see that our method provides overall significantly better confidence values than the other methods, regardless of the in-distribution data and of the chosen NN architecture. It is however important to keep in mind that the Trust Score has the advantage of being model-agnostic, unlike our solution.
Regarding the error detection task, all 3 methods obtain fairly similar results. It is the most challenging task for our method, which achieves its worst performance here in both experiments.
Our method performs extremely well on the out-of-distribution data detection task, as we obtain close to perfect scores for all datasets (i.e. Gaussian, Uniform, CIFAR-10/MNIST and Fashion) in both experiments. These results are a significant improvement over the two other methods. For instance, in the first experiment we achieve a 0.0% FPR on the Gaussian dataset while both the Trust Score and the Softmax Baseline obtain a 100.0% FPR.
Our method also achieves overall better results for adversarial example detection. This is especially striking for the BIM dataset, where we achieve a 97.6 AUROC in the second experiment, compared to 14.4 for the Softmax Baseline and 56.1 for the Trust Score. However, the Softmax Baseline provides the best results for detecting DeepFool adversarial examples on CIFAR-10. This is due to the fact that softmax values are highly discriminative for these examples, as the average maximum softmax probability is only for DeepFool and for correctly classified MNIST images.
4 Conclusion and Perspectives
We have shown through a series of experiments that the logits of already-pretrained neural networks, unlike their softmax probabilities, provide relevant information to detect 3 types of misclassifications: errors, out-of-distribution data and adversarial examples. We have proposed Introspection-Net, a neural network trained on logit activations to predict whether a prediction is correct or not. This solution outperformed the Softmax Baseline and the Trust Score by a large margin on confidence prediction, without requiring the original NN to be retrained.
Our findings highlight the interest of introspection, i.e. using the internal representations learned by a NN to detect misclassifications. These results are especially interesting in the case of adversarial example detection, since they show that, although the softmax values are "fooled" by the adversarial noise, internal representations such as the logits are not, even without any additional training procedure.
On the other hand, Introspection-Net does require adversarial training to learn the logit distributions of the different types of misclassifications, which is one of its main drawbacks. Consequently, future work involves studying whether these misclassifications can be detected using only the logits of in-distribution examples. We are also interested in exploring whether this method can be applied to other domains such as natural language processing.
- Amodei et al. (2016) Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016.
- Chen & Jordan (2019) Chen, J. and Jordan, M. I. Boundary attack++: Query-efficient decision-based adversarial attack. arXiv preprint arXiv:1904.02144, 2019.
- Chollet et al. (2015) Chollet, F. et al. Keras. https://github.com/fchollet/keras, 2015.
- DeVries & Taylor (2018) DeVries, T. and Taylor, G. W. Learning confidence for out-of-distribution detection in neural networks. arXiv preprint arXiv:1802.04865, 2018.
- Goodfellow et al. (2015) Goodfellow, I., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6572.
- Guo et al. (2017) Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1321–1330. JMLR. org, 2017.
- Hendrycks & Gimpel (2017) Hendrycks, D. and Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, 2017.
- Jiang et al. (2018) Jiang, H., Kim, B., Guan, M., and Gupta, M. To trust or not to trust a classifier. In Advances in Neural Information Processing Systems, pp. 5541–5552, 2018.
- Krizhevsky et al. (2014) Krizhevsky, A., Nair, V., and Hinton, G. The cifar-10 dataset. online: http://www.cs.toronto.edu/kriz/cifar.html, 55, 2014.
- LeCun et al. (1998) LeCun, Y., Cortes, C., and Burges, C. The MNIST dataset of handwritten digits. 1998.
- Liang et al. (2017) Liang, S., Li, Y., and Srikant, R. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, 2017.
- Moosavi-Dezfooli et al. (2016) Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2574–2582, 2016.
- Nguyen et al. (2015) Nguyen, A., Yosinski, J., and Clune, J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 427–436, 2015.
- Papernot et al. (2018) Papernot, N., Faghri, F., Carlini, N., Goodfellow, I., Feinman, R., Kurakin, A., Xie, C., Sharma, Y., Brown, T., Roy, A., Matyasko, A., Behzadan, V., Hambardzumyan, K., Zhang, Z., Juang, Y.-L., Li, Z., Sheatsley, R., Garg, A., Uesato, J., Gierke, W., Dong, Y., Berthelot, D., Hendricks, P., Rauber, J., and Long, R. Technical report on the cleverhans v2.1.0 adversarial examples library. arXiv preprint arXiv:1610.00768, 2018.
- Shen et al. (2017) Shen, D., Wu, G., and Suk, H.-I. Deep learning in medical image analysis. Annual review of biomedical engineering, 19:221–248, 2017.
- Szegedy et al. (2014) Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014. URL http://arxiv.org/abs/1312.6199.
- Tian et al. (2015) Tian, Y., Luo, P., Wang, X., and Tang, X. Deep learning strong parts for pedestrian detection. In Proceedings of the IEEE international conference on computer vision, pp. 1904–1912, 2015.
- Xiao et al. (2017) Xiao, H., Rasul, K., and Vollgraf, R. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
- Zagoruyko & Komodakis (2016) Zagoruyko, S. and Komodakis, N. Wide residual networks. In Richard C. Wilson, E. R. H. and Smith, W. A. P. (eds.), Proceedings of the British Machine Vision Conference (BMVC), pp. 87.1–87.12. BMVA Press, September 2016. ISBN 1-901725-59-6. doi: 10.5244/C.30.87. URL https://dx.doi.org/10.5244/C.30.87.