Understanding the decision process of a deep neural network for classification can be challenging due to the very large number of parameters and the model's tendency to represent information internally in a distributed manner. Distributed representations have significant advantages for the model's capability to generalize well; the trade-off, however, is the difficulty of communicating the model's reasoning. In other words, it is difficult to represent what information was used by the model and how it arrived at a particular output. In certain applications, such as healthcare, understanding the decision process of a model can be a vital requirement.
One direction towards understanding how Convolutional Neural Networks (CNNs) process information internally is through visualization. The work of Zeiler et al. , Mahendran et al. , Zhou et al. , and others has shown that the inner workings of a CNN can be projected back to the image space in a way that is comprehensible to a human expert. We build on the work of  and present an approach to understanding the decision making process of these networks by visualizing the information used in that process as attentive response maps. Our approach is based on a fractionally strided convolutional technique , which we apply to the problem of anatomy classification from X-ray images.
However, rather than examining the model over the whole dataset and trying to understand the sensitivity of the model to the data, we examine the model's response to individual data points. We also find that existing methods, which present various saliency maps of the sensitivity of the model's output, still do not provide a representation clear enough to communicate with experts. We base our approach on visualizing attentive response maps formed from the maximally activated feature maps of the last convolutional layer, overlaid on the input image. This depiction provides an informative and effective way to communicate which information from the image is used in the decision process of the model. Furthermore, we compare this information to medically relevant landmarks in the images, such as the anatomical features an expert would use to identify an organ. In this comparison, we find that shallow models that do not have sufficient capacity fail to use relevant landmarks. Additionally, we find that even deep models that generally perform well on test data do not necessarily use accurate landmarks. Finally, we show that adjusting training and augmentation hyperparameters based on the insight gained from visualizing attentive response maps leads to models that use medically relevant landmarks while attaining superior performance on test data, giving an indication of robustness in terms of generalization.
In order to understand the decision making process of deep CNNs, and to construct an informed approach to designing models, we build three deep CNN models with different architectures and hyperparameters: a shallow CNN, a deeper CNN without data augmentation, and a deeper CNN with data augmentation inspired by the work of Razavian et al. . The network architecture for each model is depicted in Fig. 2. After successfully training the above-mentioned networks, we examine which part of a particular input image from an anatomy class, in particular the spatially distributed information, is used in the decision process of the CNN. This is done by visualizing attentive response maps from the most activated units of the last convolutional layer in the models described above, similarly to Bau et al. . The top units are used to visualize the parts of the input image that the network considers important. Attentive response maps are formed by projecting the top unit activations back to image space. The back-projection to input space is achieved using the fractionally strided convolution, also known as the transposed convolution, and sometimes incorrectly termed the deconvolution technique , as shown in Fig. 1. To explain the formulation of attentive response maps, let us consider a multi-layered neural network with $L$ layers.
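As a concrete sketch of the unit-selection step described above, the following framework-agnostic snippet ranks the units of a final convolutional layer by their mean activation and keeps the top ones. The function name and the mean-activation ranking criterion are illustrative assumptions, not details taken from the original implementation:

```python
import numpy as np

def top_activated_units(feature_maps: np.ndarray, n_top: int) -> np.ndarray:
    """Return indices of the n_top most activated units.

    feature_maps: array of shape (K, H, W) -- the K feature maps produced
    by the last convolutional layer for one input image. Units are ranked
    here by the mean of their activation map (an illustrative choice).
    """
    scores = feature_maps.reshape(feature_maps.shape[0], -1).mean(axis=1)
    # argsort is ascending; take the last n_top indices and reverse them
    # so the most activated unit comes first.
    return np.argsort(scores)[-n_top:][::-1]

# Toy example: 4 units, where unit 2 is the most active and unit 0 second.
fm = np.zeros((4, 8, 8))
fm[2] = 3.0
fm[0] = 1.0
print(top_activated_units(fm, 2))  # -> [2 0]
```

In a real network the feature maps would come from a forward pass through the trained model; only the ranking step is shown here.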
Recently, Kumar et al.  proposed the CLEAR method, which uses class-based maps. Even though CLEAR is quite effective in explaining CNN predictions, it cannot be applied in scenarios with a large number of classes. Since our method bears similarity to the CLEAR approach, we use the same notation as explained in  for better understanding and to highlight the important differences.
To explain the formulation of the attentive response maps, first consider a single layer $l$ of a CNN. Let $\hat{R}_l$ be the deconvolved output response of the single layer with unit weights $w_{l,k}$. The deconvolved output response at layer $l$ can then be obtained by convolving each of the feature maps $f_{l,k}$ with the unit weights and summing them as:

$$\hat{R}_l = \sum_{k=1}^{K} f_{l,k} * w_{l,k}.$$

Here $*$ represents the convolution operation and $K$ is the number of units in the layer. For notational brevity, we can combine the convolution and summation operations for layer $l$ into a single convolution matrix $G_l$. Hence the above equation can be written as $\hat{R}_l = G_l F_l$, where $F_l$ stacks the feature maps of layer $l$.
For multi-layered CNNs, we can extend the above formulation by adding an additional un-pooling operation $U_l$, as described in . Thus, we can calculate the deconvolved output response from feature space back to input space for any layer $l$ in a multi-layer network as:

$$\hat{R}_l = G_1 U_1 \, G_2 U_2 \cdots G_{l-1} U_{l-1} \, G_l F_l.$$
For attentive response maps, we specifically calculate the output responses from individual units of the last convolutional layer of a network. Hence, given a network whose last convolutional layer $L$ contains $N$ top activated units, we can calculate the attentive response map $R_n(x)$ (where the response is back-projected to the input layer, and is thus an array of the same size as the input) for any unit $n \in \{1, \ldots, N\}$ in the last convolutional layer as:

$$R_n(x) = G_1 U_1 \, G_2 U_2 \cdots G_{L-1} U_{L-1} \, G_L^{(n)} F_L.$$

Here $G_L^{(n)}$ represents the convolution matrix operation in which the unit weights are all zero except those at the $n$th location.
Given the set of individual attentive response maps, we then compute the dominant attentive response map, $\hat{R}_D(x)$, by finding the value at each pixel $x$ that maximizes the attentive response level across all top $N$ units:

$$\hat{R}_D(x) = \max_{n \in \{1, \ldots, N\}} R_n(x).$$
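The pixel-wise maximization above can be sketched in a few lines, assuming the per-unit attentive response maps have already been back-projected to input size:

```python
import numpy as np

def dominant_attentive_map(unit_maps: np.ndarray) -> np.ndarray:
    """Combine per-unit attentive response maps into the dominant map.

    unit_maps: array of shape (N, H, W) holding the back-projected
    response of each of the N top units. The dominant map keeps, at
    every pixel, the largest response level across all N units.
    """
    return unit_maps.max(axis=0)

# Two 2x2 unit maps; the dominant map takes the pixel-wise maximum.
maps = np.array([[[0.1, 0.9],
                  [0.4, 0.0]],
                 [[0.5, 0.2],
                  [0.3, 0.7]]])
print(dominant_attentive_map(maps))  # pixel-wise max across the two maps
```

Because the maximization is independent per pixel, the result highlights whichever unit responds most strongly at each location.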
Next, we examine the correlation of the regions obtained through visualizing the dominant attentive response maps with the regions and shapes of image landmarks described in the medical radiology literature. With this qualitative assessment, we can establish whether the same landmarks described in the medical imaging literature are also used by the CNN. For example, we observe that particular outlines of bones, rather than background information, are used to detect the organ in the image. We use this to guide decisions about the model architecture and learning algorithm. We can furthermore use this method to detect biases in the models. In certain examples of misclassification (Fig. 6), we can observe that the information used for making decisions is part of an artifact rather than the object in the image. This understanding can inform possible adjustments to the pre-processing or data augmentation procedures needed to remove the bias from the model.
3 Experiments and Results
To visualize and understand the decision making of a deep neural network, we used anatomy classification from X-ray images as an example use-case. To train our three different convolutional neural networks, radiographs from the ImageClef 2009 Medical Image Annotation task (http://www.imageclef.org/2009/medanno) were used. This dataset consists of a wide range of X-ray images from clinical routine, along with a detailed anatomical classification. For uniform training without any bias, we removed the hierarchical class representation and discarded the classes consisting of fewer than 50 examples. This left us with 24 unique classes, e.g. foot, knee, hand, cranium, thoracic spine, covering the full body anatomy.
For training the three networks described in Section 2, we resized the images to 224 × 224. For evaluation, we divided the ImageClef dataset (14,676 images) into randomly selected training and test sets containing 90% and 10% of the data, respectively. For the third (deeper) network specifically, we used various data augmentation techniques, including cropping, rotation, translation, shearing, stretching, and flipping. We trained each of the three networks on all 24 classes simultaneously. The results obtained by training the three models are shown in Table 1.
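The exact augmentation parameters are not given in the text; a minimal sketch of the kind of geometric augmentations mentioned (flipping, translation, cropping), written in plain NumPy with illustrative parameter values, might look like the following. Rotation, shearing, and stretching are omitted for brevity, as they would typically be handled by an image library such as SciPy or PIL:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Apply random geometric augmentations to a 2-D X-ray image and
    crop the result to a fixed 224x224 target (illustrative values)."""
    out = img
    if rng.random() < 0.5:                  # random horizontal flip
        out = out[:, ::-1]
    dy, dx = rng.integers(-8, 9, size=2)    # small random translation
    out = np.roll(out, (dy, dx), axis=(0, 1))
    h, w = out.shape                        # random crop to 224x224
    top = rng.integers(0, max(h - 224, 0) + 1)
    left = rng.integers(0, max(w - 224, 0) + 1)
    return out[top:top + 224, left:left + 224]

sample = augment(np.zeros((256, 256)))
print(sample.shape)  # (224, 224)
```

In practice such augmentations are applied on the fly during training, so each epoch sees a slightly different version of every image.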
[Table 1: test performance of the three models: Shallow Net, Deeper Net, and Deeper Net + data augmentation.]
We visualized the internal activations of the models on test data through attentive response maps. More specifically, we combined the attentive response maps of the top $N$ units from the last convolutional layer and overlaid them on the original image. In this way we constructed focused attentive response maps that can easily be examined by a human expert. The value of $N$ was chosen empirically, as the smallest number of units that produced attentive response maps close to the anatomical landmarks. The results are shown in Figs. 3, 4, and 5 for the foot and hand classes from the ImageClef dataset.
In Fig. 3 we show a correspondence between the obtained attentive response maps and the anatomical landmarks from the medical literature (http://www.meddean.luc.edu/lumen/meded/radio/curriculum/bones/Strcture_Bone_teach_f.htm). In particular, for the foot image we can observe that the edges of the metatarsal shafts have been used, together with the distal phalanges, navicular, cuboid, tibia, and fibula. Similarly for the hand: three of the distal phalanges, many of the joint heads, the metacarpal shafts, as well as certain carpals. In contrast, in Fig. 4 and Fig. 5 we can observe that the shallow network and the deep network trained without specific data augmentation fail to learn such specific landmarks. These models use broader regions that are clearly not as specific as the information used by the first model. From the above visual results (we obtained similar results for the other classes as well, but due to space constraints only the results of two classes are shown), as well as the performance of the final model, we conclude that sufficiently deep neural network models can be successfully trained to use the same medical landmarks as a human expert while attaining superior performance.
We propose an approach that allows for evaluating the decision making process of CNNs. We show that the design of deep CNN architectures and training procedures does not need to be a trial-and-error process focused solely on optimizing test set accuracy. Through attentive response map visualization, we incorporated domain knowledge and achieved a much more informed design process, which resulted in a model with superior performance. This approach is applicable to many image analysis applications of deep learning that are otherwise unable to easily leverage the potentially large amount of available domain knowledge. Furthermore, visually understanding the information involved in the model's decision allows for more confidence in its performance on unseen data.
- N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
- M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in European Conference on Computer Vision. Springer, 2014, pp. 818–833.
- A. Mahendran and A. Vedaldi, "Understanding deep image representations by inverting them," in IEEE CVPR, 2015, pp. 5188–5196.
- B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning deep features for discriminative localization," in CVPR, 2016.
- V. Menkovski, Z. Aleksovski, A. Saalbach, and H. Nickisch, "Can pretrained neural networks detect anatomy?," arXiv preprint arXiv:1512.05986, 2015.
- A. S. Razavian, J. Sullivan, S. Carlsson, and A. Maki, "Visual instance retrieval with deep convolutional networks," ITE Transactions on Media Technology and Applications, vol. 4, no. 3, pp. 251–258, 2016.
- D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba, "Network dissection: Quantifying interpretability of deep visual representations," in CVPR, 2017.
- D. Kumar, A. Wong, and G. W. Taylor, "Explaining the unexplained: A class-enhanced attentive response (CLEAR) approach to understanding deep neural networks," in IEEE CVPR-Workshop, 2017.