In the field of image classification, deep learning networks have surpassed the top-5 human error rate on the ImageNet dataset [18]. In this dataset, neural networks learn to differentiate between classes of natural images. The success of deep learning networks on natural images has fostered their usage on other visual data, including the biomedical [24] and seismic [1, 20] fields. While the number of learnable classes in these fields is generally limited, neural networks have the additional task of helping domain-specific experts interpret the explanations behind their decisions, thereby promoting trust in the network. For instance, in the field of biomedical imaging, a medical practitioner diagnoses whether a patient is COVID positive or negative based on CT scans [25]. The authors in [4] use transfer learning approaches on CT scans to perform the detection and provide explanatory results using Grad-CAM to justify their network's efficacy in detecting COVID-19. Grad-CAM highlights features in the image that lead to the network's decision. In this paper, we analyze a neural network's causal capability using existing explanatory methods by providing a technique to extract causal features from such explanatory methods.
Probabilistic causation [5] assumes a causal relationship between two events X and Y if the occurrence of event X increases the probability of occurrence of event Y. In image classification networks, Y refers to decisions made by neural networks based on features X. A popular method for ascertaining probabilistic causality is through interventions in data [15]. In these models, the set of causal features is varied by intervening in the generation process of X to ascertain the change in the observed decision Y. Such interventions can, however, be long, complex, unethical, or impossible [22], as in COVID CT scans. Hence, we forego interventionist causality and rely on observed causality to derive causation. Observed causality relies on passive observation to determine statistical causality. The authors in [13] propose that non-interventionist observation provides two sets of features - causal and context - that lead to the decision. In other words, a decision is made based on both causal and context features in an image. Hence, existing explanatory methods, including [19, 2, 21], highlight both sets of features. However, they do not provide a methodology to extract either set separately. In this paper, we utilize contrastive features from [17] to approximate the context features. We then propose a set-theoretic approach to abstract causal features out of Grad-CAM's features. The contributions of this paper include:
- Formulating a set-theoretic interpretation of causal and context features for observed causality in visual data.
- Expressing context features as contrastive features.
- Providing an evaluation setup that tests for causality in a limited label scenario.
2 Background and Related Works
Causal and Context features: The authors in [13] define causal features as visual features that exist within the physical body of the object in an image, and context features as visual features that surround the object in the image. In this paper, we forego these location-based definitions in favor of a feature's membership towards predicting a class P. We define causal features as those features whose presence increases the likelihood of the decision P in any CT scan. Conversely, the absence of causal features decreases the probability of the decision P. These two definitions of causal features are derived from the Common Cause Principles [6] and are used in [16] to evaluate causality. We follow a slightly altered methodology to showcase the causal effectiveness of our method. We define context features contrastively, as features that allow differentiating between the predicted class P and a contrast class Q, without necessarily causing P.
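As a toy illustration of this probability-raising definition, the following sketch checks both conditions on a hypothetical contingency table; the counts are illustrative stand-ins, not taken from the CT dataset.

```python
from fractions import Fraction

# Hypothetical counts of scans, keyed by (causal feature present?, decision P made?)
counts = {(True, True): 40, (True, False): 10,
          (False, True): 15, (False, False): 35}

total = sum(counts.values())
p_P = Fraction(counts[(True, True)] + counts[(False, True)], total)

present = counts[(True, True)] + counts[(True, False)]
absent = counts[(False, True)] + counts[(False, False)]
p_P_given_c = Fraction(counts[(True, True)], present)        # P(P | feature present)
p_P_given_not_c = Fraction(counts[(False, True)], absent)    # P(P | feature absent)

# Probability-raising definition of a causal feature:
print(p_P_given_c > p_P)       # presence raises the likelihood of P -> True
print(p_P_given_not_c < p_P)   # absence lowers it -> True
```

Both conditions together correspond to the two definitions derived from the Common Cause Principles above.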
Context and Contrast features: In the field of human visual saliency, the authors in [14] argue for the existence of contextual features of a class that are represented by their relationship with features of other classes. In [23], the implicit saliency of a neural network is extracted as an expectancy mismatch between the predicted class and all learned classes, thereby empirically validating the existence of contrastive information within neural networks. The authors in [17] extract this information and visualize it as explanations. In this paper, we represent the context features as contrast features.
Grad-CAM and Contrastive Explanations: Consider a trained binary classifier f(). Given an input image x, y = f(x) are the logit outputs of dimension 2. The predicted class P of image x is the index of the maximum element in y, i.e. P = argmax(y). Grad-CAM localizes all features in x that lead to the decision P by backpropagating the logit y_P to the last convolutional layer l. The per-channel gradients in layer l are summed up to obtain an importance score for each channel, which is multiplied with the activations in its respective channel. The importance-score-weighted activation maps are averaged to obtain the Grad-CAM mask M_P for class P. The authors in [17] modified the Grad-CAM framework to backpropagate a loss function J(P, Q) between the predicted class P and a contrast class Q. With the other steps remaining the same, a contrast-importance-score-weighted contrast mask is obtained for predicted and contrast classes P and Q. Note that gradients are used as features in multiple works, including [10, 11, 12].
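The Grad-CAM channel weighting described above can be sketched with the array arithmetic alone. In this sketch the activations and gradients are random stand-ins for a real backward pass, and `grad_cam_mask` is a hypothetical helper name, not part of any library.

```python
import numpy as np

def grad_cam_mask(activations, gradients):
    """Grad-CAM weighting: average the backpropagated gradients per channel
    to get an importance score, weight the activations, combine, and ReLU."""
    # activations, gradients: arrays of shape (K, H, W) from the last conv layer
    alphas = gradients.mean(axis=(1, 2))            # importance score per channel
    weighted = alphas[:, None, None] * activations  # score-weighted activation maps
    return np.maximum(weighted.sum(axis=0), 0.0)    # ReLU keeps positive evidence

# toy stand-ins for conv activations and backpropagated logit gradients
rng = np.random.default_rng(0)
A = rng.random((4, 7, 7))
G = rng.standard_normal((4, 7, 7))
M = grad_cam_mask(A, G)
print(M.shape)  # (7, 7)
```

The contrastive variant only changes the scalar that is backpropagated (a loss J(P, Q) instead of the logit y_P); the weighting step stays identical.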
3 Proposed Method
We first motivate our method based on set theory before describing the process of extracting causal features.
3.1 Set-theoretic motivation
Consider the setting described in Section 2, where a binary classification network is trained on COVID-19 CT scans [25]. Once trained, for any given scan x from the dataset, Grad-CAM provides visual features that combine both causal and context features. Hence, Grad-CAM provides a mask M_P for the prediction P on a given scan x. If the network classifies x correctly with 100% confidence, then it has resolved the causal features C_P and context features X_P independently such that M_P = C_P. However, this rarely occurs in practice and we assume M_P = C_P ∪ X_P. Hence, our goal is to extract the relative complement C_P = M_P \ X_P given M_P. This is illustrated in Fig. 1. Based on a visual inspection of the Venn diagram, we can rewrite C_P as the portion of M_P that remains once the context features are removed.
Note that we do not have access to either C_P or X_P; we are only provided with M_P. In this paper, we estimate the context features X_P using contrastive features from [17]. Specifically, continuing the notations from Section 2, we represent X_P as a union of contrastive maps.
A Venn diagram visualization is presented in Fig. 1. We qualitatively explain all the contrastive terms.
3.2 Contrastive features
M_{P∨Q}: highlights features that answer ‘Why P or Q?’. This term contrastively leads to either of the decisions P or Q. In the binary setting, we approximate this to be the set of all possible features T. Borrowing notations from Section 2, M_{P∨Q} is obtained by backpropagating a contrastive loss to obtain a contrast-importance score.
M_{¬(P∨Q)}: highlights features that answer ‘Why neither P nor Q?’. The features in this term do not increase the probability of either P or Q. M_{¬(P∨Q)} is obtained by backpropagating a contrastive loss to obtain a contrast-importance score.
M_{¬P}: highlights features that answer ‘Why not P with 100% confidence?’. Hence, it highlights all unresolved causal features. M_{¬P} is obtained by backpropagating a contrastive loss to obtain a contrast-importance score.
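Treating each map as a pixel set, the extraction of causal features reduces to a relative complement. The sketch below assumes a simple membership threshold `tau` to turn each continuous map into a pixel set, which the text does not specify; `causal_features` and the toy 2x2 maps are illustrative.

```python
import numpy as np

def causal_features(grad_cam, contrast_maps, tau=0.5):
    """Sketch of the relative complement: pixels in the Grad-CAM map M_P
    but not in the context X_P, with X_P approximated by the union of
    the contrastive maps. tau is an assumed membership threshold."""
    member = lambda m: m >= tau * m.max()
    context = np.zeros(grad_cam.shape, dtype=bool)
    for c in contrast_maps:
        context |= member(c)            # union of contrastive pixel sets
    return member(grad_cam) & ~context  # relative complement

# toy 2x2 maps: top row is salient for P, top-left pixel for the contrast term
m_p = np.array([[1.0, 1.0], [0.0, 0.0]])
m_c = np.array([[1.0, 0.0], [0.0, 0.0]])
print(causal_features(m_p, [m_c]))  # only the top-right pixel survives as causal
```

In the method itself the maps are combined through their importance scores rather than hard thresholds, but the set relationship is the same.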
3.3 Causal feature extraction
The causal map in Eq. 4 combines the importance scores from Grad-CAM with the normalized importance scores from the contrast maps. The overall negative sign occurs because the contrast scores are gradients whose directions are opposite to the feature minima. The final map is normalized and visualized. A representative COVID negative scan and its Grad-CAM [19] and Grad-CAM++ [2] explanations are shown in Figs. 2c, 2d, and 2e respectively. The causal map from Eq. 4 and the contrastive maps M_{¬(P∨Q)} and M_{¬P} are visualized in Figs. 2f, 2g, and 2h respectively. Note that while M_{¬P} appears similar to the causal map, it is biased by normalization and its values are smaller.
Effect of number of classes: In a binary classification setting, we need four feature maps - one Grad-CAM and three contrastive maps - to extract causal features. These are obtained by backpropagating the three contrastive losses and the logit for Grad-CAM. Hence, we backpropagate the power set of all possible class combinations. This translates to 2^N backpropagations for N classes. Therefore, this technique is suitable for a limited class scenario.
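The scaling argument can be checked directly: enumerating the power set of class combinations gives the number of backward passes. `required_backprops` is an illustrative helper, not part of the method's implementation.

```python
from itertools import chain, combinations

def required_backprops(classes):
    """Each subset of classes defines one target to backpropagate
    (the Grad-CAM logit plus the contrastive losses together cover
    the power set), so the count grows as 2**N for N classes."""
    subsets = chain.from_iterable(
        combinations(classes, r) for r in range(len(classes) + 1))
    return sum(1 for _ in subsets)

# binary COVID setting: 4 maps = 1 Grad-CAM + 3 contrastive
print(required_backprops(["negative", "positive"]))  # 4
```

Already at 10 classes this is 1024 backward passes per image, which is why the technique targets limited-class settings.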
4 Experiments
In this section, we detail the experiments that validate the causal nature of our proposed features. We perform two sets of experiments to validate within-network and inter-network causality. The COVID-19 dataset [25] consists of 349 COVID positive and 463 COVID negative CT scans. We train ResNets-18, 34, and 50 [3] and DenseNets-121 and 169 [7] as described in [4].
4.1 Within-network causality: Deletion and Insertion
The authors in [16] propose two causal metrics - deletion and insertion. In deletion, the identified causes are deleted pixel by pixel and the probability of the predicted class, as a function of the fraction of removed pixels, is monitored. In insertion, the non-causal pixels are added and the increase in probability as a function of the added fraction of pixels is noted. However, in a binary setting, the probability of a class rarely decreases to a large extent even after removing a majority of the pixels. Hence, we modify the deletion and insertion setup to measure accuracy instead of probability on masked images.
A threshold is applied on the Grad-CAM [19], Grad-CAM++ [2], and proposed causal maps. For deletion, the pixels greater than the given threshold are set to 0, with the rest set to 1, and vice versa for insertion. The binary mask is then multiplied with the original input image and the masked image is passed through the model, whose prediction is noted. This is conducted on all images in the testing set and the average test accuracy is calculated. The experiment is repeated across the full range of thresholds and the average accuracy in each case is noted. From Figs. 2f and 3, we see that the area highlighted by the proposed causal features is smaller than that of the compared methods. To objectively measure this area, we encode the original and masked images using Huffman coding [8] and take the ratio of their bit counts as the Huffman ratio. Each average accuracy for a threshold is now associated with a Huffman ratio. All accuracies are plotted as a function of their Huffman ratios in Fig. 2a. Consider one point each on the proposed causal and Grad-CAM curves depicting roughly the same averaged accuracy. From the corresponding bit rates on the x-axis, the causal features achieve this accuracy at a lower bit rate than Grad-CAM. Hence, dense causal features are encoded by fewer bits in the proposed method. This is validated in the insertion plot in Fig. 2b as well.
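The bit-rate measurement can be sketched as follows. `huffman_bits` computes the optimal code length via the standard merge-weight identity (total bits equal the sum of merged node weights), and `huffman_ratio` reflects our reading of the metric (masked bits over original bits); the pixel strings are toy stand-ins for flattened images.

```python
import heapq
from collections import Counter

def huffman_bits(symbols):
    """Total bits needed to Huffman-encode `symbols` (Huffman, 1952)."""
    freqs = list(Counter(symbols).values())
    if len(freqs) <= 1:
        return len(symbols)  # degenerate one-symbol alphabet: 1 bit each
    heapq.heapify(freqs)
    total = 0
    while len(freqs) > 1:
        a, b = heapq.heappop(freqs), heapq.heappop(freqs)
        total += a + b       # weighted path length accumulates merge weights
        heapq.heappush(freqs, a + b)
    return total

def huffman_ratio(original_pixels, masked_pixels):
    """Assumed form of the metric: bits for the masked image over bits for
    the original; smaller values mean denser highlighted features."""
    return huffman_bits(masked_pixels) / huffman_bits(original_pixels)

# toy flattened "images" (hypothetical, not the paper's CT data)
print(huffman_bits("aabbc"))            # 8 bits under the optimal code
print(huffman_ratio("aabbc", "aaaab"))  # masking collapses the alphabet
```

Masking zeroes out most pixels, so the masked image's alphabet collapses and its Huffman code shrinks, which is why sparser causal maps yield lower ratios.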
Tables 1 and 2: Threshold vs. Huffman ratio (%) and accuracies (%) of masked test images.
4.2 Inter-network causality: Transference of features
In this section, we mask input images based on features obtained from the proposed causal and Grad-CAM methods using ResNet-18 [3]. We then pass these masked images through other trained networks, including ResNets-34 and 50 [3] and DenseNets-121 and 169 [7]. This experiment is designed to validate the transferability of causal features identified by ResNet-18 to other networks. The accuracy and Huffman ratio results for different thresholds are shown in Table 1. The Huffman ratio for the proposed method is lower than that of Grad-CAM for all thresholds. Hence, the method identifies dense causal features from within Grad-CAM's features. The averaged accuracy of masked images is also shown for the other networks. In a majority of the categories, the proposed causal feature masked images outperform Grad-CAM feature masked images with a lower Huffman ratio. In Table 2, we extract masks using ResNet-34, perform deletion at the shown thresholds, and obtain Huffman ratios for all test images. These masked images are then passed into the corresponding networks and the accuracy results are shown. In a majority of the categories, the proposed causal features outperform Grad-CAM features.
4.3 Qualitative Analysis
The authors in [16] argue that humans must be kept out of the loop when evaluating causality. However, by definition, explanations are rationales used by networks to justify their decisions [9]. These justifications are made for the benefit of humans. Such justifications are required in fields like biomedical imaging, where deep learning tools are used as aids by medical practitioners. We visualize Grad-CAM explanations and their underlying causal features from the proposed technique in Fig. 3. Both original scans are from COVID positive patients. In Fig. 3a, Grad-CAM fails to highlight the circled red region that depicts COVID. More importantly, the extracted causal features are at the bottom right. Feeding the masked images into ResNet-18, the network classifies both correctly, but with higher confidence for the causal features. In Fig. 3b, we pick a scan whose Grad-CAM and causal features were classified with the same confidence but come from different regions within the scan.
These results suggest that it is the context features that add human interpretability, while the causal features aid classification. In real-world biomedical applications like the considered COVID-19 detection, it is imperative to identify and make decisions based on causal features. This merits further study into designing networks whose causal features are more human interpretable, similar to Grad-CAM's combined causal and context feature set.
5 Conclusion
In this paper, we formalize the causal and context features that a neural network bases its decisions on. We express context features in terms of contrastive features between classes that the neural network has implicitly learned. This allows separation between causal and context features. Grad-CAM is used as the explanatory mechanism from which causal features are extracted. We validate and establish the transferability of these causal features across networks. The visualizations suggest that the causal regions that a neural network bases its decisions on are not always human interpretable. This calls for more work in designing human-interpretable causal features, especially in fields like biomedical imaging.
References
[1] (2019) A machine-learning benchmark for facies classification. Interpretation 7 (3), pp. SE175–SE187.
[2] (2018) Grad-CAM++: generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 839–847.
[3] (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
[4] (2020) Sample-efficient deep learning for COVID-19 diagnosis based on CT scans. medRxiv.
[5] (1997) Probabilistic causation.
[6] (1999) On Reichenbach's common cause principle and Reichenbach's notion of common cause. The British Journal for the Philosophy of Science 50 (3), pp. 377–399.
[7] (2017) Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
[8] (1952) A method for the construction of minimum-redundancy codes. Proceedings of the IRE 40 (9), pp. 1098–1101.
[9] (1962) Scientific explanation. Vol. 13, U of Minnesota Press.
[10] (2019) Distorted representation space characterization through backpropagated gradients. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 2651–2655.
[11] (2020) Backpropagated gradient representations for anomaly detection. In European Conference on Computer Vision, pp. 206–226.
[12] (2020) Gradients as a measure of uncertainty in neural networks. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 2416–2420.
[13] (2017) Discovering causal signals in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6979–6987.
[14] (2007) The role of context in object recognition. Trends in Cognitive Sciences 11 (12), pp. 520–527.
[15] (2000) Models, reasoning and inference. Cambridge, UK: Cambridge University Press.
[16] (2018) RISE: randomized input sampling for explanation of black-box models. arXiv preprint arXiv:1806.07421.
[17] (2020) Contrastive explanations in neural networks. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 3289–3293.
[18] (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252.
[19] (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626.
[20] (2018) Towards understanding common features between natural and seismic images. In SEG Technical Program Expanded Abstracts 2018, pp. 2076–2080.
[21] (2014) Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806.
[22] (2003) Inferring causal networks from observations and interventions. Cognitive Science 27 (3), pp. 453–489.
[23] (2020) Implicit saliency in deep neural networks. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 2915–2919.
[24] (2019) Relative afferent pupillary defect screening through transfer learning. IEEE Journal of Biomedical and Health Informatics 24 (3), pp. 788–795.
[25] (2020) COVID-CT-dataset: a CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865.