Extracting Causal Visual Features for Limited Label Classification

03/23/2021 ∙ by Mohit Prabhushankar, et al. ∙ Georgia Institute of Technology

Neural networks trained to classify images do so by identifying features that allow them to distinguish between classes. These sets of features are either causal or context dependent. Grad-CAM is a popular method of visualizing both sets of features. In this paper, we formalize this feature divide and provide a methodology to extract causal features from Grad-CAM. We do so by defining context features as those features that allow contrast between the predicted class and any contrast class. We then apply a set-theoretic approach to separate causal from contrast features on COVID-19 CT scans. We show that, on average, the image regions containing the proposed causal features require 15% fewer bits when encoded using Huffman encoding, compared to Grad-CAM, for an average increase of 3% in classification accuracy. We further validate the transferability of causal features between networks and comment on the non-human-interpretable causal nature of current networks.


1 Introduction

In the field of image classification, deep learning networks have surpassed the top-5 human error rate [3] on the ImageNet dataset [18]. In this dataset, neural networks learn to differentiate between classes of natural images. The success of deep learning networks on natural images has fostered their usage on computed visual data, including the biomedical [24] and seismic [1, 20] fields. While the number of learnable classes in these fields is generally limited, neural networks have the additional task of aiding domain-specific experts by explaining the decisions behind their predictions, so as to promote trust in the network. For instance, in the field of biomedical imaging, a medical practitioner diagnoses whether a patient is COVID positive or negative based on CT scans [25]. The authors in [4] use transfer learning approaches on CT scans to perform the detection and provide explanatory results using Grad-CAM [19] to justify their network’s efficacy in detecting COVID-19. Grad-CAM highlights the features in the image that lead to the network’s decision. In this paper, we analyze a neural network’s causal capability using existing explanatory methods, by providing a technique to extract causal features from such methods.

Probabilistic causation assumes a causal relationship between two events if the occurrence of the first event increases the probability of occurrence of the second [5]. In image classification networks, the second event is the decision made by the network and the first is the set of image features that the decision is based on. A popular method for ascertaining probabilistic causality is through interventions in data [15]. In these models, the set of causal features is varied by intervening in the data generation process and observing the resulting change in the decision. Such interventions can, however, be long, complex, unethical, or impossible [22], as is the case for COVID CT scans. Hence, we forego interventionist causality and rely on observed causality, which uses passive observation to determine statistical causation. The authors in [13] propose that non-interventionist observation provides two sets of features - causal features C and context features X - that together lead to a decision. In other words, a decision is made based on both causal and context features in an image. Consequently, existing explanatory methods, including [19, 2, 21], highlight the union C ∪ X. However, they do not provide a methodology to extract either C or X separately. In this paper, we utilize contrastive features from [17] to approximate the context features X. We then propose a set-theoretic approach to abstract the causal features C out of Grad-CAM’s C ∪ X. The contributions of this paper include:

  • Formulating a set-theoretic interpretation of causal and context features for observed causality in visual data.

  • Expressing context features as contrastive features.

  • Providing an evaluation setup that tests for causality in a limited label scenario.

In Section 2, we motivate context via contrast and review Grad-CAM and contrastive explanations. We then motivate the proposed method and detail its procedure in Section 3. We finally present the results in Section 4 before concluding in Section 5.

Figure 1: Top: Venn diagram for the problem formulation based on Eq. 1. Bottom: Estimating the context features X from Eq. 2. Note that because the network does not classify with 100% confidence, we cannot resolve C and X independently.

2 Background and Related Works

Causal and Context features: The authors in [13] define causal features as visual features that exist within the physical body of the object in an image, and context features as visual features that surround the object in the image. In this paper, we forego definitions based on physical location in favor of a feature’s membership towards predicting a class P. We define causal features C as those features whose presence increases the likelihood of the decision P in any CT scan. Conversely, the absence of causal features decreases the probability of the decision P. These two definitions of causal features are derived from the Common Cause Principle [6] and are used in [16] to evaluate causality. We follow a slightly altered methodology to showcase the causal effectiveness of our method. We define context features X contrastively, as features that allow differentiating between the predicted class P and a contrast class Q, without necessarily causing P.

Context and Contrast features: In the field of human visual saliency, the authors in [14] argue for the existence of contextual features of a class that are represented by their relationship with the features of other classes. In [23], the implicit saliency of a neural network is extracted as an expectancy mismatch between the predicted class and all learned classes, thereby empirically validating the existence of contrastive information within neural networks. The authors in [17] extract this information and visualize it as explanations. In this paper, we represent the context features as contrast features.

Grad-CAM and Contrastive Explanations: Consider a trained binary classifier. Given an input image x, the logit outputs have dimension 2×1, and the predicted class P of image x is the index of the maximum element in the logits. Grad-CAM localizes all features in x that lead to the decision P by backpropagating the logit of P to the last convolutional layer. The per-channel gradients in that layer are summed to obtain an importance score for each channel, and these scores are multiplied with the activations of their respective channels. The importance-score-weighted activation maps are averaged to obtain the Grad-CAM mask for class P. The authors in [17] modified the Grad-CAM framework to backpropagate a loss function between the predicted class P and a contrast class Q. With the other steps remaining the same, this yields a contrast-importance-score-weighted contrast mask for the predicted and contrast classes P and Q. Note that gradients are used as features in multiple works, including [10, 11, 12].
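For concreteness, below is a minimal PyTorch sketch of these two computations, assuming a trained classifier `model` whose last convolutional layer is accessible (e.g., `model.layer4` for a torchvision ResNet-18). The contrastive loss shown is one simple choice for illustration, not necessarily the exact loss used in [17].

```python
import torch
import torch.nn.functional as F

def _weighted_activation_map(model, last_conv, x, scalar_fn):
    """Backpropagate the scalar returned by scalar_fn(logits) to last_conv,
    average-pool the gradients into per-channel importance scores, and use
    them to weight the activations (the Grad-CAM recipe)."""
    acts, grads = [], []
    h1 = last_conv.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = last_conv.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        logits = model(x)                      # shape (1, 2) for the binary classifier
        model.zero_grad()
        scalar_fn(logits).backward()
    finally:
        h1.remove(); h2.remove()
    A, dA = acts[0], grads[0]                  # both (1, K, H, W)
    alpha = dA.mean(dim=(2, 3), keepdim=True)  # importance score per channel
    mask = F.relu((alpha * A).sum(dim=1))      # importance-weighted activations
    return (mask / (mask.max() + 1e-8)).squeeze(0).detach()

def grad_cam(model, last_conv, x, p):
    # 'Why P?': backpropagate the logit of the predicted class P.
    return _weighted_activation_map(model, last_conv, x, lambda z: z[0, p])

def contrast_map(model, last_conv, x, q):
    # 'Why P, rather than Q?': backpropagate a loss toward the contrast class Q.
    # Other contrastive questions swap in different losses.
    return _weighted_activation_map(
        model, last_conv, x,
        lambda z: F.cross_entropy(z, torch.tensor([q], device=z.device)))
```

For a ResNet-18 fine-tuned on the CT scans, `grad_cam(model, model.layer4, scan, p)` would produce the Grad-CAM mask for the predicted class p, and `contrast_map` the corresponding contrast mask.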

Figure 2: (a) Deletion - curves to the left are ideal. (b) Insertion - curves to the right are ideal. (c) Original scan. (d) Grad-CAM. (e) Grad-CAM++. (f) Proposed causal explanation. (g), (h) Contrast maps used in estimating the context.

3 Proposed Method

We first motivate our method using set theory and then describe how the causal features are extracted.

3.1 Theory

Consider the setting described in Section 2, where a binary classification network is trained on COVID-19 CT scans [25]. Once trained, for any given scan from the dataset, Grad-CAM provides visual features that combine both causal and context features. Hence, Grad-CAM provides a mask covering C ∪ X for the prediction P on a given scan. If the network classifies the scan correctly with 100% confidence, then it has resolved the causal and context features independently, such that C and X are disjoint. However, this rarely occurs in practice, and we assume that C and X overlap. Hence, our goal is to extract the relative complement C \ X given only the union C ∪ X. This is illustrated in Fig. 1. Based on a visual inspection of the Venn diagram, we can rewrite C \ X as,

(1)
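As a hedged sketch in our own notation (not necessarily the exact form of Eq. 1), the rewrite rests on the elementary identity that the causal-only features are the relative complement of the context in the Grad-CAM mask:

```latex
% C: causal features, X: context features, M = C \cup X: Grad-CAM mask
C \setminus X \;=\; (C \cup X) \setminus X \;=\; M \setminus X
```

Since M is what Grad-CAM provides, the only missing ingredient is an estimate of the context X, which the contrastive maps below supply.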

Note that we do not have access to either C or X; we are only provided with C ∪ X. In this paper, we estimate the context features X using contrastive features from [17]. Specifically, continuing the notation from Section 2, we represent X as,

(2)

Substituting Eq. 2 back in Eq. 1, we obtain our final formulation,

(3)

A Venn diagram visualization is presented in Fig. 1. We qualitatively explain each of the contrastive terms below.

3.2 Contrastive features

‘Why P or Q?’: This map highlights features that contrastively lead to either decision P or decision Q. In the binary setting, we approximate this to be the set of all possible features. Borrowing the notation from Section 2, it is obtained by backpropagating the corresponding contrastive loss to obtain a contrast-importance score.

‘Why neither P nor Q?’: This map highlights features that do not increase the probability of either P or Q. It is obtained by backpropagating the corresponding contrastive loss to obtain a contrast-importance score.

‘Why not P with 100% confidence?’: This map highlights all unresolved causal features. It is obtained by backpropagating the corresponding contrastive loss to obtain a contrast-importance score.

3.3 Implementation

Continuing the notations from Section 2, the implementation equivalent of Eq. 3 is given by,

(4)

where the Grad-CAM contribution is its importance score and the contrast contributions are the normalized importance scores from the contrast maps. The overall negative sign occurs because the contrast-importance scores are gradients whose directions are opposite to the feature minima. The final map is normalized and visualized. A representative COVID-negative scan and its Grad-CAM [19] and Grad-CAM++ [2] explanations are shown in Figs. 2c, 2d, and 2e respectively. The causal map from Eq. 4 and the contrast maps used to estimate the context are visualized in Figs. 2f, 2g, and 2h respectively. Note that while the contrast maps in Figs. 2g and 2h appear similar, one is biased by normalization and its values are lower.
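The following simplified mask-level sketch, reusing the hypothetical helpers from the Grad-CAM sketch in Section 2, illustrates the idea of removing the contrastively estimated context from the Grad-CAM evidence; it does not reproduce the exact importance-score arithmetic of Eq. 4.

```python
import torch

def normalize(m):
    # scale a saliency map to [0, 1]
    m = m - m.min()
    return m / (m.max() + 1e-8)

def causal_mask(gradcam_mask, context_estimate):
    """Keep the Grad-CAM evidence that is not accounted for by the
    contrastively estimated context (a mask-level reading of C \\ X)."""
    residue = normalize(gradcam_mask) - normalize(context_estimate)
    return normalize(torch.relu(residue))
```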

Effect of the number of classes: In a binary classification setting, we need four feature maps - one Grad-CAM map and three contrast maps - to extract the causal features. These are obtained by backpropagating the three contrastive losses and the logit for Grad-CAM. In effect, we backpropagate the power set of all possible class combinations, which translates to 2^n backpropagations for n classes. Therefore, this technique is suitable for a limited-class scenario.
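The 2^n count comes from enumerating every subset of the label set; a small illustration is below (class names are placeholders).

```python
from itertools import chain, combinations

def class_subsets(classes):
    """All subsets of the label set; each subset corresponds to one
    backpropagated map (Grad-CAM or one of the contrastive questions)."""
    return list(chain.from_iterable(
        combinations(classes, r) for r in range(len(classes) + 1)))

print(len(class_subsets(["COVID-positive", "COVID-negative"])))  # 4 = 2^2 maps
```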

4 Experiments

In this section, we detail the experiments that validate the causal nature of our proposed features. We perform two sets of experiments to validate within-network and inter-network causality. The COVID-19 dataset [25] consists of 349 COVID-positive CT scans and 463 COVID-negative CT scans. We train ResNet-18, 34, and 50 [3] and DenseNet-121 and 169 [7] as described in [4].

4.1 Within-network causality: Deletion and Insertion

The authors in [16] propose two causal metrics - deletion and insertion. In deletion, the identified causes are deleted pixel by pixel and the probability of the predicted class is monitored as a function of the fraction of removed pixels. In insertion, the non-causal pixels are added and the increase in probability is noted as a function of the fraction of added pixels. However, in a binary setting, the probability of a class rarely decreases substantially even after removing a majority of the pixels. Hence, we modify the deletion and insertion setup to measure accuracy, rather than probability, on masked images.

A threshold is applied to the Grad-CAM [19], Grad-CAM++ [2], and proposed causal maps. For deletion, mask pixels greater than the given threshold are set to 0 and the rest to 1, and vice versa for insertion. The binary mask is then multiplied with the original input image and the masked image is passed through the model, whose prediction is noted. This is conducted on all images in the testing set and the average test accuracy is calculated. The experiment is repeated over a range of thresholds with a fixed increment, and the average accuracy in each case is noted. From Figs. 2f and 3, we see that the area highlighted by the proposed causal features is smaller than that of the compared methods. To objectively measure this area, we encode the original and masked images using Huffman coding [8] and take the ratio of their bit counts. Each average accuracy at a given threshold is thus associated with a Huffman bit ratio. All accuracies are plotted as a function of their bit ratio in Fig. 2a. Consider a point on the proposed causal curve and a point on the Grad-CAM curve that depict roughly the same average accuracy. From the corresponding bit ratios on the x-axis, the causal features achieve this accuracy at a lower bit rate than Grad-CAM. Hence, the proposed method encodes dense causal features with fewer bits. This is validated in the insertion plot in Fig. 2b as well.
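Below is a hedged sketch of this evaluation loop, assuming explanation maps in [0, 1], images as float arrays of shape (C, H, W) in [0, 1], and a trained PyTorch `model`; the threshold grid and the 8-bit quantization applied before Huffman coding are illustrative choices, not the paper's exact settings.

```python
import heapq
from collections import Counter

import numpy as np
import torch

def huffman_bits(symbols):
    """Total bits needed to Huffman-encode the symbol sequence
    (sum of merged node weights in the Huffman tree)."""
    freqs = list(Counter(symbols).values())
    if len(freqs) <= 1:
        return len(symbols)                      # degenerate single-symbol case
    heapq.heapify(freqs)
    total = 0
    while len(freqs) > 1:
        a, b = heapq.heappop(freqs), heapq.heappop(freqs)
        heapq.heappush(freqs, a + b)
        total += a + b
    return total

def deletion_curve(model, maps, images, labels, thresholds=np.arange(0.1, 1.0, 0.1)):
    """For each threshold: delete (zero out) pixels whose explanation value
    exceeds it, then record average accuracy and average Huffman bit ratio."""
    points = []
    for t in thresholds:
        correct, ratio = 0, 0.0
        for m, x, y in zip(maps, images, labels):
            keep = (m <= t).astype(np.float32)   # deletion: drop high-importance pixels
            masked = x * keep                    # broadcasts over the channel axis
            with torch.no_grad():
                pred = model(torch.from_numpy(masked)[None].float()).argmax(1).item()
            correct += int(pred == y)
            ratio += huffman_bits((masked * 255).astype(np.uint8).ravel().tolist()) / \
                     huffman_bits((x * 255).astype(np.uint8).ravel().tolist())
        points.append((ratio / len(images), correct / len(images)))
    return points   # (average Huffman ratio, average accuracy) per threshold
```

The insertion curve is the mirror image: only the pixels above the threshold are kept instead of removed.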

Table 1: Causal feature transference from ResNet-18 to other architectures. Each cell pair is GradCAM / Causal; the Huffman column is the bit ratio of the masked to original images, and the remaining columns are classification accuracies.

| Threshold | Huffman ratio | ResNet-34 | ResNet-50 | DenseNet-121 | DenseNet-169 |
|---|---|---|---|---|---|
| 0.1 | 0.7802 / 0.5456 | 0.6158 / 0.6502 | 0.7586 / 0.7537 | 0.6404 / 0.6453 | 0.7044 / 0.7291 |
| 0.2 | 0.6442 / 0.4549 | 0.5911 / 0.6355 | 0.7734 / 0.7783 | 0.6158 / 0.6256 | 0.7143 / 0.7685 |
| 0.3 | 0.5329 / 0.3879 | 0.5665 / 0.5764 | 0.7241 / 0.7980 | 0.6108 / 0.6207 | 0.6946 / 0.7389 |
| 0.4 | 0.4434 / 0.3329 | 0.5074 / 0.5419 | 0.6700 / 0.7882 | 0.5911 / 0.5961 | 0.6305 / 0.7192 |
| 0.5 | 0.3715 / 0.2886 | 0.5025 / 0.5222 | 0.6010 / 0.7586 | 0.5911 / 0.6108 | 0.6059 / 0.6847 |
Table 2: Causal feature transference from ResNet-34 to other architectures. Each cell pair is GradCAM / Causal; the Huffman column is the bit ratio of the masked to original images, and the remaining columns are classification accuracies.

| Threshold | Huffman ratio | ResNet-18 | ResNet-50 | DenseNet-121 | DenseNet-169 |
|---|---|---|---|---|---|
| 0.1 | 0.8352 / 0.6531 | 0.7094 / 0.7044 | 0.7783 / 0.7241 | 0.6108 / 0.6552 | 0.7389 / 0.7586 |
| 0.2 | 0.7493 / 0.5646 | 0.7044 / 0.6995 | 0.7931 / 0.7586 | 0.6059 / 0.6256 | 0.7537 / 0.7586 |
| 0.3 | 0.6584 / 0.4781 | 0.6749 / 0.6749 | 0.8177 / 0.7537 | 0.6059 / 0.6059 | 0.7389 / 0.7340 |
| 0.4 | 0.5672 / 0.3983 | 0.6502 / 0.6650 | 0.7635 / 0.7685 | 0.6059 / 0.5911 | 0.7192 / 0.7044 |
| 0.5 | 0.4749 / 0.3292 | 0.6010 / 0.6059 | 0.7783 / 0.7537 | 0.5764 / 0.5616 | 0.6897 / 0.6552 |

4.2 Inter-network causality: Transference of features

In this section, we mask input images based on features obtained from the proposed causal method and from Grad-CAM using ResNet-18 [3]. We then pass these masked images through other trained networks, namely ResNet-34 and 50 [3] and DenseNet-121 and 169 [7]. This experiment is designed to validate the transferability of the causal features identified by ResNet-18 to other networks. The accuracy and Huffman-ratio results for different thresholds are shown in Table 1. The Huffman ratio for the proposed method is lower than that of Grad-CAM at every threshold; hence, it identifies dense causal features from within Grad-CAM's features. The averaged accuracy of the masked images is also shown for the other networks. In the majority of the categories, images masked with the proposed causal features outperform images masked with Grad-CAM features while requiring a lower Huffman ratio. In Table 2, we extract masks using ResNet-34, perform deletion at the shown thresholds, and obtain Huffman ratios over all test images. These masked images are then passed into the corresponding networks and the accuracy results are shown. In several of the categories, the proposed causal features outperform the Grad-CAM features.

Figure 3: (a) A non-human-interpretable causal feature yields higher prediction confidence. (b) The prediction confidences from both explanations are equal.

4.3 Qualitative Analysis

The authors in [16] argue that humans must be kept out of the loop when evaluating causality. However, by definition, explanations are rationales used by networks to justify their decisions [9], and these justifications are made for the benefit of humans. Such justifications are required in fields like biomedical imaging, where deep learning tools are used as aids by medical practitioners. We visualize Grad-CAM explanations and their underlying causal features from the proposed technique in Fig. 3. Both original scans are from COVID-positive patients. In Fig. 3a, Grad-CAM fails to highlight the red-circled region that indicates COVID. More importantly, the extracted causal features lie at the bottom right. Feeding the masked images into ResNet-18, the network classifies both correctly, but with higher confidence for the causal features. In Fig. 3b, we pick a scan whose Grad-CAM and causal features were classified with the same confidence but lie in different regions of the scan.

These results suggest that it is the context features that add human interpretability, while the causal features drive classification. In real-world biomedical applications such as the considered COVID-19 detection task, it is imperative to identify and base decisions on causal features. Designing networks whose causal features are as human-interpretable as Grad-CAM's combined causal and context feature set merits further study.

5 Conclusion

In this paper, we formalize the causal and context features that a neural network bases its decisions on. We express context features in terms of contrastive features between classes that the neural network has implicitly learned, which allows the separation of causal from context features. Grad-CAM is used as the explanatory mechanism from which the causal features are extracted. We validate and establish the transferability of these causal features across networks. The visualizations suggest that the causal regions that a neural network bases its decisions on are not always human-interpretable. This calls for more work on designing human-interpretable causal features, especially in fields like biomedical imaging.

References

  • [1] Y. Alaudah, P. Michałowicz, M. Alfarraj, and G. AlRegib (2019) A machine-learning benchmark for facies classification. Interpretation 7 (3), pp. SE175–SE187.
  • [2] A. Chattopadhay, A. Sarkar, P. Howlader, and V. N. Balasubramanian (2018) Grad-CAM++: generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 839–847.
  • [3] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [4] X. He, X. Yang, S. Zhang, J. Zhao, Y. Zhang, E. Xing, and P. Xie (2020) Sample-efficient deep learning for COVID-19 diagnosis based on CT scans. medRxiv.
  • [5] C. Hitchcock (1997) Probabilistic causation.
  • [6] G. Hofer-Szabó, M. Rédei, and L. E. Szabó (1999) On Reichenbach's common cause principle and Reichenbach's notion of common cause. The British Journal for the Philosophy of Science 50 (3), pp. 377–399.
  • [7] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
  • [8] D. A. Huffman (1952) A method for the construction of minimum-redundancy codes. Proceedings of the IRE 40 (9), pp. 1098–1101.
  • [9] P. Kitcher and W. C. Salmon (1962) Scientific explanation. Vol. 13, U of Minnesota Press.
  • [10] G. Kwon, M. Prabhushankar, D. Temel, and G. AlRegib (2019) Distorted representation space characterization through backpropagated gradients. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 2651–2655.
  • [11] G. Kwon, M. Prabhushankar, D. Temel, and G. AlRegib (2020) Backpropagated gradient representations for anomaly detection. In European Conference on Computer Vision, pp. 206–226.
  • [12] J. Lee and G. AlRegib (2020) Gradients as a measure of uncertainty in neural networks. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 2416–2420.
  • [13] D. Lopez-Paz, R. Nishihara, S. Chintala, B. Scholkopf, and L. Bottou (2017) Discovering causal signals in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6979–6987.
  • [14] A. Oliva and A. Torralba (2007) The role of context in object recognition. Trends in Cognitive Sciences 11 (12), pp. 520–527.
  • [15] J. Pearl (2000) Causality: models, reasoning and inference. Cambridge, UK: Cambridge University Press.
  • [16] V. Petsiuk, A. Das, and K. Saenko (2018) RISE: randomized input sampling for explanation of black-box models. arXiv preprint arXiv:1806.07421.
  • [17] M. Prabhushankar, G. Kwon, D. Temel, and G. AlRegib (2020) Contrastive explanations in neural networks. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 3289–3293.
  • [18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252.
  • [19] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626.
  • [20] M. A. Shafiq, M. Prabhushankar, H. Di, and G. AlRegib (2018) Towards understanding common features between natural and seismic images. In SEG Technical Program Expanded Abstracts 2018, pp. 2076–2080.
  • [21] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller (2014) Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806.
  • [22] M. Steyvers, J. B. Tenenbaum, E. Wagenmakers, and B. Blum (2003) Inferring causal networks from observations and interventions. Cognitive Science 27 (3), pp. 453–489.
  • [23] Y. Sun, M. Prabhushankar, and G. AlRegib (2020) Implicit saliency in deep neural networks. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 2915–2919.
  • [24] D. Temel, M. J. Mathew, G. AlRegib, and Y. M. Khalifa (2019) Relative afferent pupillary defect screening through transfer learning. IEEE Journal of Biomedical and Health Informatics 24 (3), pp. 788–795.
  • [25] J. Zhao, Y. Zhang, X. He, and P. Xie (2020) COVID-CT-Dataset: a CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865.