Efficient Interpretation of Deep Learning Models Using Graph Structure and Cooperative Game Theory: Application to ASD Biomarker Discovery

12/14/2018 · by Xiaoxiao Li, et al. · Yale University

Discovering imaging biomarkers for autism spectrum disorder (ASD) is critical to help explain ASD and predict or monitor treatment outcomes. Toward this end, deep learning classifiers have recently been used for identifying ASD from functional magnetic resonance imaging (fMRI) with higher accuracy than traditional learning strategies. However, a key challenge with deep learning models is understanding just what image features the network is using, which can in turn be used to define the biomarkers. Current methods extract biomarkers, i.e., important features, by looking at how the prediction changes if "ignoring" one feature at a time. In this work, we go beyond looking at only individual features by using Shapley value explanation (SVE) from cooperative game theory. Cooperative game theory is advantageous here because it directly considers the interaction between features and can be applied to any machine learning method, making it a novel, more accurate way of determining instance-wise biomarker importance from deep learning models. A barrier to using SVE is its computational complexity: 2^N given N features. We explicitly reduce the complexity of SVE computation by two approaches based on the underlying graph structure of the input data: 1) only consider the centralized coalition of each feature; 2) a hierarchical pipeline which first clusters features into small communities, then applies SVE in each community. Monte Carlo approximation can be used for large permutation sets. We first validate our methods on the MNIST dataset and compare to human perception. Next, to ensure plausibility of our biomarker results, we train a Random Forest (RF) to classify ASD/control subjects from fMRI and compare SVE results to standard RF-based feature importance. Finally, we show initial results on ranked fMRI biomarkers using SVE on a deep learning classifier for the ASD/control dataset.




1 Introduction

Autism spectrum disorder (ASD) affects the structure and function of the brain. To better target the underlying roots of ASD for diagnosis and treatment, efforts to identify reliable biomarkers are growing [1]. Deep learning models have been used in fMRI analysis [2] to characterize the brain changes that occur in ASD [3]. However, how different brain regions coordinate within a deep neural network (DNN) classifier has not been previously explored. When features are not independent, Shapley value explanation (SVE) is a useful tool to study each feature's contribution [4, 5, 6]. The method is based on fundamental concepts from cooperative game theory [7], which assigns a unique distribution (among the players) of the total surplus generated by the coalition of all players in a cooperative game. However, if the dimension of the interacting features is high, SVE becomes computationally expensive (exponential time complexity).

The innovations of this study include: 1) we apply SVE to analyze the prediction power of interacting features; 2) our proposed method does not require retraining the classifier; 3) to handle the high-dimensional inputs of the DNN classifier, we propose two methods that reduce the number of SVE testing features once the underlying graph structure of the features is defined; and 4) unlike kernel SHAP proposed in [4], our proposed methods act as model interpreters without requiring model approximation. In section 2, we introduce the background on cooperative game theory. In section 3, we propose the two approaches to approximate the Shapley value, and we show that the approximation holds under certain assumptions. Three experiments are given in section 4 to show the feasibility and advantages of our proposed methods.

2 Background on Cooperative Game Theory

2.1 Shapley Value

Our approach to analyzing the contributions of individual nodes to the overall network is the assignment of Shapley values. The Shapley value is a means of fairly apportioning the collective profit attained by a coalition of players, based on the relative contributions of the players in some game. Let N be the set of all players, S ⊆ N be a subset of players forming a coalition within this game, and v be the function that assigns a real-valued profit v(S) to a subset of players. By definition, v(∅) = 0, where ∅ is the empty set. A Shapley value is assigned by a Shapley function φ, which associates each player i in N with a real number φ_i(v) and which is uniquely defined by the following axioms [7]: 1. Efficiency; 2. Symmetry; 3. Dummy; and 4. Additivity. In our context, we are interested in the brain regions that discriminate ASD and control subjects. The classification prediction score is the total value to be distributed, and each brain region is a player, which is assigned a unique reward (i.e., importance score) according to its contribution to the classifier.
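As a concrete sketch of this definition, the Shapley value can be computed exactly by enumerating all coalitions. The function below and its toy three-player game are illustrative, not from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all coalitions (O(2^N) evaluations).

    players: list of hashable player ids
    v: characteristic function mapping a frozenset of players to a real payoff,
       with v(frozenset()) == 0 (the empty coalition earns nothing)
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1, excluding player i
            for S in combinations(others, k):
                S = frozenset(S)
                # weight = |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy symmetric game: a coalition earns 1 as soon as it has at least 2 players.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
phi = shapley_values([1, 2, 3], v)  # by symmetry and efficiency, each player gets 1/3
```

The Efficiency axiom is easy to check here: the three values sum to v(N) = 1.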

2.2 Challenges Of Using Shapley Value

While Shapley values give a more accurate interpretation of the importance of each player in a coalition, their calculation is expensive. When the number of features (i.e., players in the game) is a massive N, the computational complexity is O(2^N), which is especially expensive if the model is slow to run. We propose addressing this computational challenge by utilizing the graph structure of the data. Consider the case when the underlying graph structure of the data is sparsely connected, e.g., the brain functional network. Under this observation, we propose two approaches (Fig. 1) to simplify the Shapley value calculation. Method I only considers the centralized coalition of each player, reducing the number of permutation cases by assigning weight 0 to features that rarely collaborate. Method II first applies community detection on the feature connectivity network to cluster similar features (forming different games and teams), then, within each community, assigns a feature's contribution by SVE.

Figure 1: a) Toy visualization of the graph structure of the input data. When estimating the contribution of feature i (yellow), b) C-SVE considers i's directly connected neighbors (red) and c) H-SVE considers the community (red) to which i belongs.

3 Methods

In classification tasks, only certain features in a given input provide evidence for the classification decision. For a given prediction, the classifier assigns a relevance value to each input feature with respect to a class label c. The probability of class c for input x is given by the predictive score of the DNN model f, where each component of the output of f represents the conditional probability of assigning a class label, i.e. f_c(x) = p(c | x).

The basic idea used in prediction difference analysis [8] is that the relevance of a feature can be estimated by measuring how the prediction changes if the feature is unknown. Here we extend this setting by considering the interaction of a set of different features instead of examining the features one by one. Denote the image corrupted at a feature set S as x_{\S}. To calculate p(c | x_{\S}), following [8], we marginalize out the corrupted feature set S:

    p(c | x_{\S}) = E_{x_S ∼ p(x_S)} [ p(c | x_{\S}, x_S) ]        (1)
Denote by m(x, S) the importance score evaluation function for input x. The prediction power φ_i(m, x) for the i-th feature is the weighted sum of all possible marginal contributions:

    φ_i(m, x) = Σ_{S ⊆ N\{i}} [ |S|! (|N| − |S| − 1)! / |N|! ] · [ m(x, S ∪ {i}) − m(x, S) ]        (2)
Similar to [5], we introduce the importance score of a feature set S,

    m(x, S) = E[ log p̂(c | x_S) ]        (3)

which can be interpreted as the negative of the expected number of bits required to encode the output of the model based on the input x_S.
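The corrupted-feature marginalization used above can be approximated in practice by replacing the corrupted features with values drawn from training instances. A minimal sketch follows; the dict-based instances and the additive scorer in the example are illustrative assumptions, not the paper's implementation:

```python
import random

def marginalized_score(x, S, background, predict, num_samples=50, seed=0):
    """Approximate the model score with feature set S 'unknown' by
    marginalization: replace x's values on S with values sampled from
    training instances.

    x: instance as a dict feature -> value
    S: iterable of features to corrupt
    background: list of training instances (dicts) to sample replacements from
    predict: function returning the class score for a dict instance
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice(background)
        corrupted = dict(x)
        for j in S:
            corrupted[j] = z[j]  # feature j is marginalized out
        total += predict(corrupted)
    return total / num_samples
```

With an additive toy scorer and a single background instance the result is deterministic, which makes the behavior easy to verify.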
Theorem 1. (N, m(x, ·)) is a cooperative form game and φ_i(m, x) corresponds to the game's Shapley value.
The proof can be directly borrowed from [6], showing that it has a unique solution and satisfies Axioms 1-4.

An illustrative example is the Boolean OR function f(x1, x2), which is one when x1 = 1 or x2 = 1 and zero otherwise, for x1, x2 ∈ {0, 1}. Suppose x1 and x2 are uniformly distributed and the base of the logarithm is 2. We aim to find the contributions of the features to predicting 1 given input x = (1, 1). If both values of x are unknown, one can predict that the probability of the result being 1 is 3/4. Relative to this uninformed prediction, we have m(x, {1}) = m(x, {2}) = m(x, {1, 2}) = log(4/3), and total value m(x, {1, 2}) = log(4/3). Therefore the contributions of each feature are φ1 = φ2 = (1/2) log(4/3). The generated contributions reveal that both features contribute the same amount towards the prediction being 1 given input (1, 1). In addition, we can interpret that there is coalition between the two players, since m(x, {1, 2}) ≠ m(x, {1}) + m(x, {2}). However, we would reach the myopic conclusion that both features are unimportant by only ignoring a single feature at a time, because given either single feature x_i = 1, the output being 1 is certain.
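This two-player example can be checked numerically. A small sketch, assuming (as one reading of the example) the Boolean OR function at input (1, 1) with uniformly distributed inputs and base-2 logarithm:

```python
from math import log2

# p[S]: probability the OR output is 1 at input x = (1, 1) when only the
# features in S are revealed (each hidden feature is uniform on {0, 1}).
p = {frozenset(): 0.75,
     frozenset({1}): 1.0,
     frozenset({2}): 1.0,
     frozenset({1, 2}): 1.0}
# Value of a coalition relative to the uninformed prediction, in bits.
v = {S: log2(q) - log2(p[frozenset()]) for S, q in p.items()}

# Two players: the Shapley value averages the marginal contribution over
# the two possible orderings.
phi1 = 0.5 * (v[frozenset({1})] - v[frozenset()]) \
     + 0.5 * (v[frozenset({1, 2})] - v[frozenset({2})])
phi2 = 0.5 * (v[frozenset({2})] - v[frozenset()]) \
     + 0.5 * (v[frozenset({1, 2})] - v[frozenset({1})])
# phi1 == phi2 == 0.5 * log2(4/3); together they account for v({1, 2}).

# Myopic single-feature test: masking x1 alone changes nothing, because the
# revealed x2 = 1 already makes the output certain.
single_1 = v[frozenset({1, 2})] - v[frozenset({2})]  # 0.0
```

The last line reproduces the "myopic" failure mode: ignoring one feature at a time assigns both features zero importance, while the Shapley split credits each with half the total value.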

With the underlying structure of the data, we have prior knowledge that some features of the data set are barely connected; in other words, there is very likely no coalition between these features. We define a connected graph G = (V, E) with nodes V and edges E. Given an adjacency matrix A = (a_ij) of the undirected graph (for example, the Pearson correlation of the mean time series of brain regions), we use a threshold δ to binarize A into Ā = (ā_ij), i.e. ā_ij = 1 when a_ij > δ and zero otherwise, resulting in a sparsely connected graph.
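Thresholding the adjacency matrix and reading off each node's neighbors can be sketched in a few lines; the 3x3 correlation matrix below is a made-up illustration:

```python
def binarize_adjacency(A, delta):
    """Binarize a weighted adjacency matrix: keep edge (i, j) iff A[i][j] > delta.

    A is a square list-of-lists (e.g. Pearson correlations between ROI mean
    time series); the diagonal is zeroed so no node is its own neighbor.
    """
    n = len(A)
    return [[1 if i != j and A[i][j] > delta else 0 for j in range(n)]
            for i in range(n)]

def neighborhood(A_bin, i):
    """1-step connected neighborhood of node i in the binarized graph."""
    return {j for j, a in enumerate(A_bin[i]) if a == 1}

A = [[1.0, 0.8, 0.1],
     [0.8, 1.0, 0.3],
     [0.1, 0.3, 1.0]]
A_bin = binarize_adjacency(A, delta=0.25)
# neighborhood(A_bin, 0) == {1}: only the strongly correlated ROI survives
```

A larger delta yields a sparser graph and hence cheaper C-SVE/H-SVE computations below.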

3.1 Method I: Centralized Shapley Value Explanation (C-SVE)

For a given feature i, its 1-step connected neighborhood is defined by the set N_i = {j : ā_ij = 1}. As an approximation, we propose Centralized Shapley Value Explanation (C-SVE), which only calculates the marginal contribution when a feature collaborates with its neighbors.
Definition 1. Given classifier f and sample x, the C-SVE assigns the prediction power on feature i by

    φ̂_i^C(m, x) = Σ_{S ⊆ N_i} [ |S|! (|N_i| − |S|)! / (|N_i| + 1)! ] · [ m(x, S ∪ {i}) − m(x, S) ]        (4)

The coefficients in front of the marginal contributions are a weighted transformation of the original SVE form (Eq. (2)): instead of assigning each permutation the same weight, sets not belonging to the neighborhood receive weight 0. In practice, we can reject the non-coalition permutations and average the marginal contributions over the remaining terms.
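One direct reading of C-SVE, treating the closed neighborhood N_i ∪ {i} as the player set and giving all other coalitions weight 0, might be sketched as follows; the toy game is illustrative:

```python
from itertools import combinations
from math import factorial

def c_sve(i, neighbors, v):
    """C-SVE sketch: Shapley value of feature i computed only over coalitions
    drawn from its 1-step neighborhood N_i; coalitions containing features
    outside N_i implicitly receive weight 0.

    neighbors: iterable of the features in N_i (not including i)
    v: characteristic function on frozensets of features
    """
    nb = list(neighbors)
    m = len(nb) + 1  # size of the closed neighborhood
    total = 0.0
    for k in range(len(nb) + 1):
        for S in combinations(nb, k):
            S = frozenset(S)
            # weight = |S|! (|N_i| - |S|)! / (|N_i| + 1)!
            w = factorial(k) * factorial(m - k - 1) / factorial(m)
            total += w * (v(S | {i}) - v(S))
    return total

# Toy game: a coalition earns 1 as soon as it has at least 2 players.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
full = c_sve(1, [2, 3], v)   # fully connected graph: recovers the exact 1/3
local = c_sve(1, [2], v)     # neighborhood restricted to player 2
```

When the graph is fully connected, C-SVE coincides with the exact Shapley value; the savings come from sparse neighborhoods, where the sum runs over 2^|N_i| coalitions instead of 2^|N|.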
Theorem 2. We have φ̂_i^C(m, x) = φ_i(m, x) almost surely if the marginal contribution of feature i is unaffected by features outside its neighborhood, i.e. m(x, S ∪ {i}) − m(x, S) = m(x, (S ∩ N_i) ∪ {i}) − m(x, S ∩ N_i) for any S ⊆ N\{i}.

The proof is shown in Appendix 0.A. It is important to show that our proposed approximation is a good one. For edges pruned by the threshold δ, the average time series of ROI i and ROI j are close to uncorrelated, which corresponds to a small edge weight in the graph that we created using Pearson correlation. Therefore we assume that such feature pairs form no coalition.

3.2 Method II: Hierarchical Shapley Value Explanation (H-SVE)

In method II, we approximate the Shapley value by a hierarchical approach: 1) detect communities in the graph, then 2) apply SVE in each community individually.

3.2.1 Modularity-based community detection

We use the same undirected graph defined in Method I, but apply the greedy modularity method [9] to divide all the features into non-overlapping communities. The whole feature set can then be expressed as a combination of non-overlapping communities, and the features in one community only cooperate within the group, hence are independent of those in different communities. Therefore we can define different Shapley value games in the different communities, while the Shapley values remain comparable within and across communities.

3.2.2 Shapley value of each feature in the community

With the assumption that players in different communities do not play in the same game (i.e., features rarely connect across communities), we treat the communities of features as independent. In order to compare feature importance across the whole brain, we first restrict the importance score evaluation to each community, i.e. for a feature subset S in community C_k we use m(x, S) with S ⊆ C_k.

Definition 2. Suppose the features are clustered into non-overlapping communities C_1, …, C_K. The H-SVE assigns the prediction power of feature i ∈ C_k by

    φ̂_i^H(m, x) = Σ_{S ⊆ C_k\{i}} [ |S|! (|C_k| − |S| − 1)! / |C_k|! ] · [ m(x, S ∪ {i}) − m(x, S) ]        (6)
Theorem 3. When the features in different communities are mutually independent, we have φ̂_i^H(m, x) = φ_i(m, x) almost surely.

The proof is similar to the proof of Theorem 2.
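Given a community partition, the per-community step of Definition 2 can be sketched as below; the additive toy game and the hand-picked partition are illustrative:

```python
from itertools import combinations
from math import factorial

def shapley_in_community(i, community, v):
    """Exact Shapley value of feature i, treating its community as the whole
    player set (features never cooperate across communities)."""
    others = [p for p in community if p != i]
    m = len(community)
    total = 0.0
    for k in range(m):
        for S in combinations(others, k):
            S = frozenset(S)
            w = factorial(k) * factorial(m - k - 1) / factorial(m)
            total += w * (v(S | {i}) - v(S))
    return total

def h_sve(communities, v):
    """H-SVE sketch: run SVE independently inside each detected community."""
    return {i: shapley_in_community(i, c, v)
            for c in communities for i in c}

# Additive toy game: each feature contributes exactly 1 on its own, so every
# Shapley value is 1 regardless of the community partition.
scores = h_sve([frozenset({1, 2}), frozenset({3})], lambda S: float(len(S)))
```

The cost drops from 2^|N| to Σ_k 2^|C_k| model evaluations, which is the point of clustering first.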

3.3 Monte Carlo Approximation For Large Neighborhood

Input: x, a given instance; M, number of samples; m, importance score function

1: φ_i ← 0
2: for j = 1 to M do
3:     choose a random permutation O of the features in W_i
4:     choose a random instance z from the training dataset
5:     x⁺ ← features in Pre^i(O) ∪ {i} taken from x, all other features taken from z
6:     x⁻ ← features in Pre^i(O) taken from x, all other features (including i) taken from z
7:     φ_i ← φ_i + m(x⁺) − m(x⁻)
8: end for
9: φ_i ← φ_i / M

(where W_i is the neighborhood of i in C-SVE or the community of i in H-SVE)

Algorithm 1 Approximating the prediction power φ_i of the i-th feature's value

Although we simplify SVE by the C-SVE or H-SVE methods, computation may still be challenging, for example when: 1) in C-SVE, the feature node to be analyzed is densely connected with the other nodes; or 2) in H-SVE, there exist large communities. Based on the alternative formulation of the Shapley value (Eq. (7)), let π(N) be the set of all ordered permutations of N, and let Pre^i(O) be the set of players which are predecessors of player i in the order O; we have

    φ_i(m, x) = (1 / |N|!) Σ_{O ∈ π(N)} [ m(x, Pre^i(O) ∪ {i}) − m(x, Pre^i(O)) ]        (7)
We use the following Monte Carlo (MC) algorithm to approximate equations (4) and (6). For a randomly drawn permutation O_j and reference instance z_j, we define the sampled marginal contribution

    X_j = m(x⁺_j) − m(x⁻_j)

where x⁺_j takes the features in Pre^i(O_j) ∪ {i} from x and the remaining features from z_j, and x⁻_j takes only the features in Pre^i(O_j) from x. Then the unbiased MC approximation, φ̂_i = (1/M) Σ_j X_j, can be expressed as in Algorithm 1. Given a size threshold on W_i, if the neighborhood or community of feature i exceeds it, we apply the MC approximation.
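Algorithm 1 can be sketched in the permutation-sampling style of [6]; the dict-based instances, the additive score function in the usage note, and the function name are illustrative assumptions:

```python
import random

def mc_prediction_power(x, background, i, region, m, num_samples=1000, seed=0):
    """Monte Carlo approximation of feature i's prediction power, following
    the sampling scheme of Algorithm 1.

    x: instance as a dict feature -> value
    background: list of training instances (dicts) used for replacement values
    region: the features of W_i (the C-SVE neighborhood or H-SVE community
            of i); must contain i
    m: importance score function taking a dict instance
    """
    rng = random.Random(seed)
    players = list(region)
    total = 0.0
    for _ in range(num_samples):
        order = players[:]
        rng.shuffle(order)               # random permutation of W_i
        z = rng.choice(background)       # random reference instance
        pos = order.index(i)
        x_plus, x_minus = dict(z), dict(z)
        for j in order[:pos + 1]:
            x_plus[j] = x[j]             # predecessors of i, and i, from x
        for j in order[:pos]:
            x_minus[j] = x[j]            # only the predecessors from x
        total += m(x_plus) - m(x_minus)
    return total / num_samples
```

With an additive score function the marginal contribution of i is x[i] minus the reference value, independent of the permutation, which gives a deterministic sanity check.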

4 Experiments and Results

4.1 Validation on MNIST Dataset

In order to show the feasibility of the two proposed approaches, we test the explanation results on the MNIST dataset [10], where we can compare against human judgment about feature importance. We trained a convolutional network (Conv2D(32) → Conv2D(64) → Dense(128) → Dense(10)). We parcellate each image into ROIs using slic [11] to mimic the setting of detecting salient brain ROIs for identifying ASD. Denoting the distance between the centers of ROI i and ROI j as d_ij, we define the connection between ROIs i and j as a decreasing function of d_ij.

Figure 2: The predictive power for identifying a) the digit 8 by b) C-SVE, c) H-SVE, and d) single-ROI explanation. The prediction difference after corrupting the top-contributing ROIs is denoted in the left corner.

The results are shown in Fig. 2, where we uniformly divided each ROI's importance score by the number of pixels in the ROI to mitigate dominance by large ROIs, and rescaled the scores for visualization. The interpretation results matched human perception that the "x cross" shape in the center is important for recognizing the digit 8. Compared with single-ROI testing, our proposed methods assigned smoother and more widely distributed importance scores to more pixels. To examine the effect of the important ROIs on prediction, we corrupted the pixels whose importance scores summed to a fixed fraction of the positive importance scores. We then compared the difference between the original prediction probability of the digit 8 and the new prediction probability using the corrupted image. C-SVE and H-SVE better fooled the classifier, decreasing the prediction probability by 0.8939 and 0.9089 respectively, compared to only a 0.2043 decrease for the single-ROI method. Some ROIs may not contribute to classification on their own but influence the results when combined with other regions. The single-ROI method assigns these ROIs an importance score of 0; by our proposed SVE methods, however, these ROIs can be discovered.

4.2 ASD Task-fMRI Dataset and Underlying Graph Structure

We tested our methods on a group of 82 children with ASD and 48 age- and IQ-matched healthy controls used for training the classifiers. Each subject underwent a biological motion perception task [3] fMRI scan (BOLD, TR = 2000 ms, TE = 25 ms) acquired on a Siemens MAGNETOM Trio TIM 3T scanner. We randomly split the data into training, validation (for model parameters), and testing sets.

The Automated Anatomical Labeling (AAL) atlas [12] was used to parcellate the brain into 116 regions. For each subject, we computed the adjacency matrix using Pearson correlation. We averaged the adjacency matrices over the patient subjects in the training data and binarized each edge based on whether its weight is larger than the average edge weight (assigning 1) or not (assigning 0). For the H-SVE method, we obtained the non-overlapping community clustering for each subject by the greedy modularity method [13], which resulted in 10 communities.
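The group-level graph construction just described (average the subject adjacency matrices, then binarize each edge against the average weight) might be sketched as follows; the tiny 3-ROI matrices are illustrative, and taking the mean off-diagonal weight as the threshold is one reading of "average weight":

```python
def group_binary_adjacency(mats):
    """Average subject adjacency matrices, then binarize each edge by whether
    its average weight exceeds the mean off-diagonal weight."""
    n = len(mats[0])
    avg = [[sum(m[i][j] for m in mats) / len(mats) for j in range(n)]
           for i in range(n)]
    off = [avg[i][j] for i in range(n) for j in range(n) if i != j]
    mean_w = sum(off) / len(off)
    return [[1 if i != j and avg[i][j] > mean_w else 0 for j in range(n)]
            for i in range(n)]

# Two toy "subjects" with 3 ROIs each (symmetric correlation matrices).
subj_a = [[0.0, 1.0, 0.0],
          [1.0, 0.0, 0.5],
          [0.0, 0.5, 0.0]]
subj_b = [[0.0, 0.5, 0.25],
          [0.5, 0.0, 0.5],
          [0.25, 0.5, 0.0]]
A_bin = group_binary_adjacency([subj_a, subj_b])
# edges (0, 1) and (1, 2) exceed the mean weight; (0, 2) does not
```

The resulting binary graph then feeds the C-SVE neighborhoods or the H-SVE community detection.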

4.3 Comparison with Random Forest-based Feature Importance

As an additional "reality check" for our method, we applied a Random Forest (RF) strategy (1000 trees) to the same dataset and compared the results, using the RF-based feature importance (mean Gini impurity decrease) as a standard method for comparison. Instead of inputting the entire fMRI image, we input the node-weighted modularity, defined from the partial correlation coefficients between ROIs; the inputs are therefore 116-dimensional vectors. Based on Axiom 4 (Additivity), we can treat each subject as a game and each ROI as a player, and then perform group-based analysis by summing each ROI's importance over the subjects. For a fair comparison, as in RF, we used all of the training dataset. The interpretation results are shown in Fig. 3. Seven of the top 10 important ROIs discovered by C-SVE and H-SVE overlapped with the RF interpretation.

Figure 3: The relative importance scores of the top 10 important ROIs assigned by Random Forest and their corresponding importance scores in C-SVE and H-SVE. The importance rank of each ROI is denoted on the bar.

4.4 Explaining The ASD Brain Biomarkers Used In Deep Convolutional Neural Network Classifier

Figure 4: 2CC3D network architecture
          C-SVE           H-SVE           Single Region
Δprob     0.720 (0.221)   0.693 (0.144)   0.335 (0.060)
Δacc      0.714           0.714           0.428

(Δprob = decrease in test prediction probability, Δacc = decrease in test accuracy)

Table 1: Prediction Decrease After Corrupting Important ROIs for the DNN
Figure 5: Top 20 predictive biomarkers detected by a) C-SVE and b) H-SVE for the deep learning classifier. More yellow ROIs signify higher importance.

Here we chose the deep neural network 2CC3D (Fig. 4) described in [14], using each voxel's mean and standard deviation as a two-channel input. We start with preprocessed 3D fMRI volumes downsampled in spatial resolution. We defined the original fMRI sequence as Img(t), the mean-channel sequence as Img_mean(t), and the standard-deviation channel as Img_std(t). For any t in [w, T],

    Img_mean(t) = (1/w) Σ_{τ=t−w+1}^{t} Img(τ),    Img_std(t) = sqrt( (1/w) Σ_{τ=t−w+1}^{t} (Img(τ) − Img_mean(t))² )

where w is the temporal sliding window size. The network was trained for ASD/control classification on the task-fMRI dataset. Running on a workstation with an Nvidia 1080 Ti GPU, we tested all 7 ASD subjects in the testing dataset with C-SVE and H-SVE, using 1000 samples for the MC approximation, which converged to stable ranks. As in the MNIST experiment, we divided each ROI's score by the number of voxels in the ROI, avoiding domination by large ROIs.
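The two-channel construction can be sketched per voxel as below; this pure-Python version is illustrative (the real pipeline applies it to full 3D volumes), and it uses the population standard deviation (dividing by w):

```python
from math import sqrt

def sliding_mean_std(series, w):
    """Sliding-window mean and standard-deviation channels for one voxel.

    series: list of T scalar values for a single voxel
    w: temporal sliding window size
    Returns (means, stds), each of length T - w + 1, where entry t covers
    the window series[t - w + 1 .. t].
    """
    means, stds = [], []
    for t in range(w - 1, len(series)):
        window = series[t - w + 1 : t + 1]
        mu = sum(window) / w
        var = sum((v - mu) ** 2 for v in window) / w  # population variance
        means.append(mu)
        stds.append(sqrt(var))
    return means, stds

# e.g. sliding_mean_std([1.0, 2.0, 3.0, 4.0], 2)
# -> means [1.5, 2.5, 3.5], stds [0.5, 0.5, 0.5]
```

Stacking the two outputs over all voxels yields the two-channel volumes consumed by 2CC3D.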

The contribution/prediction power of the regions (relative to the most important one), averaged over the testing subjects, is illustrated in Fig. 5 and listed in Fig. 6. There are 19 overlapping ROIs among the top 20 important ROIs found by C-SVE and H-SVE, although the orders differ. The Spearman rank-order correlation coefficient [15] was computed between the importance-score ranks of all the ROIs explained by the two methods. These detected regions are consistent with previous findings in the literature [2, 3]. Also, we used Neurosynth [16] to decode the functional keywords associated with the overlapping biomarkers found by C-SVE and H-SVE (Fig. 7). These top regions are positively related to self-referential/perspective-taking concepts (higher-level social communication) and negatively related to more basic social and language concepts (lower-level skills). In the manner described in Eq. (1), we corrupted the important ROIs (those whose positive importance scores, summed in rank order, reach a fixed fraction of the total) determined by C-SVE, H-SVE, and single-region testing separately, and calculated the average decrease in probability (mean and standard deviation) and accuracy for the subjects in the testing set. The results are listed in Table 1.

Notice that the top 10 biomarkers we discovered using SVE in the RF model differ from the ones found in the 2CC3D model. Possible reasons are: 1) the inputs are different; 2CC3D used activations whereas RF used connectivity, and the 2CC3D analysis used the ASD subjects in the testing set whereas the RF analysis used the entire training set; 2) the prediction accuracy of the RF model is much lower than that of 2CC3D; and 3) our proposed methods act as a model interpreter rather than a data interpreter, and may therefore respond with different sensitivity to different models.

Figure 6: The relative importance scores of the top 20 ROIs assigned by C-SVE and their corresponding importance scores in H-SVE for the deep learning model.
Figure 7: a) The top positive correlations and b) the top negative correlations between deep learning model biomarkers and functional keywords.

5 Conclusion And Future Work

Considering the interaction of features, we proposed two approaches (C-SVE and H-SVE) to analyze feature importance based on SVE, using the underlying graph structure of the data to simplify the calculation of the Shapley value. C-SVE only considers the centralized interaction, while H-SVE uses a hierarchical approach that first clusters the features into communities, then calculates the Shapley value within each community. When a feature's neighborhood or community still contains a large number of features, we apply an MC integration method for further approximation. Experiments on the MNIST dataset showed that our proposed methods capture more interpretable features. Comparing the results with Random Forest feature interpretation on the ASD task-fMRI dataset, we further validated the accuracy and feasibility of the proposed methods. When applying both methods to a deep learning model, we discovered similar potential brain biomarkers, which matched findings in the literature and had meaningful neurological interpretations. The pipeline can be generalized to other feature importance analysis problems where the underlying graph structure of the features is available.

Our future work includes testing the methods on different atlases, graph-building methods, and community clustering methods. In addition, an interaction score is embedded in the proposed algorithms; it can be disentangled to understand the interactions between features.

Appendix 0.A Appendix: Proof of Theorem 2

For any subset , we use the short notation and , noting that . Denoting , then we have


Abbreviating as , let , . Then


We have , since . Then . Thus, we can multiply the quotient in Eq. (12) by :


We have , since . So


Since , we have . Hence . Rewrite equations (4) and (2) as


then the expected error between and is


where we use Therefore we have .


  • [1] A. A. Goldani, S. R. Downs, F. Widjaja, B. Lawton, and R. L. Hendren, “Biomarkers in autism,” Frontiers in psychiatry, vol. 5, 2014.
  • [2] X. Li, N. C. Dvornek, J. Zhuang, P. Ventola, and J. S. Duncan, “Brain biomarker interpretation in asd using deep learning and fmri,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 206–214, Springer, 2018.
  • [3] M. D. Kaiser, C. M. Hudac, S. Shultz, S. M. Lee, C. Cheung, A. M. Berken, B. Deen, N. B. Pitskel, D. R. Sugrue, A. C. Voos, et al., “Neural signatures of autism,” Proceedings of the National Academy of Sciences, vol. 107, no. 49, pp. 21223–21228, 2010.
  • [4] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Advances in Neural Information Processing Systems, pp. 4765–4774, 2017.
  • [5] J. Chen, L. Song, M. J. Wainwright, and M. I. Jordan, “L-shapley and c-shapley: Efficient model interpretation for structured data,” arXiv preprint arXiv:1808.02610, 2018.
  • [6] I. Kononenko et al., “An efficient explanation of individual classifications using game theory,” Journal of Machine Learning Research, vol. 11, no. Jan, pp. 1–18, 2010.
  • [7] L. S. Shapley, “A value for n-person games,” Contributions to the Theory of Games, vol. 2, no. 28, pp. 307–317, 1953.
  • [8] L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, “Visualizing deep neural network decisions: Prediction difference analysis,” arXiv preprint arXiv:1702.04595, 2017.
  • [9] A. Clauset, M. E. Newman, and C. Moore, “Finding community structure in very large networks,” Physical review E, vol. 70, no. 6, p. 066111, 2004.
  • [10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  • [11] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, S. Süsstrunk, et al., “Slic superpixels compared to state-of-the-art superpixel methods,” IEEE transactions on pattern analysis and machine intelligence, vol. 34, no. 11, pp. 2274–2282, 2012.
  • [12] N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou, F. Crivello, O. Etard, N. Delcroix, B. Mazoyer, and M. Joliot, “Automated anatomical labeling of activations in spm using a macroscopic anatomical parcellation of the mni mri single-subject brain,” Neuroimage, 2002.
  • [13] M. Newman, Networks. Oxford university press, 2018.
  • [14] X. Li, N. C. Dvornek, X. Papademetris, J. Zhuang, L. H. Staib, P. Ventola, and J. S. Duncan, “2-channel convolutional 3d deep neural network (2cc3d) for fmri analysis: Asd classification and feature learning,” in Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on, pp. 1252–1255, IEEE, 2018.
  • [15] R. C. Young, J. T. Biggs, V. E. Ziegler, and D. A. Meyer, “A rating scale for mania: reliability, validity and sensitivity,” The British journal of psychiatry, vol. 133, no. 5, pp. 429–435, 1978.
  • [16] T. Yarkoni, R. A. Poldrack, T. E. Nichols, D. C. Van Essen, and T. D. Wager, “Large-scale automated synthesis of human functional neuroimaging data,” Nature methods, vol. 8, no. 8, p. 665, 2011.