Edge-Level Explanations for Graph Neural Networks by Extending Explainability Methods for Convolutional Neural Networks

11/01/2021
by Tetsu Kasanishi, et al.
The University of Tokyo

Graph Neural Networks (GNNs) are deep learning models that take graph data as inputs, and they are applied to various tasks such as traffic prediction and molecular property prediction. However, owing to the complexity of GNNs, it has been difficult to analyze which parts of the inputs affect the GNN model's outputs. In this study, we extend explainability methods for Convolutional Neural Networks (CNNs), such as Local Interpretable Model-Agnostic Explanations (LIME), Gradient-Based Saliency Maps, and Gradient-Weighted Class Activation Mapping (Grad-CAM), to GNNs, and predict which edges in the input graphs are important for GNN decisions. The experimental results indicate that the LIME-based approach is the most efficient explainability method for multiple tasks in real-world situations, outperforming even the state-of-the-art method in GNN explainability.


I Introduction

In recent years, there has been a considerable number of studies on explaining the decisions of deep learning models. While deep learning models have been used to improve the accuracy of various tasks, they face a challenge: it is difficult to understand the basis of their decisions. This makes it difficult to use deep learning models for tasks that require explanations, such as medical image processing. Explanations are also helpful in understanding a model's behavior. For these reasons, research on understanding the rationale for the decisions of deep learning models has been widely conducted.

GNNs are deep learning models that take graph data as inputs. In many real-world situations, data are represented in the form of graphs. For example, molecular structures can be represented as graphs where nodes are atoms and edges are chemical bonds. Therefore, GNNs are becoming powerful tools that can be applied to a variety of tasks such as drug discovery. However, similar to other deep learning models, GNNs cannot present the reasoning behind their decisions.

In this study, we extend several explainability methods for CNNs to GNNs to calculate the importance of the edges for the models' outputs. Reference [13] states that graph convolution in GNNs is generalized from 2-D convolution in CNNs because both take the weighted sum of information from neighboring nodes/pixels. This similarity between GNNs and CNNs makes it reasonable to apply techniques used for CNNs to GNNs. Thus, we investigate LIME [8], Gradient-Based Saliency Maps [11], and Grad-CAM [9], all of which are frequently used in computer vision tasks. Although the techniques specified in this study are not novel, our contribution is that we extend off-the-shelf explainability methods in computer vision to GNNs and experimentally show that LIME is the best approach. Furthermore, we found LIME to be superior to a state-of-the-art method [14].

II Related Work

II-A Formulation of GNNs

Although a variety of GNN methods have been proposed, most of them can be expressed in the framework of message passing [1] as follows. Let $G$ be an input graph of GNNs, $h_v^{(t)}$ be the feature vector of the node $v$ in $G$ at the $t$-th message passing phase, and $N(v)$ be the set of nodes adjacent to the node $v$. The message passing operation is expressed by the following equation:

$$h_v^{(t+1)} = U_t\left(h_v^{(t)}, \sum_{u \in N(v)} M_t\left(h_v^{(t)}, h_u^{(t)}\right)\right) \quad (1)$$

Here, $M_t$ and $U_t$ are functions defined for different methods, where $M_t$ collects information from neighboring nodes, and $U_t$ updates the feature vector of each node based on the neighboring information. By performing these message passing operations $T$ times, a higher-order feature vector $h_v^{(T)}$ of the node $v$ can be obtained.

For a graph classification task, the feature vector for the entire graph is then calculated from each node’s feature vector by taking its summation or mean, for example. By performing the above-mentioned operations, feature vectors for each node or each graph can be obtained. Finally, these feature vectors are fed into fully connected layers.
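To make the formulation concrete, the message passing and readout steps above can be sketched in plain NumPy. The sum aggregation and the tanh single-layer update here are illustrative placeholders for the method-specific functions $M_t$ and $U_t$, not the operators of any particular GNN:

```python
import numpy as np

def message_passing_step(h, adj, W_msg, W_upd):
    """One message-passing phase: each node sums messages from its
    neighbors (the M_t role) and updates its own feature vector
    from its current state plus the aggregated message (the U_t role).

    h     : (n, d) node feature matrix h^(t)
    adj   : (n, n) binary adjacency matrix
    W_msg : (d, d) message weight matrix (placeholder for M_t)
    W_upd : (2d, d) update weight matrix (placeholder for U_t)
    """
    messages = adj @ (h @ W_msg)          # sum over neighbors N(v)
    combined = np.concatenate([h, messages], axis=1)
    return np.tanh(combined @ W_upd)      # h^(t+1)

def readout(h):
    """Graph-level feature vector: mean over node features."""
    return h.mean(axis=0)

# toy graph: a triangle with 4-dimensional node features
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
h = rng.normal(size=(3, 4))
W_msg = rng.normal(size=(4, 4))
W_upd = rng.normal(size=(8, 4))

for _ in range(2):                        # T = 2 message-passing phases
    h = message_passing_step(h, adj, W_msg, W_upd)
print(readout(h).shape)                   # (4,)
```

The graph-level vector from `readout` would then be fed into fully connected layers, as described above.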

II-B Explainability Methods

There has been a considerable amount of research on explaining the decisions of deep learning models. For example, Ribeiro et al. [8] proposed LIME, which can be applied to machine learning models in general, including deep learning models. Furthermore, significant research on explainability methods designed specifically for CNN models has been carried out. For example, Simonyan et al. [11] proposed Gradient-Based Saliency Maps, which simply differentiates the output of the model with respect to each pixel and creates a heat map. Another explainability method for CNN models is Grad-CAM [9], which uses the values of the feature maps in CNN models and the gradient of the output with respect to them to calculate each pixel's importance and produce a heat map.

In contrast to the explainability methods for CNNs, fewer works explain GNN models. For example, Pope et al. [6] extended Grad-CAM to GNNs and calculated the importance of each node for the output of the GNN model. Note that this approach is designed to explain the contribution of the nodes only. GNNExplainer [14] is an explainability method for GNNs that explains which edges and which dimensions of the node features are responsible for the GNN model's outputs. In addition, there are several other approaches to explaining the role of edges in graphs. Please refer to [15] for a comprehensive survey of this field.

III Proposed Methods

In this section, we extend the explainability methods for CNNs to GNNs to predict which edges are important for GNN decisions. We define an important edge as “an edge that contributes to the increase of the GNN model’s output.”

III-A LIME-Based Method

First, we propose a LIME-based [8] explainability method for GNNs. In the message passing operation described in the previous section, each node gathers the features of its adjacent nodes; the edges are thus the paths through which node features pass. We therefore define the operation of multiplying the node features passing through a certain edge by a weight as "perturbing an edge." In the original LIME method, each part of the input is either removed completely or preserved; the perturbing operation of multiplying information by a continuous weight differs from this simple removal operation.

Let $m$ be the number of edges in the input graph $G$, $p$ be the probability of perturbing each edge, and $N$ be the number of samples; $p$ and $N$ are both hyperparameters. First, $N$ graphs $G_1, \ldots, G_N$, in which each edge of $G$ is perturbed with the probability $p$, are input to the GNN model. Then, for each $G_i$, the vector $x_i \in \mathbb{R}^m$ that indicates which edges were perturbed and the output value $y_i$ of the model are obtained. Here, each dimension of $x_i$ corresponds to an edge of $G$ and holds the weight by which the information passing through that edge is multiplied. A linear regression model $g$ is then constructed to predict $y_i$ from $x_i$, and the importance of each edge is obtained as the coefficients of $g$. As the linear regression model, we use Lasso [12], which has a regularization term that limits the number of nonzero coefficients. The loss function used for training the Lasso model $g$ is the weighted mean squared error (MSE):

$$\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \pi(x_i) \left( y_i - w^\top x_i \right)^2 + \lambda \lVert w \rVert_1 \quad (2)$$

where $w$ represents $g$'s coefficients, $\pi(x_i)$ is the proximity weight of the sample $x_i$, and $\lambda$ and the kernel width of $\pi$ are both hyperparameters.
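A minimal sketch of the sampling-and-regression loop, with the GNN replaced by a hypothetical black-box function `f` over per-edge weights and a small hand-rolled coordinate-descent Lasso standing in for an off-the-shelf solver. The uniform sample weighting here (instead of the weighted MSE above) is a simplifying assumption:

```python
import numpy as np

def lasso_cd(X, y, alpha=0.01, n_iter=200):
    """Minimal Lasso via cyclic coordinate descent with soft-thresholding,
    minimizing (1/2n)||y - Xw||^2 + alpha*||w||_1."""
    n, m = X.shape
    w = np.zeros(m)
    for _ in range(n_iter):
        for j in range(m):
            r = y - X @ w + X[:, j] * w[j]     # residual excluding feature j
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w

def lime_edge_importance(f, n_edges, n_samples=500, p=0.5, seed=0):
    """Perturb each edge with probability p (multiplying its information
    by a continuous weight in [0, 1)), query the black box, and regress
    the outputs on the perturbation vectors."""
    rng = np.random.default_rng(seed)
    X = np.ones((n_samples, n_edges))          # weight 1 = unperturbed
    mask = rng.random((n_samples, n_edges)) < p
    X[mask] = rng.random(mask.sum())           # continuous edge weights
    y = np.array([f(x) for x in X])
    Xc = X - X.mean(axis=0)                    # center before fitting
    return lasso_cd(Xc, y - y.mean())

# hypothetical black box: edges 0 and 2 matter, edge 1 does not
f = lambda x: 3.0 * x[0] + 0.0 * x[1] + 2.0 * x[2]
imp = lime_edge_importance(f, n_edges=3)
print(imp)   # edge 0 > edge 2 > edge 1 (≈ 0)
```

The L1 penalty shrinks the irrelevant edge's coefficient toward zero, which is exactly why Lasso is preferred here over plain least squares.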

III-B Saliency Map-Based Method

In this section, we propose an extension of Saliency Maps [11] to calculate each edge's importance. Here, we consider GGNN [3] as the model to be explained, for example, and denote the model's output by $y$. In GGNN, the message passing operation is represented as follows:

$$h_v^{(t+1)} = \mathrm{GRU}\left(h_v^{(t)}, \sum_{u \in N(v)} W h_u^{(t)}\right) \quad (3)$$

where $W$ is a learnable matrix. As the edges can be considered as pathways through which the node information propagates, one can assume that $W h_u^{(t)}$ passes through the edge $e_{uv}$ connecting the node $u$ and the node $v$ at the $t$-th message passing phase. This operation is performed on all nodes in the graph, so the information that eventually passes through $e_{uv}$ is $W h_u^{(0)}, \ldots, W h_u^{(T-1)}$ and $W h_v^{(0)}, \ldots, W h_v^{(T-1)}$. Therefore, we consider the importance of $e_{uv}$ to be the sum of the importance of these pieces of information. Let the size of $W$ be $d$-by-$d$, the length of $h_v^{(t)}$ be $d$, and the $k$-th row of $W$ be $W_k$. Then, the importance $I(e_{uv})$ of $e_{uv}$ is calculated as follows:

$$I(e_{uv}) = \sum_{t=0}^{T-1} \sum_{k=1}^{d} \left( \left| \frac{\partial y}{\partial\, W_k h_u^{(t)}} \right| + \left| \frac{\partial y}{\partial\, W_k h_v^{(t)}} \right| \right) \quad (4)$$
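In the same spirit, a gradient-based edge score can be sketched without automatic differentiation by taking finite differences with respect to per-edge weights. This numeric stand-in replaces the analytic derivative with respect to the messages, and `f` below is a hypothetical black box over edge weights, not an actual GGNN:

```python
import numpy as np

def edge_saliency(f, n_edges, eps=1e-5):
    """Finite-difference stand-in for gradient-based edge saliency:
    |df/dw_e| evaluated at the unperturbed edge weights w = 1."""
    w0 = np.ones(n_edges)
    grads = np.empty(n_edges)
    for e in range(n_edges):
        w_plus, w_minus = w0.copy(), w0.copy()
        w_plus[e] += eps
        w_minus[e] -= eps
        # central difference approximates the partial derivative
        grads[e] = (f(w_plus) - f(w_minus)) / (2 * eps)
    return np.abs(grads)

# hypothetical black box over edge weights
f = lambda w: 2.0 * w[0] * w[1] + 0.5 * w[2]
print(edge_saliency(f, 3))   # ≈ [2.0, 2.0, 0.5]
```

Note the limitation the discussion below returns to: each edge's gradient is evaluated independently, so interactions between edges (here, between edges 0 and 1) are not explicitly modeled.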

III-C Grad-CAM-Based Method

In this section, we propose an extension of Grad-CAM [9] to calculate each edge's importance. Here, we consider the same GGNN [3] model as in the previous subsection, for example. As in the Saliency Map-based method, the importance of $e_{uv}$ can be considered as the sum of the importance of the node $u$ and the node $v$; in this method, their importance is calculated by the method in [6] and then summed. The specific method is described below. The feature vectors from $h_v^{(0)}$ to $h_v^{(T)}$ are converted to row vectors, and the feature matrix $F$ is created by stacking them; the element $F_{t,k}$ of $F$ corresponds to the $k$-th component of $h_v^{(t)}$. First, the weight $\alpha_k$ for $F$'s $k$-th column is calculated as follows:

$$\alpha_k = \frac{1}{T+1} \sum_{t=0}^{T} \frac{\partial y}{\partial F_{t,k}} \quad (5)$$

Second, let $\alpha$ be the vector whose $k$-th component is $\alpha_k$. The importance of a node is then obtained by calculating the dot product of $\alpha$ and each of its feature vectors $h_v^{(t)}$ and summing over the phases. Finally, the importance $I(e_{uv})$ of $e_{uv}$ is calculated by adding the importance of the node $u$ and the node $v$ as follows:

$$I(e_{uv}) = \sum_{t=0}^{T} \alpha^\top h_u^{(t)} + \sum_{t=0}^{T} \alpha^\top h_v^{(t)} \quad (6)$$

III-D Baseline Methods

We compare these methods with three baseline methods: GNNExplainer [14], the removal method, and the random method. In the removal method, edges are removed from the input graph one at a time, and the resulting graphs are input to the model. The importance of each edge is then calculated as the extent to which the output value decreases compared with the original value. In the random method, each edge's importance is determined randomly.
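The removal baseline can be sketched directly; `gnn_out` below is a hypothetical stand-in for a trained model's forward pass over an edge list:

```python
import numpy as np

def removal_importance(gnn_out, edges):
    """Score each edge by how much the model output drops when that edge
    is removed; requires one forward pass per edge."""
    base = gnn_out(edges)
    scores = []
    for i in range(len(edges)):
        reduced = edges[:i] + edges[i + 1:]   # graph without edge i
        scores.append(base - gnn_out(reduced))
    return np.array(scores)

# hypothetical model: output grows with the number of triangle edges present
triangle = [(0, 1), (1, 2), (2, 0)]
gnn_out = lambda es: 1.0 * sum(e in triangle for e in es) + 0.1 * len(es)

edges = [(0, 1), (1, 2), (2, 0), (3, 4)]
print(removal_importance(gnn_out, edges))   # ≈ [1.1, 1.1, 1.1, 0.1]
```

The one-forward-pass-per-edge loop is what makes this baseline expensive on large graphs, a point quantified in Table II.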

IV Experiments

IV-A Experimental Setup

We use three evaluation tasks: the synthetic test, the benzene ring test, and the removal test. In the synthetic test, we follow the setting in GNNExplainer [14] and use the dataset called BA-shapes. This is a node classification dataset that contains a randomly generated base graph with 300 nodes and 80 five-node "house"-structured network motifs attached to it. In this dataset, the nodes have no feature values; nodes in the base graph are labeled with 0, and the ones located in a "house" are labeled with 1, 2, or 3. First, a GCN [2] model is trained to predict each node's label. If this model predicts that a node located in a "house" has a label other than 0, then the ground truth of the basis of this prediction can be regarded as the "house"-structured motif. The evaluation metric is the percentage of the ground-truth edges that are included in the top five edges in terms of the importance calculated by each method (i.e., the recall rate).

In the second evaluation task, the benzene ring test, we use the QM9 dataset [7], which contains molecule graphs with atoms as nodes and chemical bonds as edges. We trained a GGNN [3] model that performs binary classification of whether a molecule is aromatic. As a molecule's aromaticity is determined only by the presence of a benzene ring in the molecule, we can define the ground truth of the explanation as the five or six edges that form the benzene ring. The evaluation metric is the percentage of the ground-truth edges that are included in the top five or six edges in terms of importance (i.e., the recall rate).

The third evaluation task, the removal test, is motivated by the evaluation of explainability methods for CNNs proposed in [5]. First, one to five edges are removed from the graph in the order of the importance obtained by each explainability method. Second, these graphs are input to the GNN model to obtain the output values. Subsequently, the number of removed edges is plotted on the horizontal axis and the decrease in the predicted value compared with the original value on the vertical axis to obtain the Area Under the Curve (AUC). The larger the AUC, the better the performance of the explainability method. Let $f_i$ be the output of the GNN model when the $i$ most important edges are removed ($f_0$ being the original output). The AUC can be calculated by the following equation:

$$\mathrm{AUC} = \sum_{i=1}^{5} \left( f_0 - f_i \right) \quad (7)$$
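Assuming a rectangle-rule reading of the area (a trapezoidal rule would be an equally plausible interpretation of "area under the curve"), the score reduces to a short helper, where `outputs[i]` is the model output after removing the i most important edges:

```python
import numpy as np

def removal_auc(outputs):
    """outputs[i] = model output with the i most-important edges removed;
    outputs[0] is the unperturbed output. Returns the area under the
    'drop vs. number of edges removed' curve (rectangle rule)."""
    f = np.asarray(outputs, dtype=float)
    drops = f[0] - f[1:]          # decrease after removing 1..k edges
    return drops.sum()

# a method whose top-ranked edges really matter produces a steep drop
good = removal_auc([1.0, 0.5, 0.3, 0.2, 0.15, 0.1])   # ≈ 3.75
weak = removal_auc([1.0, 0.95, 0.9, 0.8, 0.7, 0.6])   # ≈ 1.05
print(good > weak)   # True
```

A larger score means the explainability method ranked the truly influential edges first, which is exactly the behavior this test rewards.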

We use three datasets for the removal test: Cora [4], Coauthor [10], and Amazon [10]. Cora is a citation network dataset in which nodes are documents and edges are citation links. Coauthor is a coauthorship network dataset in which nodes are authors who are connected by an edge if they coauthored a paper. Amazon is a co-purchase graph dataset in which nodes are products and edges indicate that two goods are frequently bought together. We trained three GCN [2] models that predict each node's label, one for each of these datasets.

IV-B Results

Examples of explanation results for the benzene ring test are shown in Fig. 1. The importance of the edges forming the benzene rings is relatively high for all methods except GNNExplainer. In particular, LIME assigns high importance to the benzene ring edges only; that is, LIME can pinpoint the important edges.

                                  Synthetic test  Benzene ring test  Removal test (AUC)
                                  (accuracy)      (accuracy)
Task                              Node classif.   Graph classif.     Node classification
Model                             GCN [2]         GGNN [3]           GCN [2]
Dataset                           BA-shapes [14]  QM9 [7]            Cora [4]  Coauthor [10]  Amazon [10]
LIME [8]                          0.67
Saliency Maps [11]                0.91            0.62               1.78      1.14           0.27
Grad-CAM [9]                      0.20            0.88               0.21      0.03           0
GNNExplainer [14] (Reproduction)  0.87            0.44               0.57      0.60           0.08
GNNExplainer [14] (Reported)                      -                  -         -              -
Removal                           0.80            0.90
Random                            0.15            0.36               0.18      0.02           0
TABLE I: Results of the synthetic test, benzene ring test, and removal test.
          Cora [4]  Coauthor [10]  Amazon [10]
LIME [8]  0.98 s    11.4 s         79.3 s
Removal   3.63 s    119.8 s        704.6 s
TABLE II: The average computational costs of LIME and the removal method for each dataset.
Fig. 1: Examples of explanation results for the benzene ring test: (a) LIME, (b) Grad-CAM, (c) GNNExplainer.

The results of the synthetic test, benzene ring test, and removal test are shown in Table I. In the synthetic test, GNNExplainer (Reported) yields the best performance, followed by Saliency Maps. However, in the benzene ring test, LIME shows the best performance, followed by the removal method and Grad-CAM, whereas GNNExplainer is the worst among all methods except the random method. In the removal test, LIME and the removal method perform the best, followed by Saliency Maps, whereas GNNExplainer and Grad-CAM perform worse than these methods. Thus, although GNNExplainer performs well on the synthetic dataset, it does not perform well on real-world datasets, whereas LIME is generally better than the other algorithms in our experiments.

In the removal test, the performances of LIME and the removal method are comparable, but the removal method incurs high computational costs because it removes each edge one by one. Table II shows the average computational costs of LIME and the removal method on the three datasets used for the removal test. The computational cost of the removal method is about three to ten times that of LIME. Therefore, LIME is the best explainability method in terms of both performance and computational cost.

IV-C Discussion

LIME shows the best performance among the three proposed methods in real-world situations. LIME directly perturbs several edges of the input graph at once, and can therefore take interactions between edges into account. In contrast, Grad-CAM and Saliency Maps calculate each edge's importance independently. This capability of considering the interactions between edges is likely what makes LIME's score the best.

However, in the synthetic test, GNNExplainer outperforms LIME. In the synthetic dataset, each node has no feature values. This is different from the real-world datasets, where each node has unique feature values. As mentioned in Section III, edges are the paths through which node information passes, but no meaningful information passes through the edges in the synthetic dataset. This absence of meaningful information passing through the edges would make the perturbing operation of LIME less effective.

V Conclusion

In this study, we extended explainability methods for CNNs to GNNs, i.e., LIME, Grad-CAM, and Gradient-Based Saliency Maps, to calculate each edge’s importance for the outputs. It was found that the performance of the LIME-based approach was the best in real-world situations in terms of both accuracy and computational cost.

References

  • [1] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the International Conference on Machine Learning, pp. 1263–1272. Cited by: §II-A.
  • [2] T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations, Cited by: §IV-A, §IV-A, TABLE I.
  • [3] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel (2016) Gated graph sequence neural networks. In Proceedings of the International Conference on Learning Representations, Cited by: §III-B, §III-C, §IV-A, TABLE I.
  • [4] A. K. McCallum, K. Nigam, J. Rennie, and K. Seymore (2000) Automating the construction of internet portals with machine learning. Information Retrieval 3 (2), pp. 127–163. Cited by: §IV-A, TABLE I, TABLE II.
  • [5] G. Montavon, W. Samek, and K. Müller (2018) Methods for interpreting and understanding deep neural networks. Digital Signal Processing 73, pp. 1–15. Cited by: §IV-A.
  • [6] P. E. Pope, S. Kolouri, M. Rostami, C. E. Martin, and H. Hoffmann (2019) Explainability methods for graph convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10772–10781. Cited by: §II-B, §III-C.
  • [7] R. Ramakrishnan, P. O. Dral, M. Rupp, and O. A. Von Lilienfeld (2014) Quantum chemistry structures and properties of 134 kilo molecules. Scientific data 1 (1), pp. 1–7. Cited by: §IV-A, TABLE I.
  • [8] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) ”Why should I trust you?”: explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. Cited by: §I, §II-B, §III-A, TABLE I, TABLE II.
  • [9] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626. Cited by: §I, §II-B, §III-C, TABLE I.
  • [10] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann (2018) Pitfalls of graph neural network evaluation. In Advances in Neural Information Processing Systems, Cited by: §IV-A, TABLE I, TABLE II.
  • [11] K. Simonyan, A. Vedaldi, and A. Zisserman (2014) Deep inside convolutional networks: visualising image classification models and saliency maps. In Proceedings of the International Conference on Learning Representations, Cited by: §I, §II-B, §III-B, TABLE I.
  • [12] R. Tibshirani (1996) Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological) 58 (1), pp. 267–288. Cited by: §III-A.
  • [13] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and S. Y. Philip (2020) A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32 (1), pp. 4–24. Cited by: §I.
  • [14] R. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec (2019) Gnnexplainer: generating explanations for graph neural networks. Advances in neural information processing systems 32, pp. 9240–9251. Cited by: §I, §II-B, §III-D, §IV-A, TABLE I.
  • [15] H. Yuan, H. Yu, S. Gui, and S. Ji (2020) Explainability in graph neural networks: a taxonomic survey. CoRR abs/2012.15445. External Links: 2012.15445 Cited by: §II-B.