I. Introduction
In recent years, there have been a considerable number of studies on explaining the decisions of deep learning models. While deep learning models have improved the accuracy of various tasks, they pose a challenge: it is difficult to understand the basis of their decisions. This makes it hard to use deep learning models for tasks that require explanations, such as medical image processing. Explanations are also helpful for understanding a model's behavior. For these reasons, research on understanding the rationale behind the decisions of deep learning models has been widely conducted.
GNNs are deep learning models that take graph data as inputs. In many real-world situations, data are represented in the form of graphs. For example, molecular structures can be represented as graphs whose nodes are atoms and whose edges are chemical bonds. Therefore, GNNs are becoming powerful tools that can be applied to a variety of tasks such as drug discovery. However, similar to other deep learning models, GNNs cannot present the reasoning behind their decisions.
In this study, we extend several explainability methods for CNNs to GNNs to calculate the importance of edges for the models' outputs. Reference [13] states that graph convolution in GNNs generalizes 2D convolution in CNNs, because both take a weighted sum of information from neighboring nodes/pixels. This similarity between GNNs and CNNs makes it reasonable to apply techniques developed for CNNs to GNNs. Thus, we investigate LIME [8], Gradient-Based Saliency Maps [11], and Grad-CAM [9], which are frequently used in computer vision tasks. Although the techniques examined in this study are not novel, our contribution is that we extend off-the-shelf explainability methods from computer vision to GNNs and experimentally show that LIME is the best approach. Furthermore, we found LIME to be superior to a state-of-the-art method [14].

II. Related Work

II-A. Formulation of GNNs
Although a variety of GNN methods have been proposed, most of them can be expressed in the framework of message passing [1] as follows. Let $G$ be an input graph of GNNs, $h_v^{(t)}$ be the feature vector of the node $v$ in $G$ at the $t$-th message passing phase, and $N(v)$ be the set of nodes adjacent to the node $v$. The message passing operation is expressed by the following equation:

$m_v^{(t+1)} = \sum_{u \in N(v)} M_t\left(h_v^{(t)}, h_u^{(t)}\right), \quad h_v^{(t+1)} = U_t\left(h_v^{(t)}, m_v^{(t+1)}\right) \qquad (1)$

Here, $M_t$ and $U_t$ are functions defined differently for each method: $M_t$ collects information from neighboring nodes, and $U_t$ updates the feature vector of each node based on the aggregated neighboring information. By performing these message passing operations $T$ times, a higher-order feature vector $h_v^{(T)}$ of the node $v$ can be obtained.
For a graph classification task, the feature vector for the entire graph is then calculated from each node’s feature vector by taking its summation or mean, for example. By performing the abovementioned operations, feature vectors for each node or each graph can be obtained. Finally, these feature vectors are fed into fully connected layers.
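As a concrete illustration, the message passing and readout steps above can be sketched in a few lines of NumPy. The sum aggregation, the tanh update, and the mean readout below are illustrative choices for $M_t$, $U_t$, and the graph-level pooling, not any specific model from this paper:

```python
import numpy as np

def message_passing_step(h, adj, W):
    """One phase of Eq. (1): sum aggregation (M_t) then a tanh update (U_t)."""
    new_h = {}
    for v, neighbors in adj.items():
        # M_t: collect (linearly transformed) features from adjacent nodes.
        m_v = sum((W @ h[u] for u in neighbors), np.zeros(len(h[v])))
        # U_t: update the node's own feature vector with the message.
        new_h[v] = np.tanh(h[v] + m_v)
    return new_h

def graph_readout(h):
    """Graph-level feature vector as the mean of all node feature vectors."""
    return np.mean(list(h.values()), axis=0)

# Toy triangle graph 0-1-2 with 4-dimensional node features.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
h = {v: np.ones(4) for v in adj}
W = 0.5 * np.eye(4)
for _ in range(2):            # T = 2 message passing phases
    h = message_passing_step(h, adj, W)
graph_vec = graph_readout(h)  # would then be fed into fully connected layers
```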
II-B. Explainability Methods
There has been a considerable amount of research on explaining the decisions of deep learning models. For example, Ribeiro et al. [8] proposed LIME, which can be applied to machine learning models in general, including deep learning models. Furthermore, significant research on explainability methods designed specifically for CNN models has been carried out. For example, Simonyan et al. [11] proposed Gradient-Based Saliency Maps. This method simply differentiates the output of the model with respect to each pixel and creates a heat map. Another explainability method for CNN models is Grad-CAM [9], which uses the values of the feature maps in CNN models and the derivative of the output with respect to them to calculate each pixel's importance, again producing a heat map.

In contrast to the explainability methods for CNNs, there are fewer works that explain GNN models. For example, Pope et al. [6] extended Grad-CAM to GNNs and calculated the importance of each node for the output of the GNN model. Note that this approach is designed to explain the contribution of the nodes only. GNNExplainer [14] is an explainability method for GNNs that identifies which edges and which dimensions of the node features are responsible for a GNN model's outputs. In addition, there are several other approaches to explainability for edges in graphs; please refer to [15] for a comprehensive survey of this field.
III. Proposed Methods
In this section, we extend the explainability methods for CNNs to GNNs to predict which edges are important for GNN decisions. We define an important edge as “an edge that contributes to the increase of the GNN model’s output.”
III-A. LIME-Based Method
First, we propose a LIME-based [8] explainability method for GNNs. In the message passing operation described in the previous section, each node gathers the features of its adjacent nodes; thus, the edges are the paths through which node features pass. Accordingly, we define the operation of multiplying the node features passing through a certain edge by a weight as "perturbing an edge." Note that in the original LIME method, each part of the input is either removed completely or preserved, so this perturbing operation of multiplying information by a continuous weight differs from the simple removal operation of the original LIME algorithm.
Let $m$ be the number of edges in the input graph $G$, $p$ be the probability of perturbing each edge, and $K$ be the number of samples; $p$ and $K$ are both hyperparameters. First, $K$ graphs $G_1, \dots, G_K$, in which each edge of $G$ is perturbed with probability $p$, are input to the GNN model. Then, the combinations of the vectors $z_i$ that indicate which edges were perturbed and the output values $y_i$ of the model for $G_i$ are obtained. Here, each dimension of $z_i$ corresponds to an edge of $G$ and holds the weight by which the information passing through that edge is multiplied. A linear regression model $g$ is then constructed to predict $y_i$ from $z_i$, and the importance of each edge is obtained as the coefficients of the linear regression model. As the linear regression model, we use Lasso [12], which has a regularization term that limits the number of nonzero coefficients. The loss function used for training the Lasso model is the weighted mean squared error (MSE):

$L = \frac{1}{K} \sum_{i=1}^{K} \pi_i \left(y_i - w^\top z_i\right)^2 + \lambda \|w\|_1 \qquad (2)$

where $w$ represents $g$'s coefficients, $\pi_i$ is a proximity weight that is larger for samples closer to the original graph, and $\lambda$ and the proximity kernel width are both hyperparameters.
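A minimal sketch of this procedure, using scikit-learn's Lasso as the surrogate model. The toy `gnn_output` function and the exponential proximity weighting below are illustrative assumptions, not the paper's exact model or kernel:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, K, p = 6, 200, 0.5          # edges, number of samples, perturbation prob.

def gnn_output(z):
    """Stand-in black-box model: only edges 0 and 1 affect the output."""
    return 2.0 * z[0] + 1.0 * z[1]

# Each edge keeps weight 1.0, or is perturbed (weight in [0, 1)) with prob. p.
Z = np.where(rng.random((K, m)) < p, rng.random((K, m)), 1.0)
y = np.array([gnn_output(z) for z in Z])

# Proximity weights: samples closer to the unperturbed graph count more.
w = np.exp(-np.sum((1.0 - Z) ** 2, axis=1))

surrogate = Lasso(alpha=0.01)
surrogate.fit(Z, y, sample_weight=w)
importance = surrogate.coef_   # one importance value per edge
```

Irrelevant edges receive (near-)zero coefficients thanks to the L1 penalty, which is the role of the regularization term in Eq. (2).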
III-B. Saliency Map-Based Method
In this section, we propose an extension of Saliency Maps [11] to calculate each edge's importance. Here, we consider GGNN [3] as an example of the model to be explained, and denote the model's output by $S$. In GGNN, the message passing operation is represented as follows:

$m_v^{(t+1)} = \sum_{u \in N(v)} A h_u^{(t)}, \quad h_v^{(t+1)} = \mathrm{GRU}\left(h_v^{(t)}, m_v^{(t+1)}\right) \qquad (3)$

where $A$ is a learnable matrix. As the edges can be considered as pathways through which the node information propagates, one can assume that $A h_u^{(t)}$ passes through the edge $e_{uv}$ connecting the node $u$ and the node $v$ at the $t$-th message passing phase. This operation is performed on all nodes in the graph, so the information that eventually passes through $e_{uv}$ is $\sum_t A h_u^{(t)}$ and $\sum_t A h_v^{(t)}$. Therefore, we consider the importance of $e_{uv}$ to be the sum of the importance of the information $A h_u^{(t)}$ and $A h_v^{(t)}$ over all phases. Let the size of $A$ be $d'$ by $d$, the length of $h_v^{(t)}$ be $d$, and the $k$-th row component of $A h_v^{(t)}$ be $(A h_v^{(t)})_k$. Then, the importance $I_{uv}$ of $e_{uv}$ is calculated as follows:

$I_{uv} = \sum_{t} \sum_{k=1}^{d'} \left( \left| \frac{\partial S}{\partial (A h_u^{(t)})_k} \right| + \left| \frac{\partial S}{\partial (A h_v^{(t)})_k} \right| \right) \qquad (4)$
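As a rough numerical analogue of this idea, an edge's score can be approximated by finite differences of the output with respect to the message components crossing it. The toy `model_output` and the message vectors below are illustrative assumptions; a real implementation would obtain the same derivatives via automatic differentiation:

```python
import numpy as np

def model_output(msg):
    """Toy differentiable output as a function of per-edge message vectors."""
    return float(np.sum(msg[0] ** 2) + 0.1 * np.sum(msg[1]))

msg = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]  # messages on two edges
eps = 1e-5
importance = []
for e in range(len(msg)):
    grad_sum = 0.0
    for k in range(msg[e].size):
        bumped = [m.copy() for m in msg]
        bumped[e][k] += eps
        # |dS / d(message component)| via a forward difference.
        grad_sum += abs((model_output(bumped) - model_output(msg)) / eps)
    importance.append(grad_sum)  # sum of gradient magnitudes per edge
```

Edge 0, whose messages influence the output quadratically, receives a much larger score than edge 1.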
III-C. Grad-CAM-Based Method
In this section, we propose an extension of Grad-CAM [9] to calculate each edge's importance. Here, we consider the same GGNN [3] model as in the previous subsection as an example. As in the Saliency Map-based method, the importance of $e_{uv}$ can be considered as the sum of the importance of $A h_u^{(t)}$ and $A h_v^{(t)}$. In this method, the importance of these vectors is calculated by the method in [6] and then summed. The specific method is described below. The feature vectors $A h_1^{(t)}$ to $A h_n^{(t)}$, where $n$ is the number of nodes, are converted to row vectors, and a feature matrix $P^{(t)}$ is created by stacking them; the $(v, k)$ element of $P^{(t)}$ corresponds to the $k$-th row component of $A h_v^{(t)}$. First, the weight $\alpha_k^{(t)}$ for $P^{(t)}$'s $k$-th column is calculated as follows:

$\alpha_k^{(t)} = \frac{1}{n} \sum_{v=1}^{n} \frac{\partial S}{\partial P^{(t)}_{v,k}} \qquad (5)$

Second, let $\alpha^{(t)}$ be a vector whose $k$-th row component is $\alpha_k^{(t)}$. Then, the importance of $A h_v^{(t)}$ is obtained by calculating the dot product of $\alpha^{(t)}$ and $A h_v^{(t)}$. Finally, the importance $I_{uv}$ of $e_{uv}$ is calculated by adding the importance of the vectors $A h_u^{(t)}$ and $A h_v^{(t)}$ as follows:

$I_{uv} = \sum_{t} \left( \alpha^{(t)} \cdot A h_u^{(t)} + \alpha^{(t)} \cdot A h_v^{(t)} \right) \qquad (6)$
III-D. Baseline Methods
We compare these methods with three baseline methods: GNNExplainer [14], the removal method, and the random method. In the removal method, edges are removed from the input graph one by one, and the resulting graphs are input to the model; the importance of an edge is the extent to which the output value decreases compared to the original value. In the random method, each edge's importance is determined randomly.
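The removal baseline can be sketched in a few lines. The `gnn_output` function below is a toy stand-in for the trained GNN, used only to make the loop concrete:

```python
def gnn_output(edges):
    """Toy model: output counts the edges incident to node 0."""
    return sum(1.0 for (u, v) in edges if 0 in (u, v))

edges = [(0, 1), (0, 2), (1, 2)]
full = gnn_output(edges)
importance = {}
for e in edges:
    reduced = [x for x in edges if x != e]       # drop one edge at a time
    importance[e] = full - gnn_output(reduced)   # drop in the output value
```

Note that each edge requires one extra forward pass, which is the source of this baseline's computational cost.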
IV. Experiments

IV-A. Experimental Setup
We use three evaluation tasks: the synthetic test, the benzene ring test, and the removal test. In the synthetic test, we follow the setting in GNNExplainer [14] and use the dataset called BA-Shapes. This is a node classification dataset that contains a randomly generated base graph with 300 nodes and 80 five-node "house"-structured network motifs attached to it. In this dataset, each node has no feature values; nodes in the base graph are labeled with 0, and the ones located in the "houses" are labeled with 1, 2, or 3. First, a GCN [2] model is trained to predict each node's label. If this model predicts that a node located in a "house" has a label other than 0, then the ground truth of the basis of this prediction can be regarded as the "house"-structured motif. The evaluation metric is the percentage of the ground-truth edges that are included in the top five edges in terms of the importance calculated by each method (i.e., the recall rate).
In the second evaluation task, the benzene ring test, we use the QM9 dataset [7], which contains molecular graphs with atoms as nodes and chemical bonds as edges. We trained a GGNN [3] model that performs binary classification of whether a molecule is aromatic. As a molecule being aromatic is determined only by the presence of a benzene ring in the molecule, we can take the ground truth of the explanation to be the five or six edges that form the benzene ring. The evaluation metric is the percentage of the ground-truth edges that are included in the top five or six edges in terms of importance (i.e., the recall rate).
The third evaluation task, the removal test, is motivated by the evaluation tasks of explainability methods for CNNs proposed in [5]. First, one to five edges are removed from the graph in descending order of the importance obtained by each explainability method. Second, these graphs are input to the GNN model to obtain the output values. Subsequently, the number of removed edges is plotted on the horizontal axis, and the decrease in the predicted value compared to the original value is plotted on the vertical axis to obtain the Area Under the Curve (AUC). The larger the AUC, the better the performance of the explainability method. Let $f_k$ be the output of the GNN model when the $k$ most important edges are removed. The AUC can be calculated by the following equation:

$\mathrm{AUC} = \sum_{k=1}^{5} \frac{(f_0 - f_{k-1}) + (f_0 - f_k)}{2} \qquad (7)$
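One natural reading of this procedure is a trapezoidal sum over the output drops. A sketch, where the output values `f[k]` after removing k edges are made-up numbers for illustration:

```python
# f[k] is the model output after removing the k most important edges;
# f[0] is the unperturbed output. These values are illustrative only.
f = [1.00, 0.70, 0.55, 0.40, 0.30, 0.25]

drops = [f[0] - fk for fk in f]   # decrease vs. the original output
# Trapezoidal rule over the (k, drop) curve, k = 0..5.
auc = sum((drops[k - 1] + drops[k]) / 2.0 for k in range(1, len(drops)))
```

A method that ranks truly influential edges first produces larger drops earlier, and hence a larger AUC.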
We use three datasets for the removal test: Cora [4], Coauthor [10], and Amazon [10]. Cora is a citation network dataset in which nodes are documents and edges are citation links. Coauthor is a co-authorship network dataset in which nodes are authors that are connected by an edge if they co-authored a paper. Amazon is a co-purchase graph dataset in which nodes are products and edges indicate that two goods are frequently bought together. We trained three GCN [2] models that predict each node's label on these three datasets, respectively.

IV-B. Results
Examples of explanation results for the benzene ring test are shown in Fig. 1, which shows that the importance of the edges forming the benzene rings is relatively high for all methods except GNNExplainer. In particular, LIME assigns high importance only to the benzene ring edges; that is, LIME can pinpoint the important edges.
TABLE I

|                                  | Synthetic test (accuracy) | Benzene ring test (accuracy) | Removal test (AUC) |               |             |
| Task                             | Node classification       | Graph classification         | Node classification |              |             |
| Model                            | GCN [2]                   | GGNN [3]                     | GCN [2]     |               |             |
| Dataset                          | BA-Shapes [14]            | QM9 [7]                      | Cora [4]    | Coauthor [10] | Amazon [10] |
| LIME [8]                         | 0.67                      |                              |             |               |             |
| Saliency Maps [11]               | 0.91                      | 0.62                         | 1.78        | 1.14          | 0.27        |
| Grad-CAM [9]                     | 0.20                      | 0.88                         | 0.21        | 0.03          | 0           |
| GNNExplainer [14] (Reproduction) | 0.87                      | 0.44                         | 0.57        | 0.60          | 0.08        |
| GNNExplainer [14] (Reported)     |                           |                              |             |               |             |
| Removal                          | 0.80                      | 0.90                         |             |               |             |
| Random                           | 0.15                      | 0.36                         | 0.18        | 0.02          | 0           |
TABLE II

|          | Cora [4] | Coauthor [10] | Amazon [10] |
| LIME [8] | 0.98 s   | 11.4 s        | 79.3 s      |
| Removal  | 3.63 s   | 119.8 s       | 704.6 s     |
The results of the synthetic test, benzene ring test, and removal test are shown in Table I. In the synthetic test, GNNExplainer (Reported) yields the best performance, followed by Saliency Maps. However, in the benzene ring test, LIME shows the best performance, followed by the removal method and Grad-CAM, whereas GNNExplainer is the worst among all methods except the random method. In the removal test, LIME and the removal method perform best, followed by Saliency Maps, whereas GNNExplainer and Grad-CAM lag behind these methods. Although GNNExplainer performs well on the synthetic dataset, it does not perform well on real-world datasets. In contrast, LIME is generally better than the other algorithms in our experiments.
In the removal test, the performances of LIME and the removal method are comparable, but the removal method incurs a high computational cost because it removes each edge one by one. Table II shows the average computational costs of LIME and the removal method on the three datasets used for the removal test. The computational cost of the removal method is about three to ten times that of LIME. Therefore, LIME is the best explainability method in terms of both performance and computational cost.
IV-C. Discussion
LIME has the best performance among the three proposed methods in real-world situations. LIME directly perturbs several edges of the input graph simultaneously; therefore, it can take interactions between edges into account. In contrast, Grad-CAM and Saliency Maps calculate each edge's importance independently. This ability to consider interactions between edges would explain why LIME scores best.

However, in the synthetic test, GNNExplainer outperforms LIME. In the synthetic dataset, each node has no feature values, unlike the real-world datasets, where each node has unique feature values. As mentioned in Section III, edges are the paths through which node information passes, but no meaningful information passes through the edges in the synthetic dataset. This absence of meaningful information passing through the edges would make the perturbing operation of LIME less effective.
V. Conclusion
In this study, we extended explainability methods for CNNs, namely LIME, Grad-CAM, and Gradient-Based Saliency Maps, to GNNs to calculate each edge's importance for the models' outputs. We found that the LIME-based approach performed best in real-world situations in terms of both accuracy and computational cost.
References
 [1] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the International Conference on Machine Learning, pp. 1263–1272.
 [2] T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations.
 [3] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel (2016) Gated graph sequence neural networks. In Proceedings of the International Conference on Learning Representations.
 [4] A. K. McCallum, K. Nigam, J. Rennie, and K. Seymore (2000) Automating the construction of internet portals with machine learning. Information Retrieval 3 (2), pp. 127–163.
 [5] G. Montavon, W. Samek, and K.-R. Müller (2018) Methods for interpreting and understanding deep neural networks. Digital Signal Processing 73, pp. 1–15.
 [6] P. E. Pope, S. Kolouri, M. Rostami, C. E. Martin, and H. Hoffmann (2019) Explainability methods for graph convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10772–10781.
 [7] R. Ramakrishnan, P. O. Dral, M. Rupp, and O. A. von Lilienfeld (2014) Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data 1 (1), pp. 1–7.
 [8] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144.
 [9] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626.
 [10] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann (2018) Pitfalls of graph neural network evaluation. In Advances in Neural Information Processing Systems.
 [11] K. Simonyan, A. Vedaldi, and A. Zisserman (2014) Deep inside convolutional networks: Visualising image classification models and saliency maps. In Proceedings of the International Conference on Learning Representations.
 [12] R. Tibshirani (1996) Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological) 58 (1), pp. 267–288.
 [13] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu (2020) A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32 (1), pp. 4–24.
 [14] Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec (2019) GNNExplainer: Generating explanations for graph neural networks. In Advances in Neural Information Processing Systems 32, pp. 9240–9251.
 [15] H. Yuan, H. Yu, S. Gui, and S. Ji (2020) Explainability in graph neural networks: A taxonomic survey. arXiv preprint arXiv:2012.15445.