Explaining Graph-level Predictions with Communication Structure-Aware Cooperative Games

01/28/2022
by   Shichang Zhang, et al.

Explaining predictions made by machine learning models is important and has attracted increased interest. The Shapley value from cooperative game theory has been proposed as a prime approach to compute feature importance towards predictions, especially for images, text, tabular data, and recently graph neural networks (GNNs) on graphs. In this work, we revisit the appropriateness of the Shapley value for graph explanation, where the task is to identify the most important subgraph and constituent nodes for graph-level predictions. We argue that the Shapley value is a non-ideal choice for graph data because it is by definition not structure-aware. We propose a Graph Structure-aware eXplanation (GStarX) method that leverages critical graph structure information to improve the explanation. Specifically, we propose a scoring function based on a new structure-aware value from cooperative game theory called the HN value. When used to score node importance, the HN value utilizes graph structures to attribute cooperation surplus between neighbor nodes, resembling message passing in GNNs, so that node importance scores reflect not only node feature importance but also structural roles. We demonstrate that GStarX produces qualitatively more intuitive explanations and quantitatively improves over strong baselines on chemical graph property prediction and text graph sentiment classification.


1 Introduction

Explainability is crucial for complex machine learning (ML) models in sensitive applications, helping establish user trust and providing insights for potential improvements. Many efforts focus on explaining models on images, text, and tabular data. In contrast, the explainability of models on graph data remains underexplored. Since explainability can be especially critical for graph tasks like drug discovery, and interest in deep graph models is growing rapidly, further investigation of graph explainability is warranted. In this work, we focus on graph neural networks (GNNs) as the target models, given their popularity and widespread use.

In ML explainability, the Shapley value (shapley) has been proposed as a "fair" scoring function for computing feature importance. Originating in cooperative game theory, many values, including the Shapley value, have been proposed for allocating a total payoff to each player in a game. When used for scoring feature importance of a data instance, the model prediction is treated as the total payoff and the features are considered as players. In particular, for an instance with features x = (x_1, ..., x_d), the Shapley value of its i-th feature x_i is computed by aggregating the marginal contributions of x_i to each possible coalition S of the other features, i.e., S ⊆ {x_1, ..., x_d} ∖ {x_i}. The marginal contribution of x_i to S is the difference between the model outputs on the feature sets S ∪ {x_i} and S, respectively. The Shapley value is widely used for images, text, and tabular data, where the features are pixels, words, and attributes (shapley_regression; SHAP).

Very recently, the Shapley value has been extended to explain GNNs on graph data. The idea is to do feature importance scoring as above, where the features are nodes (subgraphx) or a supernode covering a subgraph (subgraphx). We argue that the Shapley value is a non-ideal choice for node importance scoring when explaining GNNs, because its coalition construction is not structure-aware (details in Section 3.3). Using the notation above, when considering interactions between features x_i and x_j, the Shapley value by definition assumes no structural relationship between them, even when they are nodes of an existing graph. The graph structure generally contains critical information about the prediction task, is the main factor distinguishing graph from non-graph ML problems, and is crucial to the success of modern GNNs. Therefore, we consider properly leveraging the structure with a better scoring function, resulting in more accurate and intuitive explanations.

Present Work. In light of these considerations, we propose a new Graph Structure-aware eXplanation (GStarX) method, where we construct a structure-aware node importance scoring function based on the HN value (hn_value) from cooperative games with communication structures. Recall that GNNs make predictions via a message passing mechanism, during which node representations are updated by collecting messages from their neighbors. Message passing aggregates both node feature information and graph structure information, resulting in powerful, structure-aware models for graph-level predictions (gnn_structure_count). The HN value shares a similar idea with message passing by collecting surplus payoffs generated from collaboration between neighboring players, i.e., nodes on graphs (details in Section 5.1). Therefore, when the HN value is used as a scoring function to explain node importance, it captures not only interactions between node features, as the Shapley value does, but also the structural relationships between nodes. As a result, explanations generated by GStarX have higher fidelity. We demonstrate the superiority of GStarX over strong baselines for explaining GNNs on chemical graph property prediction, text graph sentiment classification, and synthetic subgraph detection tasks, both qualitatively and quantitatively.

2 Related Work

GNN Explanation on Graph Data aims at finding an explanation for GNN predictions on a given graph, usually in terms of a subgraph induced by important nodes or edges. According to the taxonomy in (taxonomy), many state-of-the-art methods work by scoring nodes or edges and are thus similar to this work. For example, the scoring function of GNNExplainer (gnnexplainer) is the mutual information between a masked graph and the prediction on the original graph, where a soft mask on edges and node features is generated by direct parameter learning. PGExplainer (pgexplainer) uses the same scoring function but generates a discrete mask on edges by training an edge mask predictor. SubgraphX (subgraphx) uses the Shapley value as the scoring function on subgraphs selected by Monte Carlo Tree Search (MCTS), while GraphSVX (graphsvx) uses a least-squares approximation of the Shapley value to score nodes and node features. While SubgraphX and GraphSVX were shown to perform better than prior alternatives, as we show in Section 4, the Shapley value they approximate is a non-ideal choice because it is not structure-aware. Although SubgraphX and GraphSVX use k-hop subgraphs and thus technically use the graph structure, we discuss why this approach has limitations in achieving structure-awareness in Appendix LABEL:app:cutoff. There are many other GNN explanation methods (cam; lrp; pgm_explainer); we defer their details to Appendix B given their lesser relevance.

Cooperative Game Theory originally studies how to allocate payoffs among a set of players in a cooperative game. Recently, ideas from this domain have been successfully used in feature importance scoring for ML model explanation (shapley_regression; shapley_sampling; SHAP). When used for explanation, the players of the game differ across applications, e.g. pixels for images, words for text, and attributes for tabular data. The value of the game is allocated to the features as importance scores that explain the model. The vast majority of works in this line, like the ones cited above, treat the Shapley value (shapley) as the only choice. In fact, cooperative game theory offers many other values with different properties that are used in different situations. However, to the best of our knowledge, only two works from the broader ML community have mentioned these other values, and neither is directly comparable to our case of explaining graph data: core studies the core value for data valuation, while l_and_c mentions the Myerson value (myerson) in the context of proposing a connected Shapley (C-Shapley) value for explaining sequence data. In this work, we follow the cooperative game theory approach to explain models on graph data using the HN value (hn_value). We show it is a better choice than the Shapley value as it is structure-aware. The C-Shapley value shares a similar idea of structure-awareness but only works on sequence data. A detailed discussion of the Myerson value and the C-Shapley value can be found in Appendix LABEL:app:myerson_and_c_shapley.

3 Preliminaries

3.1 Graph Neural Networks

Consider a graph with n (feature-enriched) nodes and m edges, denoted as G = (V, X, A), where V denotes the node set, X ∈ R^{n×d} denotes the d-dimensional features of the nodes in V, and A ∈ {0, 1}^{n×n} denotes the adjacency matrix specifying structural relationships between nodes. GNNs are state-of-the-art (SOTA) ML models for graph predictions. They make predictions by learning graph representations via the message-passing mechanism, where the representation of each node i, denoted as h_i, is updated by aggregating messages from its neighbors N(i). Message passing is applied iteratively to all nodes in V, so each node can collect messages from its multi-hop neighbors and produce structure-aware representations (gnn_structure_count). Given h_i^{(l-1)}, at the l-th iteration, h_i^{(l)} is obtained by collecting the (l-1)-th representations of the neighbors of node i as messages and aggregating them together with its own (l-1)-th representation via an AGG operation, e.g. summation:

h_i^{(l)} = \mathrm{AGG}\big( h_i^{(l-1)}, \{ h_j^{(l-1)} : j \in N(i) \} \big)    (1)
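As a concrete, deliberately simplified illustration of Equation 1, the sketch below implements one round of sum-aggregation message passing with plain NumPy. Real GNN layers additionally apply learned weight matrices and nonlinearities, which are omitted here.

```python
import numpy as np

def message_passing_layer(H, A):
    """One round of Equation 1 with summation as AGG: every node adds the
    current representations of its neighbors to its own representation.
    H: (n, d) node representations, A: (n, n) binary adjacency matrix."""
    return H + A @ H

# Stacking L such layers lets node i aggregate information from its L-hop
# neighborhood, which is what makes the final representations structure-aware.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # a 3-node path graph 0-1-2
print(message_passing_layer(H, A))
```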

3.2 Explanation via Feature Importance Scoring

Feature importance scoring is a general approach for explaining ML models (SHAP; general-scoring1). For instance-level explanation of a classification model, the input is a trained model f and a data instance consisting of features x and label y. The output, the so-called explanation, is either a score vector defined on the features or a set of prominent features selected based on the scores. Features here may refer to pixels of images, words of text, attributes of tabular data, or nodes of graphs.

For GNN explanation on graph data, the data instance is a graph G. Given a trained GNN f that outputs a probability vector f(G) (summing to one) containing the probability of G belonging to each class, we consider selecting the subgraph that maximizes a given evaluation metric Eval as the explanation, i.e.

G_S^* = \arg\max_{G_S \subseteq G,\ |G_S| \le B} \mathrm{Eval}(f, G, G_S),    (2)

where G_S is a subgraph of G and B is the budget constraint that enforces a concise explanation. Eval should measure how faithful G_S is to G with respect to the prediction made by f. For example, with ŷ as the predicted class of G, Eval can measure the prediction difference between G and G with G_S removed, i.e. f(G)_ŷ − f(G ∖ G_S)_ŷ.

In practice, since the number of subgraphs is combinatorial in the number of nodes, the objective is often relaxed to first finding a set of important nodes or edges and then inducing the subgraph (gnnexplainer; pgexplainer). A more tractable objective is to find the optimal set of nodes S* (a similar objective can be defined over all edges; we choose to define it over nodes because nodes usually contain richer features than edges and are more flexible, an advantage that will be made clear in Section 6.4), given by

S^* = \arg\max_{S \subseteq V,\ |S| \le B} \sum_{i \in S} \mathrm{Score}(f, G, i)    (3)

Several methods differ in how they define Score, and doing so is often non-trivial. For example, a simple Score function may evaluate each node directly on its own features, e.g. Score(f, G, i) = f(x_i)_ŷ. However, this Score function misses interactions between nodes, and for a GNN f, feeding a single node corresponds to the trivial case of no message passing, i.e. only a feed-forward network applied to x_i. Another possible choice is to apply Eval directly as the Score function, for example Score(f, G, i) = f(G)_ŷ − f(G ∖ {i})_ŷ. However, this again fails to capture interactions between nodes; for example, two nodes may be important together, and their contribution to the prediction can only be observed when they are missing simultaneously. Given this non-triviality in score definition, cooperative games are studied to derive more reasonable Score functions.
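The interaction failure of the leave-one-out choice above can be made concrete with a tiny hypothetical sketch; the `evaluate` function below is a stand-in for Eval on the graph with a given node set removed, not any model from this paper.

```python
def leave_one_out_scores(nodes, evaluate):
    """Score each node by how much the prediction drops when it alone is removed."""
    full = evaluate(set())
    return {i: full - evaluate({i}) for i in nodes}

# Redundancy failure mode: the (toy) prediction stays at 1.0 as long as node 0
# OR node 1 is present, so removing either one alone changes nothing and both
# receive a score of 0, even though together they fully determine the output.
evaluate = lambda removed: 0.0 if {0, 1} <= removed else 1.0
print(leave_one_out_scores({0, 1, 2}, evaluate))  # {0: 0.0, 1: 0.0, 2: 0.0}
```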

Figure 1: Using structure-aware values (like HN) for graph-level explanation offers advantages over non-structure-aware values (like Shapley). (a) Synthetic graph (left): the Shapley value assigns weights to coalitions based only on their size, while the HN value gives 0 weight to coalitions whose nodes are disconnected. (b) Text graph (middle): for a sentence classified as positive, the "not" and "good" coalition shouldn't be considered when the two words are not connected by "bad". (c) Chemical graph (right): for a chemical graph with the mutagenic functional group -NO2, the importance of the atom N (node 0) is better recognized if decided locally within the functional group.

3.3 Cooperative Games

A Cooperative Game, denoted by (N, v), is defined by a set of players N = {1, ..., n} and a characteristic function v : 2^N → R. v takes a subset of players S ⊆ N, called a coalition, and maps it to a payoff v(S), where v(∅) = 0. A solution function ψ is a function defined on all cooperative games that maps each given game to a vector ψ(N, v) ∈ R^n. The vector ψ(N, v), called a solution, represents a certain allocation of the total payoff generated by all players to each individual, with the i-th coordinate ψ_i(N, v) being the payoff attributed to player i. ψ is also called the "value" of the game when it satisfies certain axioms, and different values have been proposed to name solutions with different properties (shapley; core_econ).
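As a concrete toy instance of these definitions (entirely hypothetical numbers, reused in the Shapley sketch further below), a three-player game can simply be written down as a lookup table over coalitions:

```python
# A toy cooperative game (N, v): three players and a characteristic function
# tabulated over all coalitions, with v(empty set) = 0.
players = [1, 2, 3]
payoffs = {
    frozenset(): 0.0,
    frozenset({1}): 1.0, frozenset({2}): 1.0, frozenset({3}): 0.0,
    frozenset({1, 2}): 3.0, frozenset({1, 3}): 1.0, frozenset({2, 3}): 1.0,
    frozenset({1, 2, 3}): 3.0,
}
v = lambda S: payoffs[frozenset(S)]  # the characteristic function as a callable
```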

The Shapley Value is one popular solution of a cooperative game. The main idea is to assign each player a "fair" share of the total payoff by considering all possible player interactions. For example, when player i cooperates with a coalition of players S ⊆ N ∖ {i}, the total payoff v(S ∪ {i}) may be very different from v(S) + v({i}) because of i's interactions with the players in S. We thus define the marginal contribution of player i to coalition S as m_i(S) = v(S ∪ {i}) − v(S). The formula of the Shapley value for player i is shown in Equation 5, where marginal contributions to all possible coalitions are aggregated. We can see from the equivalent form in Equation 4 that the aggregation weights are first uniformly distributed among coalition sizes (outer average), then uniformly distributed among all coalitions with the same size (inner average).

\phi_i(N, v) = \frac{1}{n} \sum_{k=0}^{n-1} \binom{n-1}{k}^{-1} \sum_{S \subseteq N \setminus \{i\},\, |S| = k} m_i(S)    (4)

\phi_i(N, v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\, m_i(S)    (5)
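A minimal reference implementation of Equation 5 is sketched below. It enumerates every coalition, so it is exponential in the number of players and only meant for toy games such as the one above.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values (Equation 5): enumerate all coalitions S of the other
    players and aggregate the weighted marginal contributions v(S + {i}) - v(S)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

print(shapley_values(players, v))  # reuses the toy game defined above
```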

Games with Communication Structures. Although the Shapley value is widely used for many cooperative games, its assumption of fully flexible cooperation between all players may not always be realistic. In particular, some coalitions may be preferred over others, and some coalitions may even be impossible due to limited communication among players. Thus, myerson uses a graph g as the communication structure between players to determine how likely they are to cooperate with each other. A game with a communication structure is defined by a triple (N, v, g), with N being the node set of g, and it corresponds to the more practical situation where cooperation preferences are available from prior knowledge. Several values with different properties have been proposed for games with communication structures (myerson; position_value; hamiache_value; d_myerson), including the HN value (hn_value) used in our work.

4 Motivation

Our goal is to propose a proper Score function for solving the explanation problem in Equation 3. As we discussed in Section 3.2, this is a non-trivial task that requires the Score function to leverage (sub)graph-level information. The Shapley value, which assigns a score to each node i by considering its interactions with the other nodes, has been proposed to serve this role (subgraphx; graphsvx). However, it is defined on games of the form (N, v) and has no structure-awareness. In contrast, the values defined on games with communication structures (N, v, g) are naturally structure-aware, as they utilize the graph g.

The Shapley value is defined on a game (N, v) and assumes flexible cooperation between players and a rather uniform distribution of coalition importance. Notice that its aggregation weight for a marginal contribution m_i(S) depends only on the size of S (see the discussion of Equation 4 in Section 3.3) and is completely agnostic to any graph structure. If a graph g is given and the game is defined by (N, v, g), the Shapley value weights simply overlook it. Similar to the Shapley value, structure-aware values on (N, v, g) can also be interpreted as a weighted aggregation of marginal contributions to different coalitions, but with more reasonable weights. Although different solutions have their own nuances in weight adjustments (hamiache_value; hn_value), they share two key properties: (1) the weight is 0 if i and S are disconnected in g, because they are interpreted as players without communication channels (myerson), and (2) the weight is affected by the nature of the connections between i and S, because communication between close nodes and between far-away nodes can cost differently. We give several examples demonstrating the challenges of the Shapley value and motivating structure-aware values.

Toy Example. We take the HN value (formal definition in Section 5.1) as an example and compare its aggregation weights to the Shapley value on the toy graph in Figure 1(a). When computing the value of node 0, both values can be interpreted as aggregating the marginal contributions of node 0 to the 4 coalitions shown in the rows of example (a). The Shapley value first assigns weights uniformly across coalition sizes 0, 1, and 2, i.e. 1/3 each, and then splits the weight of the size-1 group uniformly, i.e. 1/6 for each of {1} and {2}. On the other hand, the HN value assigns weight 0 to the marginal contribution m_0({2}), because nodes 0 and 2 are disconnected in the coalition {0, 2} and are treated as two independent components that shouldn't interact (property (1)). The interaction between nodes 0 and 2 is instead captured in the {1, 2} case, when 0 and 2 are connected through the bridging node 1. This case is also downweighted relative to its Shapley weight by the HN value, as node 2 is relatively far from node 0 (property (2)).

Practical Example. The nice properties of structure-aware values also help explain real graph tasks. The example in Figure 1(b) is from GraphSST2 (dataset description in Section 6.1), where the graph for sentiment classification is constructed from the sentence "is still quite good-natured and not a bad way to spend an hour", with edges generated by the Biaffine parser (parser). Assume a model correctly classifies it as positive. Intuitively, "good" and "not a bad" are central to the human explanation. In a Shapley value context, to evaluate the importance of the word "good", the coalition "not good" will contribute negatively and diminish the positive importance score of "good", despite the two words lacking any direct connection. A structure-aware context can instead eliminate the "not good" coalition and only consider interactions between "not" and "good" (in fact, "not" and any other word) when the bridging "bad" appears, hence better binding "not" with "bad" and improving the salience of "good".

5 GStarX: Graph Structure-Aware eXplanation

We propose GStarX, which uses a structure-aware Score function to better explain GNNs for graph predictions. We use the HN value in particular because its surplus allocation mechanism resembles GNN message passing. In this section, we first introduce the HN value from cooperative game theory (Section 5.1), then connect it to GNN message passing (Section 5.2), and finally introduce GStarX with the HN-value-based Score function (Section 5.3).

5.1 The HN Value

For any game with a communication structure (N, v, g) and a coalition S ⊆ N, we use N_g(S) to denote the union of S and its direct neighbors in g, and we use S/g to denote the node partition of S determined by the connected components of the subgraph of g induced by S (denoted as g_S). For example, in Figure 1(b), when S = {"is", "an", "hour"}, N_g(S) will be {"is", "good", "an", "hour", "spend"}, and S/g will be {{"is"}, {"an", "hour"}}. The HN value is a solution on (N, v, g). It is computed by iteratively constructing a series of new games, called the associated games (Appendix LABEL:app:associated_game), which utilize the graph structure to allocate cooperation surplus. In particular, a coalition S may cooperate with a neighboring node j ∈ N_g(S) ∖ S to create some surplus given by

\mathrm{surplus}(S, j) = v(S \cup \{j\}) - v(S) - v(\{j\})    (6)

In the HN associated game, a τ portion of such surplus is allocated to S and added to its payoff in the original game.

Definition 5.1 (HN Associated Game).

The HN associated game (N, v*_τ, g) of (N, v, g) is defined as

v^*_\tau(S) = v(S) + \tau \sum_{j \in N_g(S) \setminus S} \big[ v(S \cup \{j\}) - v(S) - v(\{j\}) \big]    if g_S is connected    (7)
v^*_\tau(S) = \sum_{T \in S/g} v^*_\tau(T)    otherwise    (8)

The HN value is computed by iteratively constructing a series of associated games until they converge to a limit game. In other words, we first construct v^{(1)} from v by surplus allocation, then construct v^{(2)} from v^{(1)} by allocating the surplus generated from the v^{(1)} payoffs, and so on. Convergence to the limit game is guaranteed, and the result is independent of τ, under mild conditions (hn_value). The HN value of each player is then uniquely determined by applying the limit game to that player, i.e. the HN value of player i is the limit payoff of the singleton {i}. We state the formal definition of the limit game and the uniqueness theorem of the HN value in Appendix LABEL:app:hn_compute.
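The sketch below is a minimal, brute-force rendering of this iteration, assuming the two-case update reconstructed in Definition 5.1 (surplus summed over individual neighboring nodes, disconnected coalitions handled component-wise). It tabulates the game on every coalition, so it is only feasible for very small graphs; the exact and Monte Carlo procedures used in practice are in Appendix LABEL:app:hn_compute.

```python
import networkx as nx
from itertools import combinations

def hn_values(g, v, tau=0.1, tol=1e-8, max_iter=1000):
    """Iterate the HN associated game until the coalition payoffs stabilize,
    then read off the singleton payoffs as the HN values."""
    nodes = list(g.nodes)
    coalitions = [frozenset(c) for k in range(1, len(nodes) + 1)
                  for c in combinations(nodes, k)]
    cur = {S: v(S) for S in coalitions}
    for _ in range(max_iter):
        nxt = {}
        # Connected coalitions: add a tau portion of the surplus with each neighbor.
        for S in coalitions:
            if nx.is_connected(g.subgraph(S)):
                nbrs = {j for i in S for j in g.neighbors(i)} - S
                surplus = sum(cur[S | {j}] - cur[S] - cur[frozenset({j})]
                              for j in nbrs)
                nxt[S] = cur[S] + tau * surplus
        # Disconnected coalitions: sum the payoffs of their connected components.
        for S in coalitions:
            if S not in nxt:
                comps = nx.connected_components(g.subgraph(S))
                nxt[S] = sum(nxt[frozenset(c)] for c in comps)
        done = max(abs(nxt[S] - cur[S]) for S in coalitions) < tol
        cur = nxt
        if done:
            break
    return {i: cur[frozenset({i})] for i in nodes}
```

The default τ here is arbitrary; convergence is only guaranteed for suitably small τ (see Appendix LABEL:app:limit_game), so the value should be chosen with the graph size in mind.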

5.2 Connecting GNNs and HN Surplus Allocation through the Message Passing Lens

Both GNN message passing (Equation 1) and the associated game surplus allocation (Equation 7) are iterative aggregation algorithms, with considerable alignment. In fact, the surplus allocation on each singleton node set {i} is exactly a message passing operation: Equation 7 becomes an instantiation of Equation 1 with summation aggregation on a scalar node value v({i}) and the neighbor set N_g({i}) ∖ {i}. The algorithms differ in that HN surplus allocation applies more broadly to coalitions S with more than one node; it treats S as a supernode when the nodes in S form a connected subgraph of g, and handles a disconnected S component-wise via Equation 8.

We illustrate the surplus allocation using a real chemical graph example. The molecule shown in Figure 1(c) is taken from MUTAG (dataset description in Section 6.1). It is known to be classified as mutagenic because of the -NO2 group (nodes 0, 1, and 2 in the diagram) (mutag). When we compute the HN value of node 0, the surpluses between node 0 and each of its neighbors, i.e. the oxygen atoms (nodes 1 and 2) and its adjacent carbon atom, are passed to node 0 as messages. They are then aggregated together with v({0}) following Equation 7 to form v*_τ({0}). For graph data, the surplus allocation approach has two advantages over the all-possible-coalition aggregation used in the Shapley value: (1) the aggregated payoff in each iteration is structure-aware, like the representations learned by GNNs (gnn_structure_count), and (2) the iterative computation preserves locality, which is a known property of graph data (graph_locality). In other words, close neighbors heavily influence each other through cooperation over many iterations, while far-away nodes have less influence on each other due to fewer interactions. In this example, since the local -NO2 piece can generate a high payoff for the mutagenicity classification, locally allocating this payoff helps us better understand the importance of the nitrogen atom and the oxygen atoms, whereas aggregating over many unnecessary coalitions with far-away carbon atoms can obscure the true contribution of these nodes. We will revisit this example in Section 6.4.

5.3 GStarX via HN Value Scoring

Structure-Aware HN-Value Score Function. In Section 3.2, we introduced the general idea of feature importance scoring and formulated the graph explanation objective as scoring nodes first and then finding the optimal node-induced subgraph (Equation 3). We now propose our HN-value-based Score function for this objective. To use the HN value as the Score, we need to define the players and the characteristic function of the game, and then apply the HN value following Equations 7 and 8. Suppose the inputs include a graph G with nodes V and label y, a GNN f that outputs a probability vector f(G), and the predicted class ŷ = argmax_c f(G)_c. We let the nodes V be the players, and let the normalized probability of the predicted class be the characteristic function v:

v(S) = f(G_S)_{\hat{y}} - \mathbb{E}_{G'}\big[ f(G')_{\hat{y}} \big]    (9)

Here G_S denotes the subgraph induced by the nodes in S. The normalization term \mathbb{E}_{G'}[ f(G')_{\hat{y}} ] is the expectation over a random variable G' representing a generic graph. In practice, we approximate it by the empirical expectation over all graphs in the dataset. The Score function is then the HN value of this game, i.e. Score(f, G, i) is the HN value of node i.
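A sketch of this characteristic function is given below. It assumes, purely for illustration, a `model` that maps a graph object to a vector of class probabilities and a `graph.induced_subgraph(S)` helper; neither is a specific library API, and the empirical baseline is precomputed over the dataset as described above.

```python
import torch

def make_characteristic_function(model, graph, baseline):
    """Build v(S) of Equation 9 (as reconstructed): the predicted-class probability
    of the node-induced subgraph, centered by an empirical baseline probability."""
    with torch.no_grad():
        probs = model(graph)                 # assumed: 1-D tensor of class probabilities
    pred_class = int(probs.argmax())

    def v(S):
        if len(S) == 0:
            return 0.0                       # v(empty set) = 0 by convention
        with torch.no_grad():
            sub = graph.induced_subgraph(S)  # assumed helper: node-induced subgraph
            return float(model(sub)[pred_class]) - baseline
    return v, pred_class
```

The returned `v` could then be handed to an HN computation such as the brute-force sketch in Section 5.1, or to the sampling-based procedures of Appendix LABEL:app:hn_compute.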

  Input: graph instance G with nodes V, trained model f, empirical expectation \mathbb{E}_{G'}[f(G')], hyperparameter τ, maximum sample size M, number of samples T, threshold t
  Get the predicted class ŷ = argmax_c f(G)_c
  Define the characteristic function v as in Equation 9
  if |V| ≤ M then
     φ ← Compute-HN-Values(V, v, g, τ)
  else
     φ ← Compute-HN-Values-MC(V, v, g, τ, M, T)
  end if
  Normalize φ̂_i ← φ_i / Σ_{j ∈ V} φ_j
  Sort φ̂ in descending order with indices i_1, ..., i_n
  Pick the smallest k such that Σ_{j=1}^{k} φ̂_{i_j} ≥ t
  Return: S* = {i_1, ..., i_k}
Algorithm 1 GStarX: Graph Structure-Aware Explanation

GStarX Explanation Generation. With the HN value as the Score function, we now state our main algorithm for explaining graph predictions with GStarX. The goal is to solve the objective in Equation 3, which can be done by first computing the scores and then selecting the top-scoring nodes. In practice, however, when applying this general algorithm to a dataset of graphs with different sizes, a fixed subgraph size budget B as in Equation 3 is not the best choice. We therefore use an adaptive threshold for each graph. To do so, we first normalize the scores to sum to one, i.e. φ̂_i = φ_i / Σ_{j ∈ V} φ_j, and then pick the smallest set of top-scoring nodes whose total normalized score surpasses a threshold t. The full GStarX algorithm for finding S* is shown in Algorithm 1. Practically, like other game-theoretic approaches, the exact computation of the HN value is infeasible when the number of players is large (see Appendix LABEL:app:hn_compute). We thus use exact computation for small graphs (the if-branch) and Monte Carlo sampling for large graphs (the else-branch). See Appendix LABEL:app:hn_compute for the detailed Compute-HN-Values and Compute-HN-Values-MC algorithms.
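The selection step at the end of Algorithm 1 is straightforward to sketch; the default threshold value and the assumption that the scores have a positive sum are illustrative choices, not prescriptions from the paper.

```python
def select_explanation_nodes(scores, threshold=0.8):
    """Adaptive-sparsity selection: normalize node scores to sum to one, then keep
    the smallest set of top-scoring nodes whose normalized scores reach the threshold."""
    total = sum(scores.values())             # assumed positive for the normalization
    normalized = {i: s / total for i, s in scores.items()}
    ranked = sorted(normalized, key=normalized.get, reverse=True)
    selected, cumulative = [], 0.0
    for i in ranked:
        selected.append(i)
        cumulative += normalized[i]
        if cumulative >= threshold:
            break
    return selected
```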

6 Experimental Evaluation

In this section, we conduct experiments to evaluate GStarX on graph datasets from different domains. We analyze the generated explanations both quantitatively and qualitatively and compare to other strong explanation methods.

6.1 Datasets

We evaluate our proposed method on the following datasets, covering synthetic, chemical, and text data. A brief description of each dataset is given below; more detailed dataset statistics can be found in Appendix A.1.

  • Chemical graph property prediction. We use MUTAG (mutag), BACE and BBBP (bbbp). All 3 datasets contain molecules, and the task is graph classification with chemical properties as labels. Each graph is a molecule (nodes are atoms, edges are bonds).

  • Text graph sentiment classification. GraphSST2 and Twitter (taxonomy) contain graphs constructed from text. Nodes denote words in a sentence, with pre-trained BERT embeddings as node features. Edges denote relationships between words generated by the Biaffine parser (parser). Each graph has a binary sentiment label, positive or negative.

  • Synthetic graph motif detection. BA2Motifs (pgexplainer) is a synthetic dataset for graph classification. Each graph includes a Barabasi-Albert (BA) graph of size 20 and one of two 5-node motifs: a house-like structure or a cycle. Node features are 10-dimensional all-1 vectors, and graphs are labeled based on which motif appears.

Figure 2: Explanations on sentences from GraphSST2. We show the explanation of one positive sentence (upper) and one negative sentence (lower). Red outlines indicate the selected subgraph explanation. GStarX identifies the sentiment words accurately compared to baselines. See Section 6.4 for a detailed qualitative discussion of these explanations.

6.2 Experiment Settings

GNN Architecture and Explanation Baselines. We evaluate GStarX by explaining a standard GCN (gcn) on all datasets in our major experiment. In later analysis, we also evaluate on GIN (gin) and GAT (gat) on certain datasets following subgraphx. All models are trained to convergence. The model hyperparameters and performance are shown in Appendix A.2. We compare with 4 strong baselines representing the state-of-the-art methods for GNN explanation: GNNExplainer (gnnexplainer), PGExplainer (pgexplainer), SubgraphX (subgraphx), and GraphSVX (graphsvx). In particular, both SubgraphX and GraphSVX use Shapley-value-based scoring functions.

Evaluation Metrics. Evaluating explanations is non-trivial due to the lack of ground truth. We follow subgraphx; taxonomy and employ Fidelity, Inverse Fidelity (Inv-Fidelity), and Sparsity as our evaluation metrics. Fidelity and Inv-Fidelity measure whether the selected nodes are faithfully important to the model prediction, by removing the selected nodes or keeping only the selected nodes, respectively. Sparsity promotes fair comparison, since including more nodes generally improves Fidelity, and explanations of different sizes are not directly comparable. Ideal explanations have high Fidelity, low Inv-Fidelity, and high Sparsity, indicating relevance and conciseness. Equations 10-12 show their formulas.

\mathrm{Fidelity} = f(G)_{\hat{y}} - f(G_{V \setminus S})_{\hat{y}}    (10)
\mathrm{Inv\text{-}Fidelity} = f(G)_{\hat{y}} - f(G_S)_{\hat{y}}    (11)
\mathrm{Sparsity} = 1 - |S| / |V|    (12)

Fidelity or Inv-Fidelity alone may be used to compare explanations. However, controlling explanations to have the same Sparsity is non-trivial. We thus summarize the three metrics in a single scalar, the "harmonic fidelity" (abbrev. H-Fidelity): we normalize Fidelity and Inv-Fidelity by Sparsity, rescale them, and take their harmonic mean; see Appendix A.3 for the formula.
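A small sketch of the three base metrics is given below, assuming the reconstructed Equations 10-12; the probabilities are the model's predicted-class outputs on the full graph, on the explanation subgraph alone, and on the graph with the explanation removed.

```python
def fidelity_metrics(prob_full, prob_keep, prob_remove, num_selected, num_nodes):
    """Fidelity, Inv-Fidelity, and Sparsity for one explained graph."""
    fidelity = prob_full - prob_remove     # drop when the explanation is removed
    inv_fidelity = prob_full - prob_keep   # drop when only the explanation is kept
    sparsity = 1.0 - num_selected / num_nodes
    return fidelity, inv_fidelity, sparsity
```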

GStarX Hyperparameters. GStarX includes three hyperparameters: the associated-game parameter τ, the maximum sample size M, and the number of samples T. In our experiments, we choose τ small enough to satisfy the convergence condition in Appendix LABEL:app:limit_game, noting that all graphs in our datasets have fewer than 200 nodes. Ideally, larger M and T are better for the Monte Carlo approximation; empirically, we found moderate values of M and T to work well.

6.3 Quantitative Results

We report the averaged test set H-Fidelity in Table 1. For each method, we conduct 8 different runs to obtain results with Sparsity ranging from 0.5 to 0.85 in 0.05 increments (Sparsity cannot be precisely controlled, hence it has minor variations across methods) and report the best H-Fidelity for each method. GStarX outperforms the others on 4/6 datasets and has the highest average. We also follow subgraphx and show the Fidelity vs. Sparsity plots for all 8 sparsity levels in Appendix A.4.

Dataset GNNExp PGExp SubgraphX GraphSVX GStarX
BA2Motifs 0.4841 0.4879 0.6050 0.6158 0.5824
BACE 0.5016 0.5127 0.5519 0.5090 0.5934
BBBP 0.4735 0.4750 0.5610 0.5219 0.5227
GraphSST2 0.4845 0.5196 0.5487 0.4766 0.5519
MUTAG 0.4745 0.4714 0.5253 0.4548 0.6171
Twitter 0.4838 0.4938 0.5494 0.4818 0.5716
Average 0.4837 0.4934 0.5569 0.5100 0.5732
Table 1: The best H-Fidelity (higher is better) over 8 different Sparsity levels for each dataset. GStarX shows the highest average H-Fidelity and performs best on 4/6 datasets.

6.4 Qualitative Studies

Figure 3: Explanations for a mutagenic molecule in MUTAG. Carbon atoms are in yellow, nitrogen atoms are in blue, and oxygen atoms are in red. Dark outlines indicate the selected subgraph explanation. We also report the Fidelity (fide), Inv-Fidelity (inv-fide), and H-Fidelity (h-fide) of each explanation. GStarX gives a significantly better explanation than the other methods in terms of these metrics. See Section 6.4 for a detailed qualitative discussion of these explanations.

We visualize explanations from GraphSST2 in Figure 2 to compare them qualitatively. We show two examples, a positive (upper) and a negative (lower) sentence. Explanations are selected with high and comparable Sparsity. We see that for both sentences, GStarX concisely chooses the important words for sentiment classification without including extraneous ones. GNNExplainer and PGExplainer choose some but not all important words, with extra neutral words appearing in their explanations as well. SubgraphX gives reasonable results, but because it can only select a connected subgraph as the explanation, it cannot cover two separate groups of important nodes with a limited budget; e.g. to cover the negative word "lameness" in the second sentence, SubgraphX needs at least 3 more nodes along the way, which would significantly decrease Sparsity while including undesirable, neutral words.

We visualize the explanations of a mutagenic molecule from MUTAG in Figure 3 for a qualitative comparison. Explanations are selected with high and comparable Sparsity. In general, explanations on chemical graphs are harder to evaluate than on text graphs as they require domain knowledge. MUTAG has been widely used as a benchmark for evaluating GNN explanation methods because human experts have shown that -NO2 is mutagenic (mutag), so it can be treated as the ground truth for measuring GNN explanations (carbon rings have also been claimed to be mutagenic by human experts, but we found this not discriminative, as there is a carbon ring in every molecule graph in MUTAG). Surprisingly, we found that GStarX selects only the oxygen atoms from the -NO2 groups as explanations, and its explanation H-Fidelity is much better than that of the other methods. Moreover, the -0.234 Inv-Fidelity of GStarX means the selected subgraph yields an even better prediction result than the original whole graph, because nodes not significant to the GNN prediction are removed. This suggests that even though human experts identify -NO2 as the source of mutagenicity, the GNN actually classifies a molecule as mutagenic when it sees many, and only, oxygen atoms. Other methods capture -NO2 to some extent, but their fidelity metrics are inferior to GStarX's. SubgraphX shows the best H-Fidelity among these baselines. However, it can only capture one -NO2 because its search algorithm requires the explanation to be connected, and its 0.526 H-Fidelity is significantly lower than GStarX's. The other baselines likewise only partially cover one or two -NO2 groups under the limited sparsity budget, and they tend to include other non-discriminative carbon atoms, so their H-Fidelity is even lower. In fact, GNNExplainer, PGExplainer, and SubgraphX can never generate explanations that include only disconnected oxygen atoms but not nitrogen atoms like GStarX does, because the former two solve the explanation problem by optimizing edges (as opposed to Equation 3), and the latter requires connectedness. We give more explanation visualizations in Appendix LABEL:app:visualization.

6.5 Model-Agnostic Explanation

GStarX makes no assumptions about the model architecture and can be applied to various GNN backbones. We use GCN for all datasets in our main experiments for consistency, and now further investigate performance on two other popular GNNs: GIN and GAT. We follow subgraphx and train GIN on MUTAG and GAT on GraphSST2 (since the full evaluation on GraphSST2 can take hours for some baselines, in this analysis we randomly select 30 graphs), and show results in Table 2. In both settings, GStarX outperforms the baselines, consistent with the results on GCN.

6.6 Efficiency Study

We follow subgraphx and study the efficiency of GStarX by explaining 50 randomly selected graphs from BBBP. We report the average run time in Table 3. Our results for the baselines are similar to those in subgraphx. GStarX is not as fast as GNNExplainer, PGExplainer, and GraphSVX, but it is more than two times faster than SubgraphX. Since explanation usually does not have strict efficiency requirements in real applications, and GStarX generates higher-quality explanations than the baselines, we believe the time complexity of GStarX is acceptable.

Dataset GNNExp PGExp SubgraphX GraphSVX GStarX
GraphSST2 0.4951 0.4918 0.5484 0.4937 0.5542
MUTAG 0.5042 0.4993 0.5264 0.5572 0.6064
Table 2: The best H-Fidelity (higher is better) over 8 different Sparsity levels. GStarX shows higher H-Fidelity for both GAT on GraphSST2 and GIN on MUTAG.
Method GNNExp PGExplainer SubgraphX GraphSVX GStarX
Time(s) 11.92 0.3 (train 720s) 75.96 3.06 31.24
Table 3: Average running time on 50 graphs in BBBP

7 Conclusion and Future Work

In summary, we study GNN explanation on graph data via feature importance scoring. In particular, we identify the challenges of existing Shapley-value-based approaches and propose GStarX to assign importance scores to each node via a structure-aware HN value, and then select the node-induced subgraph as the explanation. We demonstrate the superior performance of GStarX over strong baselines on chemical graph property prediction and text graph sentiment classification.

As there is a rich literature in cooperative game theory beyond the Shapley value, more values can potentially be used for explaining ML models. For graph data, edge-based values can potentially be applied to an edge-based analogue of the objective in Equation 3. Other values may be appropriate for data types beyond graphs. We leave these as future work.

References

Appendix A Experiment Details

A.1 Datasets

In Table 4, we provide the basic statistics of the datasets used in our experiments.

Dataset # Graphs # Test Graphs # Nodes (avg) # Edges (avg) # Features # Classes
MUTAG 188 20 17.93 19.79 7 2
BACE 1,513 152 34.01 73.72 9 2
BBBP 2,039 200 24.06 25.95 9 2
GraphSST2 70,042 1821 9.20 10.19 768 2
Twitter 6,940 692 21.10 40.20 768 3
BA2Motifs 1,000 100 25 25.48 10 2
Table 4: Dataset Statistics.

A.2 Model Architectures and Implementation

In Table 5, we provide the hyperparameters and test accuracy of the GCN models used in our main experiments. In Table 6, we provide the hyperparameters and test accuracy of the GIN and GAT models used in our analysis experiments. Most parameters follow subgraphx, with small changes to further boost test accuracy.

We run all experiments on a machine with 80 Intel(R) Xeon(R) E5-2698 v4 @ 2.20GHz CPUs and a single NVIDIA V100 GPU with 16GB RAM. Our implementations are based on Python 3.8.10, PyTorch 1.10.0, PyTorch-Geometric 1.7.1 (pyg), and DIG (dig). We adapt the GNN implementation and most baseline explainer implementations from the DIG library, except for GraphSVX, for which we use the official implementation. For the baseline hyperparameters, we closely follow the settings in subgraphx and graphsvx for a fair comparison; please refer to subgraphx Section 4.1 and graphsvx Appendix E for details.

Dataset #Layers #Hidden Pool Test Acc
BA2Motifs 3 20 mean 0.9800
BACE 3 128 max 0.8026
BBBP 3 128 max 0.8634
MUTAG 3 128 mean 0.8500
GraphSST2 3 128 max 0.8808
Twitter 3 128 max 0.6908
Table 5: GCN architecture hyperparameters and test accuracy for our main experiments.
Dataset #Layers #Hidden Pool Test Acc
GraphSST2(GAT) 3 10 ×10 max 0.8814
MUTAG(GIN) 3 128 max 1.0
Table 6: GIN and GAT architecture hyperparameters and test accuracy, corresponding to the results in Table 2. For GAT, we use 10 attention heads with 10 dimensions each, and thus 100 hidden dimensions.

A.3 Exact Formula for Evaluation Metrics

We showed the formulas of Fidelity, Inv-Fidelity, and Sparsity in Equations 10, 11, and 12. In Equations 13, 14, and 15, we show the formulas for normalized fidelity (N-Fidelity), normalized inverse fidelity (N-Inv-Fidelity), and harmonic fidelity (H-Fidelity). Both N-Fidelity and N-Inv-Fidelity are in [-1, 1]. The H-Fidelity flips N-Inv-Fidelity, rescales both values to be in [0, 1], and takes their harmonic mean.

\mathrm{N\text{-}Fidelity} = \mathrm{Fidelity} \times \mathrm{Sparsity}    (13)
\mathrm{N\text{-}Inv\text{-}Fidelity} = \mathrm{Inv\text{-}Fidelity} \times \mathrm{Sparsity}    (14)

Let u = (\mathrm{N\text{-}Fidelity} + 1)/2 and w = (1 - \mathrm{N\text{-}Inv\text{-}Fidelity})/2, then

\mathrm{H\text{-}Fidelity} = \frac{2uw}{u + w}    (15)
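For reference, the sketch below computes H-Fidelity under the reconstruction above; note that reading "normalize by Sparsity" as multiplication (Equations 13-14) is an assumption made for this illustration.

```python
def h_fidelity(fidelity, inv_fidelity, sparsity):
    """Harmonic fidelity: sparsity-weight both fidelities, rescale to [0, 1]
    (flipping Inv-Fidelity so that higher is better), then take the harmonic mean."""
    u = (fidelity * sparsity + 1.0) / 2.0        # rescaled N-Fidelity
    w = (1.0 - inv_fidelity * sparsity) / 2.0    # flipped, rescaled N-Inv-Fidelity
    return 2.0 * u * w / (u + w)
```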

A.4 Fidelity vs. Sparsity Plots

In Table 1, we reported the best H-Fidelity among 8 different sparsities for each method. We also follow subgraphx and show the Fidelity vs. Sparsity plots in Figure 4. Note that GraphSVX tends to give sparse explanations on some datasets; we still pick 8 different sparsities for it, but mostly on the higher end.

Figure 4: Fidelity (row 1), 1 - Inv-Fidelity (row 2), and H-Fidelity (row 3) vs. Sparsity on all datasets, corresponding to the results shown in Table 1. All three metrics are higher-is-better. We see that GStarX outperforms the other methods.

Appendix B More Related Work

GNN Explanation Continued

Besides the perturbation-based methods we mentioned in Section 2, there are several other types of approaches for GNN explanation. Gradient-based methods are widely used for explaining ML models on images and text; the key idea is to use gradients as approximations of input importance. Such methods, including contrastive gradient-based (CG) saliency maps, Class Activation Mapping (CAM), and gradient-weighted CAM (Grad-CAM), have been generalized to graph data in (cam). Decomposition-based methods are a popular way to explain deep NNs for images; they measure the importance of input features by decomposing the model predictions and regarding the decomposed terms as importance scores. Decomposition methods including Layer-wise Relevance Propagation (LRP) and Excitation Backpropagation (EB) have also been extended to graphs (cam; lrp). Surrogate-based methods work by locally approximating a complex model with an explainable model. Options for approximating GNNs include a linear model as in GraphLIME (graphlime), an additive feature attribution model with the Shapley value as in GraphSVX (graphsvx), and Bayesian networks as in (pgm_explainer). A comprehensive survey can be found in (taxonomy).

Appendix C Properties of The Shapley Value

The Shapley value was proposed as the unique solution of a game that satisfies the three properties shown below, i.e. efficiency, symmetry, and additivity (shapley). Together, these three properties are referred to as an axiomatic characterization of the Shapley value. The associated consistency properties introduced in Section 5.1 provide a different axiomatic characterization.

Property C.1 (Efficiency).