RelEx: A Model-Agnostic Relational Model Explainer

05/30/2020
by   Yue Zhang, et al.
Binghamton University

In recent years, considerable progress has been made on improving the interpretability of machine learning models. This is essential, as complex deep learning models with millions of parameters produce state-of-the-art results, but it can be nearly impossible to explain their predictions. While various explainability techniques have achieved impressive results, nearly all of them assume each data instance to be independent and identically distributed (iid). This excludes relational models, such as Statistical Relational Learning (SRL), and the recently popular Graph Neural Networks (GNNs), resulting in few options to explain them. One work on explaining GNNs does exist, GNN-Explainer, but it assumes access to the gradients of the model to learn explanations, which restricts both its applicability to non-differentiable relational models and its practicality. In this work, we develop RelEx, a model-agnostic relational explainer that explains black-box relational models with access only to the outputs of the black box. RelEx is able to explain any relational model, including SRL models and GNNs. We compare RelEx to the state-of-the-art relational explainer, GNN-Explainer, and to relational extensions of iid explanation models, and show that RelEx achieves comparable or better performance while remaining model-agnostic.


1 Introduction

In the last decade, significant attention has been directed toward accurately modeling non-Euclidean, graph-structured data. Relational models include statistical relational learning (SRL) methods srl2007, stochastic blockmodels, and the more recently developed Graph Neural Networks (GNNs) scarselli2008graph. These relational models can be applied to a variety of tasks dealing with structured data, e.g., molecule classification, knowledge graph completion, and recommendation systems.

Along with relational models, progress has also been made in explaining the predictions of black-box models. We use the term black-box models to refer to models whose predictions are not inherently interpretable. For example, the pixels that are instrumental in a prediction are often not apparent for state-of-the-art deep neural networks on many vision tasks goodfellow2014explaining. This has led to the emergence of models aimed at explaining the predictions of complex underlying models; however, most of these approaches are designed to work only on independent and identically distributed (iid) data. These iid model explainers fall mainly into two groups. The first group finds important data points that have high influence on learned model behavior, including influence functions (IF) koh2017understanding and representer points yeh2018representer. The second group finds the feature attributions that are most influential to the final model decision sundararajan2017axiomatic; smilkov2017smoothgrad; ribeiro2016should; ribeiro2018anchors. However, explaining relational data and relational models is significantly more challenging, as it involves learning the right relational structure around the node of interest that explains the prediction. Existing work on explaining relational data and relational models is limited; in fact, to our knowledge, there is only one such technique, GNN-Explainer ying2019gnn, which is designed to explain models that consider dependencies among data samples. GNN-Explainer learns the most important neighbor nodes and links that explain why a GNN predicts a node as a specific class. This explanation is learned as masks over the adjacency and feature matrices, which are optimized by utilizing the gradients of the underlying GNN model.

In this paper, we present RelEx, a model-agnostic relational model explainer that learns relational explanations by treating the underlying model as a black-box model. We construct explanations first by learning a local differentiable approximation of the black-box model for some node of interest, trained over the perturbation space of this node. We then learn an interpretable mask over the local approximation.

Specifically, our contributions are as follows: i) We develop RelEx, which learns model-agnostic relational explanations for the task of node classification, with access only to the output prediction of the black-box model for a specific input. Hence, RelEx can be applied to any relational model, from non-differentiable statistical relational models to various GNNs. ii) RelEx can learn diverse explanations for each data instance by maximizing the cross-entropy between two learned relational explanations. This provides end users with the much-needed flexibility of choosing an explanation that is more appealing from a domain perspective, while remaining true to the underlying black-box model. iii) We perform experiments on both synthetic and real-world datasets, comparing our relational explanations to the correct ground-truth relational structures (we refer to them as right reasons ross2017right). We demonstrate that our approach is comparable to or better than the state-of-the-art relational explainer, GNN-Explainer, and relational extensions of other state-of-the-art explainers in quantitative performance across all datasets, despite needing less information about the black-box model than these approaches. We also illustrate the capability of RelEx to capture the core topological structures in the explanations in different classification tasks through qualitative results across all the datasets.

Thus, RelEx is model-agnostic and practically more feasible than existing approaches to explaining relational models. To the best of our knowledge, ours is the first general-purpose model-agnostic relational explainer.

2 Related Work

In this work, we specifically focus on explaining two broad types of relational models: statistical relational learning, and graph neural networks. Since the primary focus of this work is explaining relational models, we only discuss the different relational models briefly before discussing explanation approaches.

Statistical relational learning (SRL) srl2007 is concerned with domain models that exhibit both uncertainty and complex relational structure, where a user handcrafts first-order logic rules to capture dependencies and reasoning. For example, the collective rule

$w: \mathrm{Spouse}(B, A) \wedge \mathrm{Votes}(A, C) \rightarrow \mathrm{Votes}(B, C)$

captures the increased probability that a spouse votes for the same candidate in an election, as determined by the dependency between the target variable Votes and the observed variable Spouse, where $w$ is the rule weight. Hinge-Loss Markov Random Fields (HL-MRFs) bach2017hinge are an example of an SRL model, which uses the declarative language Probabilistic Soft Logic (PSL) to express probabilistic logic rules. The input to HL-MRFs is a knowledge graph $\mathcal{G} = (\mathcal{E}, \mathcal{R}, \mathcal{O})$, where $\mathcal{E}$ is a set of entities, $\mathcal{R}$ is a set of relations, and $\mathcal{O}$ is a set of observed relational data of the format $r(e_1, e_2)$, where $e_1$ and $e_2$ are entities and $r$ is a binary relation between them. An example of such a data tuple is Spouse(Alice, Bob). For each relation $r$, we can represent our knowledge using an adjacency matrix $A^r$, such that its entry $A^r_{ij}$ is 1 if and only if $r(e_i, e_j)$ is in the knowledge graph. We choose HL-MRFs as a representative SRL model to evaluate our relational explainer.
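
To make the adjacency-matrix representation of observed relational data concrete, here is a minimal sketch in Python; the entities, relations, and tuples are illustrative only and not taken from the paper.

```python
import numpy as np

# Illustrative entities and observed relational tuples (not from the paper).
entities = ["Alice", "Bob", "CandidateC"]
idx = {e: i for i, e in enumerate(entities)}
observations = [
    ("Spouse", "Alice", "Bob"),
    ("Votes", "Alice", "CandidateC"),
]

# One binary adjacency matrix per relation: A_r[i, j] = 1 iff r(e_i, e_j) is observed.
relations = {"Spouse", "Votes"}
adjacency = {r: np.zeros((len(entities), len(entities)), dtype=int) for r in relations}
for r, e1, e2 in observations:
    adjacency[r][idx[e1], idx[e2]] = 1

print(adjacency["Spouse"])  # 1 at (Alice, Bob), 0 elsewhere
```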

GNNs are another, more recent technique for modeling relational data, with a variety of popular architectures kipf2017semi; xu2018how; velickovic2018graph. Given an adjacency matrix defining the relations among nodes and a feature matrix describing the attributes of each node, a GNN learns low-dimensional node representations, similar to classical feed-forward neural networks. Representations are initialized to the default node features and are updated in each layer through degree-normalized aggregations of each node's neighbors' representations. After training, these node representations capture the task-relevant feature and structural information and can be used for a variety of machine learning tasks. In this paper, we evaluate RelEx for explaining GNNs on the task of node classification.
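
The degree-normalized aggregation described above can be sketched as a single GCN-style layer; this is a minimal illustration, not the exact black-box architecture used in the experiments.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One GCN-style layer: degree-normalized aggregation of neighbor features,
    followed by a linear transform and a nonlinearity (a sketch only)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # Add self-loops and symmetrically normalize: D^{-1/2} (A + I) D^{-1/2}.
        a_hat = adj + torch.eye(adj.size(0))
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return torch.relu(self.linear(a_norm @ x))

# Usage: adj is a dense N x N adjacency matrix, x an N x F feature matrix.
adj = torch.tensor([[0., 1.], [1., 0.]])
x = torch.randn(2, 4)
layer = SimpleGCNLayer(4, 8)
h = layer(adj, x)   # N x 8 node representations
```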

Post-hoc explainers such as LIME ribeiro2016should have been developed to learn an interpretable local approximation of a black box in order to explain single instances; our relational explainer RelEx is motivated by this approach. Anchors ribeiro2018anchors were subsequently developed to make clear where LIME explanations apply. These approaches are model-agnostic, meaning they work regardless of how the black-box model works (so long as it is a model on iid data). Other explanation techniques use input gradients to learn feature importances; these include SmoothGrad smilkov2017smoothgrad and Integrated Gradients sundararajan2017axiomatic, whose basic idea is to learn a saliency map by computing the gradient of the output with respect to the input. However, they are designed specifically for image data.

There has been limited effort to improve the interpretability of deep learning methods on graphs. Graph Attention Networks velickovic2018graph learn attention weights on each edge, which can be interpreted as importance scores. Another approach simplifies Graph Convolutional Networks (GCNs) by removing the nonlinear function applied in each layer pmlr-v97-wu19e; in many cases performance is unchanged, while the underlying classifier becomes equivalent to logistic regression and is thus more interpretable. Another work attempts to disentangle node representations by capturing the latent factors in their neighborhoods ma2019disentangled. While these approaches can improve robustness and interpretability, GNN-Explainer ying2019gnn is the only post-hoc approach that provides explanations for particular node predictions.

Our approach addresses the following caveats in existing work. First, our approach needs access only to the output predictions of the black-box model, not its gradients. Thus, in contrast to GNN-Explainer, our approach is capable of explaining any relational model, including non-differentiable HL-MRFs. Our approach also shines from a practical usability perspective, as some popular GNN frameworks Fey/Lenssen/2019 take indices of existing edges, rather than adjacency matrices, as input.

3 RelEx: Learning-based Relational Explainer

In this section, we develop our model-agnostic, learning-based relational explainer, RelEx, for explaining predictions of relational models. RelEx learns which nodes and edges in the neighborhood of the node of interest are most influential in the black-box prediction. The output of RelEx is a neighborhood relational structure that is instrumental in the black-box prediction. Figure 1 gives the overall architecture of RelEx and identifies the different components and their notations, which we use in the equations in this paper.

Figure 1: RelEx architecture showing the different components and the notations associated with them.

3.1 RelEx Problem Formulation

Let $v$ be the node we want to explain. We denote the $n$-hop neighborhood of $v$ (i.e., its computation graph) by the adjacency matrix $A$, which contains a total of $N$ nodes and $E$ edges. The features of these neighborhood nodes are stored in the feature matrix $X$. We want to learn an explanation where, given some black-box relational model $f$, we learn the most salient nodes and links in $A$ that are instrumental in $f$'s prediction of $v$. We refer to the predicted class of $v$ as $y$, which is represented as a one-hot vector.

Given this problem setting, learning a relational explanation for node $v$ entails selecting nodes and edges from $A$, the computation graph of $v$. Hence, to select the salient graph structures, we learn a sparse mask over $A$. The output of our relational explainer RelEx is denoted by $A'$, a sparse adjacency matrix consisting of the nodes and edges crucial to the explanation.

The explanation $A'$ is learned by optimizing the following loss function,

$\arg\min_{M} \; L\big(y, f(A \odot M)\big) + \lambda\, R(M) \qquad (1)$

where the loss function $L$ is any distance measure between our class of interest $y$ and the probability distribution over classes predicted by the underlying black-box model $f$ on the de-noised computation graph. Common choices for $L$ are the negative log-likelihood and the KL-divergence. We represent the explanation as $A' = A \odot M$, where $\odot$ denotes element-wise multiplication and $M$ is the mask we are optimizing for. $R(M)$ is a sparseness measure on the mask $M$, which could be the $\ell_1$ norm or a group-sparseness measure he2016adaptive. We discuss $R(M)$ in greater detail in the sections to follow. Since finding the optimal solution to Equation 1 is a combinatorial problem, a brute-force approach has time complexity exponential in the number of edges. There are a variety of heuristic-search-based solutions to this optimization problem, including multi-armed bandits ribeiro2018anchors and reinforcement learning methods zhang2019learning.
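
For concreteness, here is a hedged sketch of how the objective in Equation 1 could be evaluated for a candidate mask; `black_box`, the argument names, and the choice of negative log-likelihood plus an $\ell_1$ penalty are our assumptions.

```python
import torch

def explanation_loss(black_box, adj, features, mask, target_class, lam=0.1):
    """Negative log-likelihood of the target class on the masked (de-noised)
    computation graph, plus an L1 sparseness penalty on the mask (Equation 1)."""
    probs = black_box(adj * mask, features)   # class distribution for the node of interest
    nll = -torch.log(probs[target_class] + 1e-12)
    sparsity = mask.abs().sum()               # L1 regularizer R(M)
    return nll + lam * sparsity
```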

3.2 RelEx Architecture

Here, we present RelEx, a more effective and efficient learning-based solution for explaining relational models. We first provide a general overview of our approach and then expand on the different components in the following paragraphs. To design RelEx, our first goal is to learn a local approximator of $f$ at the node $v$ whose prediction we are interested in explaining. We call this approximator $g$, which takes as input a perturbation $\tilde{A}$ of the computation graph of $v$. Since $f$ is a relational graph model, we choose $g$ to be a naive Graph Convolutional Network (GCN), owing to its powerful fitting and representation ability while being simultaneously easy to use. Specifically, we use a residual architecture dehmamy2019understanding, where we concatenate the output of every GCN layer to the final output of the network. The residual architecture increases the representation power of the GCN in learning graph topology without stacking more layers or adding more parameters.

To learn $g$, we first follow a sampling strategy to obtain perturbed computation graphs $\tilde{A}$. We then query $f$ on all samples to get the dataset $\mathcal{D} = \{(\tilde{A}_i, y_i)\}_{i=1}^{S}$, where $y_i = f(\tilde{A}_i)$ and $S$ is the number of samples. In contrast to LIME ribeiro2016should, we do not require our local approximator to be interpretable itself, as we learn a sparse explanation mask after learning the local approximator. This allows us to broaden the scope and complexity of the local approximator, thus achieving the dual goals of expressibility and interpretability, whereas other existing models typically use simple models as local approximators ribeiro2016should. Hence, RelEx is able to explain any black-box model as long as the local approximator is: i) locally faithful (i.e., it captures how the model behaves in the vicinity of the instance being predicted), and ii) differentiable with respect to the input adjacency matrix.
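
A hedged sketch of fitting the local approximator: we query the black box on perturbed computation graphs and train $g$ to match its output distribution. The helper names (`sample_perturbation`, the signatures of `black_box` and `g`) are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def fit_local_approximator(black_box, g, adj, features, sample_perturbation,
                           num_samples=1000, epochs=50, lr=1e-3):
    # Build D = {(perturbed A, f(perturbed A))} by querying the black box.
    # sample_perturbation is assumed to return an adjacency matrix of the same
    # shape as adj with some edges zeroed out.
    samples, targets = [], []
    for _ in range(num_samples):
        a_tilde = sample_perturbation(adj)
        with torch.no_grad():
            targets.append(black_box(a_tilde, features))   # class probabilities
        samples.append(a_tilde)

    opt = torch.optim.Adam(g.parameters(), lr=lr)
    for _ in range(epochs):
        for a_tilde, y in zip(samples, targets):
            opt.zero_grad()
            pred = g(a_tilde, features)   # logits from the surrogate GCN
            # KL divergence between the black box's distribution and g's prediction.
            loss = F.kl_div(pred.log_softmax(-1), y, reduction="sum")
            loss.backward()
            opt.step()
    return g
```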

The modified objective function, using the local approximator $g$ instead of the black-box model $f$, is given by

$\arg\min_{M} \; L\big(y, g(A \odot M)\big) + \lambda\, R(M) \qquad (2)$

where we replace $f$ by $g$. Below, we discuss the different components of our relational explainer in more detail: i) the sampling strategy used to create the perturbations, ii) the functions used for learning the sparse mask, iii) regularization, and iv) diverse explanations.

Sampling Strategy   We adopt a modified breadth-first search (BFS) sampling strategy, starting from node $v$, where each connected edge has some fixed probability of being selected. Nodes that are disjoint from $v$ will not affect its embedding or prediction, so they have no probability of being selected by the BFS. Any node in the computation graph of $v$ has a chance to be sampled, so long as it is connected to an already selected node. We choose BFS because it encourages closer nodes to be selected more frequently; sampling with BFS also ensures a higher variance among the farthest nodes in our samples. The closer nodes are to $v$, the higher their influence on the black-box's prediction of $v$, and we want our samples to be "close" to $A$ in order for $g$ to be a local approximation of $f$ at $v$. We do not select nodes outside of the computation graph, as they have no effect on $f$'s prediction of $v$. Each iteration of sampling yields one connected perturbed subgraph; we construct the dataset $\mathcal{D}$ by perturbing $A$ over multiple iterations and learn the local approximator $g$ by training it on $\mathcal{D}$.
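
A minimal sketch of the BFS-based perturbation sampling described above, using networkx; the keep probability and helper name are illustrative assumptions.

```python
import random
import networkx as nx

def bfs_perturbation(graph, node, keep_prob=0.5):
    """Sample a connected perturbed subgraph around `node`: starting from the node
    of interest, each frontier edge is kept with a fixed probability, so closer
    nodes are selected more often (a sketch of the BFS strategy described above)."""
    selected = {node}
    frontier = [node]
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.neighbors(u):
                if v not in selected and random.random() < keep_prob:
                    selected.add(v)
                    nxt.append(v)
        frontier = nxt
    return graph.subgraph(selected).copy()

# Usage on a toy graph:
G = nx.barabasi_albert_graph(30, 2, seed=0)
sub = bfs_perturbation(G, node=0)
```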

RelEx with Sigmoid Mask   To get the mask $M$, we set $M = \sigma(W)$, where $\sigma$ is the sigmoid function and $W$ is the parameter we need to learn. Since $M \in (0,1)^{N \times N}$, we learn a soft mask, and each element of the mask represents the importance of the corresponding edge. The objective function with the sigmoid mask is given by $\arg\min_{W} \; L\big(y, g(A \odot \sigma(W))\big) + \lambda\, R(\sigma(W))$. This optimization problem can be solved by gradient descent, with the update $W \leftarrow W - \alpha \nabla_W L$, where $\alpha$ is the learning rate.
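
A hedged sketch of the sigmoid-mask optimization loop; the hyperparameters, the Adam optimizer, and the exact loss form are our assumptions.

```python
import torch

def learn_sigmoid_mask(g, adj, features, target_class, lam=0.1, lr=0.01, steps=300):
    """Learn a soft edge-importance mask M = sigmoid(W) over the computation graph
    by gradient descent on the surrogate objective (a sketch; `g` is the local
    approximator and is assumed differentiable w.r.t. the adjacency matrix)."""
    w = torch.zeros_like(adj, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mask = torch.sigmoid(w)
        probs = g(adj * mask, features).softmax(-1)
        loss = -torch.log(probs[target_class] + 1e-12) + lam * mask.abs().sum()
        loss.backward()
        opt.step()
    return torch.sigmoid(w).detach()   # soft mask: entry (i, j) = importance of edge (i, j)
```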

RelEx with Gumbel-Softmax Mask   We introduce another mask based on the Gumbel-Softmax jang2016categorical: $M = \mathrm{GumbelSoftmax}(W)$, where $W$ is the parameter to be learned and the entries of $M$ are (approximately) binary. This directly gives us a set of edges and nodes, unlike the sigmoid mask, where we learn a soft mask and then apply a threshold. Choosing a threshold can be difficult because we would need the right reason as a reference, and it is sometimes challenging to find the optimal threshold when the learned soft values are close to each other. The Gumbel-Softmax is a continuous distribution on the simplex that can approximate categorical samples and whose parameter gradients can be easily computed via the reparameterization trick. Using the Gumbel-Softmax-based mask therefore gives us an end-to-end framework for learning relational structures.
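
A sketch of the Gumbel-Softmax variant using PyTorch's `torch.nn.functional.gumbel_softmax` with the straight-through (`hard=True`) estimator; the two-logit parameterization per edge and all hyperparameters are our assumptions.

```python
import torch
import torch.nn.functional as F

def learn_gumbel_mask(g, adj, features, target_class, lam=0.05, lr=0.01, steps=300, tau=0.5):
    """Learn an (approximately) binary edge mask with the Gumbel-Softmax trick, so no
    post-hoc threshold is needed (a sketch; names and hyperparameters are ours)."""
    # Two logits per edge: index 1 = keep, index 0 = drop.
    logits = torch.zeros(*adj.shape, 2, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # hard=True returns one-hot samples but keeps gradients via the straight-through trick.
        mask = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]
        probs = g(adj * mask, features).softmax(-1)
        loss = -torch.log(probs[target_class] + 1e-12) + lam * mask.sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (logits[..., 1] > logits[..., 0]).float()   # final binary mask
```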

Regularization   We incorporate regularization in our objective to ensure that the learned mask remains sparse and is more interpretable. We consider two regularization functions. First, we incorporate sparseness on the edges using the $\ell_1$ norm. Second, we incorporate a group-sparseness norm he2016adaptive on nodes and edges, given by $R(M) = \sum_{i=1}^{N} \lVert M_{i,\cdot} \rVert_2$, where each row of the adjacency matrix is treated as one group. Thus, we pursue sparseness on both edges and nodes.
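
The two regularizers can be sketched as follows; the exact group norm used in the paper was lost in extraction, so the row-wise $\ell_2$ group penalty below is an assumption consistent with the description above.

```python
import torch

def l1_edge_sparsity(mask):
    # Sparseness on individual edges.
    return mask.abs().sum()

def group_node_sparsity(mask):
    # Group sparseness: treat each row of the masked adjacency matrix as one group,
    # so whole nodes (rows) are encouraged to drop out together.
    return torch.sqrt((mask ** 2).sum(dim=1) + 1e-12).sum()
```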

Diverse Explanations   Multiple explanations can exist for one prediction, but some are closer to the "right" reason ross2017right. We encourage diversity by learning several different masks and maximizing the cross-entropy loss between any two of them, using Equation 3,

$\arg\min_{M_k} \; L\big(y, g(A \odot M_k)\big) + \lambda\, R(M_k) - \beta \sum_{j=1}^{k-1} \mathrm{CE}(M_k, M_j) \qquad (3)$

where $M_k$ is the current mask to be learned, $M_1, M_2, \dots, M_{k-1}$ are the previously learned masks, $\mathrm{CE}(\cdot, \cdot)$ is the cross-entropy loss between two masks, and $\beta$ is the weight of the cross-entropy term.
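
A hedged sketch of the diversity term: we compute a cross-entropy between the current soft mask and each previously learned mask and subtract it (weighted by $\beta$) from the objective. The use of element-wise binary cross-entropy is our reading of Equation 3, not a confirmed implementation detail.

```python
import torch
import torch.nn.functional as F

def diversity_penalty(current_mask, previous_masks, beta=1.0):
    """Return a term to add to the objective: the negative (weighted) cross-entropy
    between the current mask and each previously learned mask, so minimizing the
    total loss maximizes the cross-entropy and pushes explanations apart."""
    penalty = 0.0
    for prev in previous_masks:
        prev = prev.detach()
        # Element-wise binary cross-entropy between the two soft masks.
        ce = F.binary_cross_entropy(current_mask.clamp(1e-6, 1 - 1e-6), prev)
        penalty = penalty - beta * ce
    return penalty
```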

Learning diverse explanations can increase users' trust in black-box models mothilal2020explaining. In our experiments in Section 5.4, we show that our approach learns diverse masks, of which at least one corresponds to the right reason.

4 Evaluation Methods

4.1 Comparison with State-of-the-art Explainers

We make appropriate modifications to Anchors ribeiro2018anchors and Saliency Map to adapt them for the relational setting.

Relational Anchors   Anchor explanations ribeiro2018anchors are constructed by selecting the features of the instance we want to explain that maximize precision. Precision is calculated by perturbing all features except the anchor features, which are held constant; it is the proportion of perturbed samples whose label does not change. A high-precision anchor implies that the anchor features are most important to the prediction, because perturbing the other features has little or no effect. Approximating precision is difficult, as it requires many expensive calls to $f$. Therefore, anchor construction is formulated as an instance of a pure-exploration multi-armed bandit. To adapt Anchors to our setting, we take the anchor features for relational explanations to be graph edges instead of node features. We define a threshold $\tau$, which we vary based on how much the predictions of $f$ vary under our perturbations. We calculate precision as $\mathrm{prec}(\mathcal{A}) = \mathbb{E}_{z \sim \mathcal{D}(\cdot \mid \mathcal{A})}\big[\mathbb{1}\{f(z) = y\}\big]$, where $\mathcal{A}$ is the set of anchor edges, $z$ is a perturbed sample, and $f$ is the black box. Our perturbed samples contain all edges in $\mathcal{A}$ and at most all edges in the computation graph of $v$. Samples are generated via breadth-first search, similarly to RelEx, and $y$ is the predicted class we want to explain.
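
A minimal sketch of estimating anchor precision by Monte-Carlo sampling of perturbed edge sets; the sampling scheme and the `black_box` signature are our assumptions.

```python
import random

def estimate_precision(black_box, anchor_edges, computation_edges, node, target_class,
                       num_samples=200, keep_prob=0.5):
    """Monte-Carlo estimate of anchor precision: the fraction of perturbed graphs
    that contain all anchor edges and still receive the original predicted class
    (a sketch of the adapted Anchors objective; `black_box` is assumed to predict
    a class from an edge set and a node)."""
    hits = 0
    for _ in range(num_samples):
        # Always keep anchor edges; keep every other computation-graph edge with prob keep_prob.
        others = [e for e in computation_edges if e not in anchor_edges]
        sample = set(anchor_edges) | {e for e in others if random.random() < keep_prob}
        if black_box(sample, node) == target_class:
            hits += 1
    return hits / num_samples
```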

Saliency Map   Since our approach involves learning an edge-importance mask, we also compare our approach to Saliency Map, which is used in computer vision to learn the spatial support of a given target class in an image. To adapt Saliency Map to the relational setting, we first calculate the gradient of the black-box model's loss function with respect to the adjacency matrix $A$, and then normalize the gradient values to lie between 0 and 1. We use these values as the learned explanation.
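
A sketch of the relational Saliency Map baseline, assuming a differentiable black box that accepts a dense adjacency matrix; the normalization to [0, 1] follows the description above.

```python
import torch

def adjacency_saliency(black_box, adj, features, target_class):
    """Gradient of the black box's loss for the predicted class w.r.t. the adjacency
    matrix, normalized to [0, 1] and used as an edge-importance explanation
    (a sketch; requires a differentiable model, unlike RelEx)."""
    adj = adj.clone().requires_grad_(True)
    logits = black_box(adj, features)
    loss = -torch.log_softmax(logits, dim=-1)[target_class]
    loss.backward()
    grad = adj.grad.abs()
    return (grad - grad.min()) / (grad.max() - grad.min() + 1e-12)
```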

4.2 Relational Explanation Evaluation Metrics

Area Under the ROC Curve   We report the area under the receiver operating characteristic curve (AUC-ROC), which captures the deviation of the explanation from the ground-truth right reason. AUC-ROC is calculated between the relational explanation and the ground-truth right-reason structure. In many cases, we know the right structural reasons associated with a prediction as prior domain knowledge ross2017right; for example, molecules or proteins have their own specific and identifiable structures. We can then evaluate our explanations by comparing them to these already-known right reasons.
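
Computing this metric is straightforward once the explanation mask and the right-reason adjacency matrix are available; a minimal sketch using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def explanation_auc(explanation_mask, right_reason_adj):
    """AUC-ROC between learned edge-importance scores and the ground-truth
    ('right reason') edges of the computation graph (a sketch; both arguments
    are dense N x N matrices)."""
    scores = np.asarray(explanation_mask).ravel()
    labels = (np.asarray(right_reason_adj).ravel() > 0).astype(int)
    return roc_auc_score(labels, scores)
```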

Infidelity Scores   Since it is hard to isolate errors that stem from the underlying black-box model from errors that stem from the explainer, we also consider the quantitative measure known as infidelity yeh2019fidelity, $\mathrm{INFD}(\Phi, f, A) = \mathbb{E}_{I \sim \mu_I}\big[\big(\langle I, \Phi(f, A)\rangle - (f(A) - f(A - I))\big)^2\big]$, where $I$ represents significant perturbations around the node $v$, $\mu_I$ gives the distribution of the perturbation $I$, $A - I$ represents the perturbed adjacency graph, and $\Phi$ is our explainer. Infidelity measures the goodness of an explanation by quantifying the degree to which it captures how the predictor function itself changes in response to a significant perturbation.
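
A hedged Monte-Carlo sketch of the infidelity estimate; the edge-dropping perturbation distribution is our assumption, as the paper does not spell it out here.

```python
import torch

def estimate_infidelity(black_box, explanation, adj, features, target_class,
                        num_samples=100, drop_prob=0.2):
    """Monte-Carlo estimate of infidelity: squared gap between the explanation's
    predicted effect of a perturbation <I, Phi> and the actual change in the
    black box's output on the perturbed graph (a sketch)."""
    with torch.no_grad():
        base = black_box(adj, features).softmax(-1)[target_class]
        total = 0.0
        for _ in range(num_samples):
            drop = (torch.rand_like(adj) < drop_prob).float() * adj   # perturbation I
            perturbed = black_box(adj - drop, features).softmax(-1)[target_class]
            predicted_effect = (drop * explanation).sum()
            total += (predicted_effect - (base - perturbed)) ** 2
    return (total / num_samples).item()
```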

5 Experiments

5.1 Experiments on Synthetic Relational Datasets with GNNs as the Black-box Model

We construct two kinds of node classification datasets: i) Tree-Grid, in which we use a binary tree with fixed height as the basic structure and then connect multiple grid structures to the tree by randomly adding noisy links between nodes in a grid (grid nodes) and nodes in the tree (tree nodes); and ii) Tree-BA, in which we again use a binary tree as the basic structure (tree nodes) and then connect multiple Barabási–Albert (BA) structures (BA nodes) to the tree by randomly adding noisy links between BA nodes and tree nodes. The prediction problem is to predict the correct class of each node from the neighborhood topological structure of the node.

Table 1: AUC-ROC and infidelity for the Tree-Grid synthetic dataset.
Explainer: Saliency Map | Relational Anchors | GNN-Explainer | RelEx (sigmoid) | RelEx (Gumbel)
AUC-ROC: 0.1205 | 0.6871 | 0.8431 | 0.8261 | 0.8672
Infidelity: 0.1317 | 0.0754 | 0.0782 | 0.0794 | 0.0735

Table 2: AUC-ROC and infidelity for the Tree-BA synthetic dataset.
Explainer: Saliency Map | Relational Anchors | GNN-Explainer | RelEx (sigmoid) | RelEx (Gumbel)
AUC-ROC: 0.4352 | 0.5069 | 0.5666 | 0.5470 | 0.5873
Infidelity: 0.1199 | 0.1110 | 0.0885 | 0.0893 | 0.0884

We train a 3-layer GCN as the black box on each dataset individually. We show quantitative results for the Tree-Grid dataset in Table 1 and for the Tree-BA dataset in Table 2. Since both datasets are synthetically generated, we know the ground-truth right-reason structure and use it to calculate the deviation of the learned relational explanation from the right reason. From the AUC-ROC and infidelity results, we can see that RelEx has the best performance on both measures. We also note that Saliency Map fails to perform as well as the other models, as it is the only explainer that is not specifically tailored to relational models. This further confirms that explainers designed for traditional iid models do not seamlessly carry over to relational models, and that we need explainers designed specifically for them.

5.2 Experiments on Synthetic Relational Dataset with HL-MRFs as the Black-box Model

We construct a three-class graph dataset, Tree-Grid-BA. We generate multiple tree, grid, and Barabási–Albert motifs and randomly add noisy links among them to construct the graph. For the HL-MRF model, we design the collective first-order logic rules shown in Table 3. To train the PSL model, we randomly select half the nodes as observations, which are used as seed nodes.

Table 3: 3-hop PSL collective rules.
Nodes: $A$, $B$, $C$, $D$; target class: $cat$
$w_1: \mathrm{HasCat}(A, cat) \wedge \mathrm{Link}(A, B) \rightarrow \mathrm{HasCat}(B, cat)$
$w_2: \mathrm{HasCat}(A, cat) \wedge \mathrm{Link}(A, B) \wedge \mathrm{Link}(B, C) \rightarrow \mathrm{HasCat}(C, cat)$
$w_3: \mathrm{HasCat}(A, cat) \wedge \mathrm{Link}(A, B) \wedge \mathrm{Link}(B, C) \wedge \mathrm{Link}(C, D) \rightarrow \mathrm{HasCat}(D, cat)$

Table 4: AUC-ROC and infidelity for the Tree-Grid-BA synthetic dataset.
Explainer: Relational Anchors | RelEx (sigmoid) | RelEx (Gumbel)
AUC-ROC: 0.5221 | 0.7076 | 0.6284
Infidelity: 0.0396 | 0.0310 | 0.0320

Since GNN-Explainer and Saliency Map need access to gradients, they cannot be applied to the black-box HL-MRF model. Quantitative results are shown in Table 4, where we can see that RelEx (sigmoid) obtains better results than RelEx (Gumbel), as the HL-MRF model assigns different continuous importance values to the links around the node of interest, captured by the learned rule weights; the soft mask of RelEx (sigmoid) successfully learns a corresponding importance value for each link. This shows the competence of both RelEx variants across two different types of relational models. Figure 2 shows example explanations of a tree node, a grid node, and a BA node, respectively. We observe that the qualitative results are consistent with the quantitative results, with RelEx (sigmoid) obtaining relational explanations that are closer to the actual right reason. We also see that the RelEx model is able to glean the core topological structures that explain the predictions.

Figure 2: Explanations for tree, grid, and BA nodes. For each node type, the panels show the computation graph, the right reason, and the explanations learned by the two RelEx variants.

5.3 Experiments on Molecule Dataset with GNNs as the Black-box Model

To demonstrate the applicability of our approach on a real-world dataset, we conduct experiments on MUTAG doi:10.1021/jm00106a046, a well-known benchmark graph classification dataset. It consists of 188 mutagenic aromatic and heteroaromatic nitro compounds with 7 different kinds of atoms, including carbon, nitrogen, and oxygen. We have prior domain knowledge that carbon atoms form ring structures, which represent mutagenic aromatics in chemistry; nitrogen and oxygen atoms combine to form the NO$_2$ structure, and nitrogen atoms can also occur in pentagonal or hexagonal structures with carbon atoms.

Table 5: Infidelity on the MUTAG dataset.
Explainer: Saliency Map | Relational Anchors | GNN-Explainer | RelEx (sigmoid) | RelEx (Gumbel)
Infidelity: 0.05879 | 0.06008 | 0.05557 | 0.05659 | 0.05573

Table 5 shows the comparison on infidelity, where RelEx and GNN-Explainer obtain similarly good results. We demonstrate the qualitative performance of the models in Figures 3 and 4. In all the figures, yellow nodes are our nodes of interest. Rather than thresholding soft importance values for plotting, we encode edge importance in the color of the edge, with darker colors signifying higher importance; explanations from GNN-Explainer and RelEx (sigmoid) are plotted this way, as they learn soft importance values for the edges in the relational explanation. In Figure 3, we observe that the explanation for a carbon node learned by one RelEx variant finds the correct hexagonal ring structure, while the other learns an explanation containing two connected hexagonal rings; both capture the core relational structure (the hexagonal ring) corresponding to the carbon node. Figure 4 shows explanations for a nitrogen node; all explainers except Relational Anchors are able to identify the correct NO$_2$ topological structure.

Figure 3: Relational explanations for a carbon atom (panels: molecule, right reason, Relational Anchors, GNN-Explainer, and the two RelEx variants).
Figure 4: Explanations for a nitrogen atom in an NO$_2$ structure (same panel layout).

5.4 Diverse Explanations on Molecule Dataset

We train diverse explanations for each node of interest. Figure 5 gives two example explanations learned by the RelEx-based explainer, where yellow nodes are our nodes of interest. Figure 5(a) shows the molecule, and Figures 5(b) and 5(c) give two diverse explanations for the same node. In Figure 5(a), we see that our node of interest is part of two ring structures, a pentagon and a hexagon. The first explanation learns one pentagonal ring structure, while the second, diverse explanation finds both ring structures. Though both are correct, the second explanation is more meaningful from the domain perspective, as it gleans both of the core relational structures that the node is part of. Similarly, in Figure 5(d), even though both explanations learn the core hexagonal structure responsible for the prediction, the first explanation in Figure 5(e) contains some noise, while the second, diverse explanation in Figure 5(f) excludes the noise and is preferable. Thus, the ability of our approach to learn diverse explanations comes in handy for learning multiple "right" explanations, among which some make more sense from a domain perspective.

Figure 5: Diverse explanations. Panels (a) and (d) show the molecules; panels (b)-(c) and (e)-(f) show two diverse explanations for each node of interest.

6 Conclusion

In this work, we developed a model-agnostic relational explainer, RelEx, which has the ability to explain any black-box relational model. Through rigorous experimentation and comparison with state-of-the-art explainers, we demonstrated the quantitative and qualitative capability of RelEx in explaining two different black-box relational models, GNNs, representing deep graph neural network models, and HL-MRFs, representing statistical relational models, on three synthetic and one real-world graph dataset. The ability of RelEx to learn diverse explanations further enhances its practical value and applicability in explaining domain-specific predictions.

References

  • (1) Getoor, L., B. Taskar, eds. Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning). 2007.
  • (2) Scarselli, F., M. Gori, A. C. Tsoi, et al. The graph neural network model. IEEE Transactions on Neural Networks, pages 61–80, 2008.
  • (3) Goodfellow, I. J., J. Shlens, C. Szegedy. Explaining and harnessing adversarial examples. arXiv, 2014.
  • (4) Koh, P. W., P. Liang. Understanding black-box predictions via influence functions. In ICML. 2017.
  • (5) Yeh, C.-K., J. Kim, I. E.-H. Yen, et al. Representer point selection for explaining deep neural networks. In NeurIPS. 2018.
  • (6) Sundararajan, M., A. Taly, Q. Yan. Axiomatic attribution for deep networks. In ICML. 2017.
  • (7) Smilkov, D., N. Thorat, B. Kim, et al. Smoothgrad: removing noise by adding noise. arXiv, 2017.
  • (8) Ribeiro, M. T., S. Singh, C. Guestrin. Why should i trust you?: Explaining the predictions of any classifier. In SIGKDD. 2016.
  • (9) —. Anchors: High-precision model-agnostic explanations. In AAAI. 2018.
  • (10) Ying, R., D. Bourgeois, J. You, et al. Gnn explainer: A tool for post-hoc explanation of graph neural networks. In NeurIPS. 2019.
  • (11) Ross, A. S., M. C. Hughes, F. Doshi-Velez. Right for the right reasons: Training differentiable models by constraining their explanations. arXiv, 2017.
  • (12) Bach, S. H., M. Broecheler, B. Huang, et al. Hinge-loss markov random fields and probabilistic soft logic. JMLR, pages 3846–3912, 2017.
  • (13) Kipf, T. N., M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR. 2017.
  • (14) Xu, K., W. Hu, J. Leskovec, et al. How powerful are graph neural networks? In ICLR. 2019.
  • (15) Veličković, P., G. Cucurull, A. Casanova, et al. Graph Attention Networks. ICLR, 2018.
  • (16) Wu, F., A. Souza, T. Zhang, et al. Simplifying graph convolutional networks. In ICML. 2019.
  • (17) Ma, J., P. Cui, K. Kuang, et al. Disentangled graph convolutional networks. In ICML. 2019.
  • (18) Fey, M., J. E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds. 2019.
  • (19) He, J., Y. Zhang, Y. Zhou, et al. Adaptive stochastic gradient descent on the grassmannian for robust low-rank subspace recovery. IET Signal Processing, pages 1000–1008, 2016.
  • (20) Zhang, Y., A. Ramesh. Learning interpretable relational structures of hinge-loss markov random fields. In Proceedings of the 28th International Joint Conference on Artificial Intelligence. AAAI Press, 2019.
  • (21) Dehmamy, N., A.-L. Barabási, R. Yu. Understanding the representation power of graph neural networks in learning graph topology. In NeurIPS. 2019.
  • (22) Jang, E., S. Gu, B. Poole. Categorical reparameterization with gumbel-softmax. arXiv, 2016.
  • (23) Mothilal, R. K., A. Sharma, C. Tan. Explaining machine learning classifiers through diverse counterfactual explanations. In FAT. 2020.
  • (24) Yeh, C.-K., C.-Y. Hsieh, A. Suggala, et al. On the (in) fidelity and sensitivity of explanations. In NeurIPS. 2019.
  • (25) Debnath, A. K., R. L. Lopez de Compadre, G. Debnath, et al. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, pages 786–797, 1991.