1 Introduction
In the last decade, significant attention has been directed toward accurately modeling non-Euclidean, graph-structured data. Relational models include statistical relational learning (SRL) methods [srl2007], stochastic blockmodels, and the more recently developed Graph Neural Networks (GNNs) [scarselli2008graph]. These relational models can be applied to a variety of tasks dealing with structured data, e.g., molecule classification, knowledge graph completion, and recommendation systems.
Along with relational models, progress has also been made in explaining the predictions of black-box models. We use the term black-box models to refer to models whose predictions are not inherently interpretable. For example, for state-of-the-art deep neural network models on many vision tasks, the pixels that are instrumental in a prediction are often not apparent [goodfellow2014explaining]. This has led to the emergence of models aimed at explaining the predictions of complex underlying models; however, most such approaches are designed to work only on independent and identically distributed (iid) data. These iid model explainers fall mainly into two groups. The first group finds important data points that have high influence on learned model behavior, including influence functions (IF) [koh2017understanding] and representer points [yeh2018representer]. The second group finds the feature attributes that are most influential to the final model decision [sundararajan2017axiomatic; smilkov2017smoothgrad; ribeiro2016should; ribeiro2018anchors]. However, explaining relational data and relational models is significantly more challenging, as it involves learning the right relational structure around the node of interest that explains the prediction. Existing work on explaining relational data and relational models is limited; in fact, to our knowledge, there is only one such technique, GNNExplainer [ying2019gnn], which has been designed for explaining models that consider dependencies among the data samples. GNNExplainer learns the most important neighbor nodes and links corresponding to why a GNN predicts certain nodes as a specific class. This explanation is learned as masks over the adjacency and feature matrices, which are optimized by utilizing the gradients of the underlying GNN model.
In this paper, we present RelEx, a model-agnostic relational model explainer that learns relational explanations by treating the underlying model as a black box. We construct explanations by first learning a local differentiable approximation of the black-box model for a node of interest, trained over the perturbation space of this node. We then learn an interpretable mask over the local approximation.
Specifically, our contributions are as follows: i) We develop RelEx, which learns model-agnostic relational explanations for the task of node classification, with access only to the output prediction of the black-box model for a specific input. Hence, RelEx can be applied to any relational model, from non-differentiable statistical relational models to various GNNs. ii) RelEx can learn diverse explanations for each data instance by maximizing the cross-entropy between two learned relational explanations. This provides end users with the much-needed flexibility of choosing an explanation that is more appealing from a domain perspective, while remaining true to the underlying black-box model. iii) We perform experiments on both synthetic and real-world datasets, comparing our relational explanations to the correct ground-truth relational structures (we refer to them as right reasons [ross2017right]). We demonstrate that our approach is comparable to or better than the state-of-the-art relational explainer, GNNExplainer, and relational extensions of other state-of-the-art explainers in quantitative performance across all datasets, despite needing less information about the black-box model than these approaches. We also illustrate the capability of RelEx to capture the core topological structures in its explanations in different classification tasks through qualitative results across all the datasets.
Thus, RelEx is model-agnostic and practically more feasible than existing approaches to explaining relational models. To the best of our knowledge, ours is the first general-purpose model-agnostic relational explainer.
2 Related Work
In this work, we specifically focus on explaining two broad types of relational models: statistical relational learning and graph neural networks. Since the primary focus of this work is explaining relational models, we only discuss the different relational models briefly before discussing explanation approaches.
Statistical relational learning (SRL) [srl2007] is concerned with domain models that exhibit both uncertainty and complex relational structure, where a user handcrafts first-order logic rules to capture dependencies and reasoning. For example, the collective rule

w : Spouse(B, A) ∧ Votes(A, C) → Votes(B, C)

captures the increased probability that spouses vote for the same candidate in an election, as determined by the dependency between the target variable Votes and the observed variable Spouse, where w is the rule weight. Hinge-Loss Markov Random Fields (HL-MRFs) [bach2017hinge] are an example of an SRL model, which uses the declarative language Probabilistic Soft Logic (PSL) to express probabilistic logic rules. The input to HL-MRFs is a knowledge graph consisting of a set of entities, a set of relations, and a set of observed relational data of the form r(e1, e2), where e1 and e2 are entities and r is a binary relation between them. An example of such a data tuple is Spouse(Alice, Bob). For each relation r, we can represent our knowledge using an adjacency matrix A_r, whose (i, j) entry is 1 if and only if r(e_i, e_j) is in the knowledge graph. We choose HL-MRFs as a representative SRL model to evaluate our relational explainer.

GNNs are another, more recent technique for modeling relational data, with a variety of popular architectures [kipf2017semi; xu2018how; velickovic2018graph].
Given an adjacency matrix defining the relations among nodes and a feature matrix describing the attributes of each node, a GNN learns low-dimensional node representations, similar to classical feed-forward neural networks. Representations are initialized to the default node features and are updated in each layer through degree-normalized aggregations of each node's neighbors' representations. After training, these node representations capture the task-relevant feature and structural information and can then be used for a variety of machine learning tasks. In this paper, we evaluate RelEx on explaining GNNs for the task of node classification.

Post-hoc explainers such as LIME [ribeiro2016should] have been developed to learn an interpretable local approximation of a black box in order to explain single instances. Our relational explainer RelEx is motivated by this approach. Anchors [ribeiro2018anchors] were subsequently developed to make clear where LIME explanations apply. These approaches are model-agnostic, meaning they work regardless of how the black-box model works (as long as it is a model on iid data). Other explanation techniques use input gradients to learn feature importances; these include SmoothGrad [smilkov2017smoothgrad] and Integrated Gradients [sundararajan2017axiomatic], where the basic idea is to learn a saliency map by calculating the gradient of the output with respect to the input. However, they are designed specifically for image data.
There has been limited effort to improve the interpretability of deep learning based methods on graphs. Graph Attention Networks form an architecture that learns attention weights on each edge, which can be interpreted as importance scores [velickovic2018graph]. Another approach simplifies Graph Convolutional Networks (GCNs) by removing the nonlinear function applied at each layer [pmlrv97wu19e]; in many cases performance is unchanged, while the underlying classifier becomes equivalent to logistic regression and is thus more interpretable. Another line of work attempts to disentangle node representations by capturing the latent factors in their neighborhoods [ma2019disentangled]. While these approaches can improve robustness and interpretability, GNNExplainer [ying2019gnn] is the only post-hoc approach that provides explanations for particular node predictions.

Our approach addresses the following caveats in existing work. First, our approach only needs access to the output predictions of the black-box model, not its gradients. Thus, in contrast to GNNExplainer, our approach is capable of explaining any relational model, including non-differentiable HL-MRFs. Our approach also shines from a practical usability perspective, as some popular implementations [Fey/Lenssen/2019] use indices of existing edges rather than adjacency matrices as input.
3 RelEx: Learning-based Relational Explainer
In this section, we develop our model-agnostic, learning-based relational explainer, RelEx, for explaining predictions of relational models. RelEx learns which nodes and edges in the neighborhood of the node of interest are most influential in the black-box prediction. The output of RelEx is a neighborhood relational structure that is instrumental in the black-box prediction. Figure 1 gives the overall architecture of RelEx and identifies the different components and their notations, which we use in the equations in this paper.
3.1 RelEx Problem Formulation
Let v be the node we want to explain. We denote the n-hop neighborhood of v (i.e., its computation graph) by the adjacency matrix A_c, which contains a total of N nodes and E edges. The features of these neighborhood nodes are stored in the feature matrix X. We want to learn an explanation: given some black-box relational model f, we learn the most salient nodes and links in A_c that are instrumental in f's prediction for v. We refer to the predicted class of v as y, represented as a one-hot vector.
Having defined the problem setting, we can see that learning a relational explanation for node v entails selecting nodes and edges from A_c, the computation graph of v. Hence, to select the salient graph structures, we learn a sparse mask over A_c. The explanation learned by RelEx is A_c ⊙ M, a sparse adjacency matrix consisting of the nodes and edges crucial to the explanation, where ⊙ denotes element-wise multiplication and M is the mask we are optimizing for. The mask is learned by optimizing the loss function

argmin_M  L(y, f(A_c ⊙ M, X)) + Ω(M),      (1)

where the loss function L is any distance measure between our class of interest y and the probability distribution over classes predicted by the underlying black-box model f on the denoised computation graph; common choices for L are the negative log-likelihood and the KL-divergence. Ω(M) is a sparseness measure on the mask M, which could be the L1 norm or a group sparseness measure such as the L2,1 norm [he2016adaptive]; we discuss Ω in greater detail in the sections to follow. Since finding the optimal solution to Equation 1 is a combinatorial problem, a brute-force search over edge subsets has exponential time complexity. There are a variety of heuristic search based solutions to this optimization problem, including multi-armed bandits [ribeiro2018anchors] and reinforcement learning methods [zhang2019learning].

3.2 RelEx Architecture
Here, we present RelEx, a more effective and efficient learning-based solution for explaining relational models. We first provide a general overview of our approach and then expand on the different components in the following paragraphs. To design RelEx, our first goal is to learn a local approximator of f at the node v whose prediction we are interested in explaining. We call this approximator g; its input is a perturbation of the computation graph of v. Since f is a relational graph model, we choose g to be a naive Graph Convolutional Network (GCN), owing to its powerful fitting and representation ability, while simultaneously being easy to use. Specifically, we use a residual architecture [dehmamy2019understanding], in which we concatenate the output of every GCN layer to the final output of the network. The residual architecture increases the representation power of the GCN in learning graph topology without stacking more layers or adding more parameters.
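To make the residual local approximator concrete, the sketch below shows a parameter-free version of the idea in plain Python: each layer performs a GCN-style degree-normalized aggregation, and every layer's output is concatenated into the final node representation. Learnable weights, nonlinearities, and batching are omitted for brevity; all function and variable names here are ours, not from an existing library.

```python
import math

def gcn_layer(adj, feats):
    """One parameter-free GCN propagation step: degree-normalized
    aggregation of each node's own and its neighbors' features.
    adj:   dict node -> list of neighbor nodes
    feats: dict node -> feature vector (list of floats)"""
    deg = {v: len(adj[v]) + 1 for v in adj}  # +1 accounts for the self-loop
    out = {}
    for v in adj:
        agg = [0.0] * len(feats[v])
        for u in adj[v] + [v]:  # aggregate neighbors and self
            norm = 1.0 / math.sqrt(deg[v] * deg[u])
            for i, x in enumerate(feats[u]):
                agg[i] += norm * x
        out[v] = agg
    return out

def residual_gcn(adj, feats, n_layers=3):
    """Residual-architecture sketch: run n_layers propagation steps and
    concatenate every layer's output into the final representation."""
    reps = {v: list(feats[v]) for v in adj}
    h = feats
    for _ in range(n_layers):
        h = gcn_layer(adj, h)
        for v in adj:
            reps[v].extend(h[v])  # concatenate this layer's output
    return reps
```

In a real implementation, each layer would also apply a learned linear map and nonlinearity; the point of the sketch is the concatenation of per-layer outputs, which preserves topology information from every depth.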
To learn g, we first follow a sampling strategy to obtain perturbed computation graphs. We then query f on all S sampled graphs to get the dataset D = {(A_i', y_i')}, i = 1, ..., S, where A_i' is a perturbed computation graph and y_i' = f(A_i', X). In contrast to LIME [ribeiro2016should], we do not require our local approximator itself to be interpretable, as we learn a sparse explanation mask after learning the local approximator. This allows us to broaden the scope and complexity of the local approximator, thus achieving the dual goals of expressibility and interpretability, whereas other existing models typically use simple models as the local approximators [ribeiro2016should]. Hence, RelEx is able to explain any black-box model as long as the local approximator is: i) locally faithful (i.e., it matches how the model behaves in the vicinity of the instance being predicted), and ii) differentiable with respect to the input adjacency matrix.
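The sampling-and-querying step can be sketched as follows. This is a minimal illustration assuming the black box is any callable that maps a (nodes, edges) pair to a label; it uses the BFS-style edge selection described under "Sampling Strategy" below, and all names are ours.

```python
import random

def bfs_perturb(adj, v, p=0.5, rng=random):
    """Sample a connected perturbation of v's computation graph:
    starting from v, each edge out of an already-selected node is
    kept with probability p (a breadth-first exploration)."""
    kept_nodes, kept_edges = {v}, set()
    frontier = [v]
    while frontier:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if rng.random() < p:
                    kept_edges.add((min(u, w), max(u, w)))
                    if w not in kept_nodes:
                        kept_nodes.add(w)
                        nxt.append(w)
        frontier = nxt
    return kept_nodes, kept_edges

def build_dataset(adj, v, blackbox, n_samples=100, p=0.5):
    """Query the black box on each perturbed graph to build the
    training set D for the local approximator g."""
    return [(edges, blackbox(nodes, edges))
            for nodes, edges in (bfs_perturb(adj, v, p)
                                 for _ in range(n_samples))]
```

The local approximator g would then be trained on these (perturbed graph, prediction) pairs in place of the original labels.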
The modified objective function, using the local approximator g instead of the black-box model f, is given by

argmin_M  L(y, g(A_c ⊙ M, X)) + Ω(M),      (2)

where we replace f in Equation 1 by g. Below, we discuss the different components of our relational explainer in more detail: i) the sampling strategy used to create the perturbations, ii) the functions for learning the sparse mask, iii) regularization, and iv) diverse explanations.
Sampling Strategy We adopt a modified breadth-first search (BFS) sampling strategy, starting from node v, where each connected edge has some fixed probability of being selected. Nodes that are disjoint from the computation graph of v cannot affect v's embedding or prediction, so they have zero probability of being selected by the BFS. Any node in the computation graph has a chance of being sampled, as long as it is connected to an already selected node. We choose BFS because it encourages closer nodes to be selected more frequently, and sampling with BFS also ensures higher variance among the farthest nodes in our samples. The closer nodes are to v, the higher their influence on the black box's prediction for v, and we want our samples to be "close" to A_c in order for g to be a local approximation of f at v. We do not select nodes outside the computation graph, as they have no effect on the prediction for v. Each iteration of sampling yields one connected perturbed subgraph; we construct the dataset D by perturbing for multiple iterations and learn the local approximator g by training it on D.

RelEx with Sigmoid Mask To obtain the mask, we set M = σ(Θ), where σ is the sigmoid function and Θ is the parameter to be learned. Every entry of M lies in (0, 1), which means we learn a soft mask, and each element of the mask represents the importance of the corresponding edge. The objective function with the sigmoid mask is argmin_Θ L(y, g(A_c ⊙ σ(Θ), X)) + Ω(σ(Θ)). This optimization problem can be solved by gradient descent, Θ ← Θ − η ∇_Θ, where η is the learning rate.

RelEx with Gumbel-Softmax Mask We introduce another mask based on the Gumbel-softmax [jang2016categorical], in which the (approximately binary) mask entries are sampled via the Gumbel-softmax of learnable parameters Θ. This directly gives us a set of edges and nodes, unlike the sigmoid mask, where we learn a soft mask and then apply a threshold. Choosing a threshold can be difficult, because we would need the right reason as a reference, and it is sometimes challenging to find the optimal threshold when the learned soft values are close to each other. The Gumbel-softmax is a continuous distribution on the simplex that can approximate categorical samples and whose parameter gradients can be computed via the reparameterization trick. The Gumbel-softmax based mask gives us an end-to-end framework for learning relational structures.
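As a toy illustration of the sigmoid-mask optimization, the sketch below assumes (purely for illustration) that the local approximator's class score is linear in the masked edges, so the gradient of the loss with respect to each mask parameter is available in closed form; in RelEx proper, the gradient would come from backpropagation through g. All names and defaults are ours.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def learn_sigmoid_mask(edge_scores, lam=0.1, lr=0.5, steps=200):
    """Each edge e gets a parameter theta_e with mask value
    sigmoid(theta_e).  As a stand-in for the local approximator g,
    the class score is assumed linear in the masked edges
    (score_e * mask_e); the loss is the negative score plus an L1
    sparseness penalty lam * sum(mask)."""
    theta = {e: 0.0 for e in edge_scores}
    for _ in range(steps):
        for e, s in edge_scores.items():
            m = sigmoid(theta[e])
            grad = (-s + lam) * m * (1.0 - m)  # d(loss)/d(theta_e)
            theta[e] -= lr * grad              # gradient-descent update
    return {e: sigmoid(t) for e, t in theta.items()}
```

Edges whose presence raises the class score above the sparseness penalty are driven toward mask value 1, while uninformative edges are driven toward 0, which is exactly the soft-importance behavior the sigmoid mask is meant to capture.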
Regularization We incorporate regularization measures in our objective to ensure that the learned mask remains sparse and more interpretable. We consider two regularization functions. First, we incorporate sparseness on the edges using the L1 norm. Second, we incorporate the L2,1 norm [he2016adaptive] on nodes and edges, given by Ω(M) = Σ_i ||M_i||_2, where M_i is the i-th row of the mask. The L2,1 norm is a group sparseness measure; here we treat each row of the masked adjacency matrix as one group, thereby pursuing sparseness on both edges and nodes.
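The two regularizers can be computed as follows; this is a straightforward sketch in which `mask` is a dense row-per-node matrix of soft mask values (the variable names are ours).

```python
import math

def l1_norm(mask):
    """Edge sparseness: sum of absolute mask entries."""
    return sum(abs(x) for row in mask for x in row)

def l21_norm(mask):
    """Group sparseness (L2,1): each row of the masked adjacency
    matrix is one group, so entire nodes are encouraged to drop
    out of the explanation together."""
    return sum(math.sqrt(sum(x * x for x in row)) for row in mask)
```

The L1 term zeroes out individual edges, whereas the L2,1 term drives whole rows (nodes) to zero at once, which is why combining them yields sparseness on both edges and nodes.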
Diverse Explanations Multiple explanations can exist for one prediction, but some are closer to the "right" reason [ross2017right]. We encourage diversity by learning different masks and maximizing the cross-entropy loss between any two masks using Equation 3,

argmin_M  L(y, g(A_c ⊙ M, X)) + Ω(M) − β Σ_{i=1}^{k−1} CE(M_i, M),      (3)

where M is the current (k-th) mask to be learned, M_1, ..., M_{k−1} are the previously learned masks, CE(·, ·) is the cross-entropy loss between two masks, and β is the weight of the cross-entropy loss.
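A sketch of the diversity term, treating masks as flat vectors of per-edge importances; the function names and the default β here are ours, for illustration only.

```python
import math

def mask_cross_entropy(m1, m2, eps=1e-8):
    """Element-wise binary cross-entropy between two soft masks."""
    return -sum(a * math.log(b + eps) + (1 - a) * math.log(1 - b + eps)
                for a, b in zip(m1, m2))

def diversity_loss(fit_loss, current, previous_masks, beta=0.1):
    """Sketch of Equation 3: the fit loss for the current mask minus a
    beta-weighted cross-entropy term against each earlier mask, so
    minimizing this objective pushes the new mask away from them."""
    return fit_loss - beta * sum(mask_cross_entropy(prev, current)
                                 for prev in previous_masks)
```

Because the cross-entropy term enters with a negative sign, a candidate mask that disagrees with the previously learned masks attains a lower objective value, which is what drives the explainer toward diverse explanations.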
Learning diverse explanations can increase users' trust in black-box models [mothilal2020explaining]. In our experiments in Section 5.4, we show that our approach learns diverse masks, at least one of which corresponds to the right reason.
4 Evaluation Methods
4.1 Comparison with State-of-the-art Explainers
We make appropriate modifications to Anchors [ribeiro2018anchors] and Saliency Map to adapt them to the relational setting.
Relational Anchors Anchor [ribeiro2018anchors] explanations are constructed by selecting the features of the instance we want to explain that maximize precision. Precision is calculated by perturbing all features except the anchor features, which are held constant; it is the proportion of the samples' labels that do not change under the perturbations. A high-precision anchor implies that the anchor features are the most important to the prediction, because perturbing the other features has little or no effect. Approximating precision is difficult, as it requires many expensive calls to the black box f; therefore, anchor construction is formulated as an instance of a pure-exploration multi-armed bandit. To adapt Anchors to our setting, we take the anchor features for relational explanations to be graph edges instead of node features. We define a precision threshold, which we vary based on how much the predictions of f vary under our perturbations. We calculate the precision of an anchor edge set A as the fraction of perturbed samples s for which f(s) equals the predicted class y we want to explain. Our perturbed samples contain all edges in A, and at most all edges in the computation graph of v. Samples are generated via breadth-first search, similarly to RelEx.
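The precision computation can be sketched as follows, assuming the black box is a callable over edge sets and each perturbed sample is augmented with the anchor edges before querying (names are ours):

```python
def anchor_precision(anchor_edges, samples, blackbox, target_class):
    """Precision of a candidate anchor: the fraction of perturbed
    samples (sets of edges, each combined with the anchor edges so
    the anchor is always present) on which the black box still
    predicts the target class."""
    hits = sum(1 for s in samples
               if blackbox(s | anchor_edges) == target_class)
    return hits / len(samples)
```

The bandit search would then grow the anchor edge set greedily, evaluating candidates with this estimator until the precision threshold is met.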
Saliency Map Since our approach involves learning an edge importance mask, we also compare it to Saliency Map, which is used in computer vision to learn the spatial support of a given target class in an image. To adapt Saliency Map to the relational setting, we first calculate the gradient of the black-box model's loss function with respect to the adjacency matrix, and then normalize the gradient values to lie between 0 and 1. We use these values as the learned explanation.

4.2 Relational Explanation Evaluation Metrics
Area Under the ROC Curve We report the area under the receiver operating characteristic curve (AUC-ROC), which captures the deviation of the explanation from the ground-truth right reasons. The AUC-ROC is calculated between the relational explanation and the ground-truth right-reason structure. In many cases, we know the right structural reasons associated with a prediction as prior domain knowledge [ross2017right]; for example, molecules and proteins have their own specific and identifiable structures. We can then evaluate our explanations by comparing them to these already known right reasons.

Infidelity Scores Since it is hard to isolate errors that stem from the underlying black-box model from errors that stem from the explainer, we also consider a quantitative measure known as infidelity [yeh2019fidelity],

INFD(Φ, f, A_c) = E_{I∼μ_I} [ (I^T Φ − (f(A_c, X) − f(A_c − I, X)))^2 ],

where I represents significant perturbations around the node v, μ_I gives the distribution of the perturbation I, A_c − I represents the perturbed adjacency graph, and Φ is our explanation. Infidelity measures the goodness of an explanation by quantifying the degree to which it captures how the predictor function itself changes in response to a significant perturbation.
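A Monte-Carlo sketch of the adapted infidelity measure, where a perturbation is represented simply as a set of edges removed from the computation graph; this simplification and all names are ours.

```python
def infidelity(blackbox, explanation, adj_edges, perturbations):
    """For each perturbation (a set of removed edges), compare the
    explanation mass assigned to those edges with the actual change
    in the black-box output, and average the squared difference."""
    total = 0.0
    base = blackbox(adj_edges)
    for removed in perturbations:
        expl_mass = sum(explanation.get(e, 0.0) for e in removed)
        drop = base - blackbox(adj_edges - removed)
        total += (expl_mass - drop) ** 2
    return total / len(perturbations)
```

An explanation whose edge importances exactly track the black box's output changes attains an infidelity of zero, so lower values are better.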
5 Experiments
5.1 Experiments on Synthetic Relational Datasets with GNNs as the Black-box Model
We construct two kinds of node classification datasets: i) TreeGrid, in which we use a binary tree with fixed height as the basic structure and then connect multiple grid structures to the tree by randomly adding noisy links between nodes in a grid (grid nodes) and nodes in the tree (tree nodes); and ii) TreeBA, in which we again use a binary tree as the basic structure (tree nodes) and then connect multiple Barabasi-Albert (BA) structures (BA nodes) to the tree by randomly adding noise in the form of links between BA nodes and tree nodes. The prediction problem involves predicting the correct class of each node from the node's neighborhood topological structure.
Table 1: Results on the TreeGrid dataset.

Explainer   | Saliency Map | Relational Anchors | GNNExplainer | RelEx (sigmoid) | RelEx (Gumbel)
AUC-ROC     | 0.4352       | 0.5069             | 0.5666       | 0.5470          | 0.5873
Infidelity  | 0.1199       | 0.1110             | 0.0885       | 0.0893          | 0.0884
Table 2: Results on the TreeBA dataset.

Explainer   | Saliency Map | Relational Anchors | GNNExplainer | RelEx (sigmoid) | RelEx (Gumbel)
AUC-ROC     | 0.1205       | 0.6871             | 0.8431       | 0.8261          | 0.8672
Infidelity  | 0.1317       | 0.0754             | 0.0782       | 0.0794          | 0.0735
We train a 3-layer GCN as the black box on each dataset individually. We show quantitative results on the TreeGrid dataset in Table 1 and on the TreeBA dataset in Table 2. Since both TreeGrid and TreeBA are synthetically generated, we know the ground-truth right-reason structure and use it to calculate the deviation of the learned relational explanation from the right reason. From the AUC-ROC and infidelity results, we can see that RelEx has the best performance on both measures. We also note that Saliency Map fails to perform as well as the other models, as it is the only method not specifically tailored to relational models. This further confirms that explainers designed for traditional iid models do not seamlessly transfer to relational models, and that we need explainers designed specifically for them.
5.2 Experiments on a Synthetic Relational Dataset with HL-MRFs as the Black-box Model
We construct a three-class graph dataset, TreeGridBA. We generate multiple tree, grid, and Barabasi-Albert motifs and randomly add noisy links among them to construct the graph. For the HL-MRF model, we design the collective first-order logic rules in Table 3. To train the PSL model, we randomly select half the nodes as observations, which are used as seed nodes.
Table 3: PSL collective rules.

Nodes: A, B, C, D; Target class: cat
w1 : HasCat(A, cat) ∧ Link(A, B) → HasCat(B, cat)
w2 : HasCat(A, cat) ∧ Link(A, B) ∧ Link(B, C) → HasCat(C, cat)
w3 : HasCat(A, cat) ∧ Link(A, B) ∧ Link(B, C) ∧ Link(C, D) → HasCat(D, cat)
Table 4: Results on the TreeGridBA dataset with HL-MRFs as the black-box model.

Explainer   | Relational Anchors | RelEx (sigmoid) | RelEx (Gumbel)
AUC-ROC     | 0.5221             | 0.7076          | 0.6284
Infidelity  | 0.0396             | 0.0310          | 0.0320
Since GNNExplainer and Saliency Map need access to gradients, they cannot be applied to black-box HL-MRF models. Quantitative results are shown in Table 4, where RelEx (sigmoid) obtains better results than RelEx (Gumbel): the HL-MRF model assigns different continuous importance values to the links around the node of interest, captured by the learned rule weights, and the soft sigmoid mask successfully learns a corresponding importance value for each link. This shows the competence of both our variants across two different types of relational models. Figure 2 shows example explanations for a tree node, a grid node, and a BA node, respectively. We observe that the qualitative results are consistent with the quantitative results, with RelEx (sigmoid) obtaining relational explanations closer to the actual right reason. We can also see that RelEx is able to glean the core topological structure that explains the prediction.
5.3 Experiments on a Molecule Dataset with GNNs as the Black-box Model
To demonstrate the applicability of our approach on a real-world dataset, we conduct experiments on MUTAG [doi:10.1021/jm00106a046], a well-known benchmark graph classification dataset. It consists of 188 mutagenic aromatic and heteroaromatic nitro compounds with 7 different kinds of atoms, including carbon, nitrogen, and oxygen. We have prior domain knowledge that carbon atoms form ring structures, which represent mutagenic aromatics in chemistry; nitrogen and oxygen atoms combine to form the NO2 (nitro) group; and nitrogen atoms can also appear in pentagonal or hexagonal structures with carbon atoms.
Table 5: Infidelity results on the MUTAG dataset.

Explainer   | Saliency Map | Relational Anchors | GNNExplainer | RelEx (sigmoid) | RelEx (Gumbel)
Infidelity  | 0.05879      | 0.06008            | 0.05557      | 0.05659         | 0.05573
Table 5 shows the comparison on infidelity, where RelEx and GNNExplainer obtain similar best results. We demonstrate the qualitative performance of the models in Figures 3 and 4; in all figures, yellow nodes are the nodes of interest. To plot explanations with soft importance values without having to choose an optimal threshold, we capture edge importance using the color of the edge, where a darker color signifies higher importance; explanations from GNNExplainer and the sigmoid variant of RelEx are plotted this way, as they learn soft importance values for the edges in the relational explanation. In Figure 3, we observe that the explanation for a carbon node learned by one RelEx variant finds the correct hexagonal ring structure, while the other learns an explanation containing two connected hexagonal rings; both capture the core relational structure (the hexagonal ring) corresponding to the carbon node. Figure 4 shows explanations for a nitrogen node; all explainers except Relational Anchors are able to identify the correct NO2 topological structure.
5.4 Diverse Explanations on Molecule Dataset
We train diverse explanations for each node of interest. Figure 5 gives two example explanations learned by the RelEx-based explainer, where yellow nodes are the nodes of interest. Figure 5(a) shows the molecule, and Figures 5(b) and 5(c) give two diverse explanations for the same node. In Figure 5(a), we see that the node of interest is part of two ring structures, a pentagon and a hexagon. The first explanation learns the pentagon ring structure, while the second, diverse explanation finds both ring structures. Though both are correct, the second explanation is more meaningful from the domain perspective, as it gleans both core relational structures that the node is part of. Similarly, in Figure 5(d), even though both explanations learn the core hexagonal structure responsible for the prediction, the first explanation in Figure 5(e) contains some noise, while the second, diverse explanation in Figure 5(f) excludes the noise and is preferable. Thus, the ability of our approach to learn diverse explanations comes in handy for learning multiple "right" explanations, among which some make more sense from a domain perspective.
6 Conclusion
In this work, we developed a model-agnostic relational explainer, RelEx, which has the ability to explain any black-box relational model. Through rigorous experimentation and comparison with state-of-the-art explainers, we demonstrated the quantitative and qualitative capability of RelEx in explaining two different black-box relational models, GNNs, representing deep graph neural network models, and HL-MRFs, representing statistical relational models, on two synthetic and one real-world graph datasets. The ability of RelEx to learn diverse explanations further enhances its practical value and applicability in explaining domain-specific predictions.
References
 (1) Getoor, L., B. Taskar, eds. Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning). 2007.
 (2) Scarselli, F., M. Gori, A. C. Tsoi, et al. The graph neural network model. IEEE Transactions on Neural Networks, pages 61–80, 2008.
 (3) Goodfellow, I. J., J. Shlens, C. Szegedy. Explaining and harnessing adversarial examples. arXiv, 2014.
 (4) Koh, P. W., P. Liang. Understanding black-box predictions via influence functions. In ICML. 2017.
 (5) Yeh, C.K., J. Kim, I. E.H. Yen, et al. Representer point selection for explaining deep neural networks. In NeurIPS. 2018.
 (6) Sundararajan, M., A. Taly, Q. Yan. Axiomatic attribution for deep networks. In ICML. 2017.
 (7) Smilkov, D., N. Thorat, B. Kim, et al. Smoothgrad: removing noise by adding noise. arXiv, 2017.
 (8) Ribeiro, M. T., S. Singh, C. Guestrin. Why should i trust you?: Explaining the predictions of any classifier. In SIGKDD. 2016.

 (9) Ribeiro, M. T., S. Singh, C. Guestrin. Anchors: High-precision model-agnostic explanations. In AAAI. 2018.
 (10) Ying, R., D. Bourgeois, J. You, et al. GNN Explainer: A tool for post-hoc explanation of graph neural networks. In NeurIPS. 2019.
 (11) Ross, A. S., M. C. Hughes, F. Doshi-Velez. Right for the right reasons: Training differentiable models by constraining their explanations. arXiv, 2017.
 (12) Bach, S. H., M. Broecheler, B. Huang, et al. Hinge-loss Markov random fields and probabilistic soft logic. JMLR, pages 3846–3912, 2017.
 (13) Kipf, T. N., M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR. 2017.
 (14) Xu, K., W. Hu, J. Leskovec, et al. How powerful are graph neural networks? In ICLR. 2019.
 (15) Veličković, P., G. Cucurull, A. Casanova, et al. Graph Attention Networks. ICLR, 2018.
 (16) Wu, F., A. Souza, T. Zhang, et al. Simplifying graph convolutional networks. In ICML. 2019.
 (17) Ma, J., P. Cui, K. Kuang, et al. Disentangled graph convolutional networks. In ICML. 2019.

 (18) Fey, M., J. E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds. 2019.
 (19) He, J., Y. Zhang, Y. Zhou, et al. Adaptive stochastic gradient descent on the Grassmannian for robust low-rank subspace recovery. IET Signal Processing, pages 1000–1008, 2016.
 (20) Zhang, Y., A. Ramesh. Learning interpretable relational structures of hinge-loss Markov random fields. In Proceedings of the 28th International Joint Conference on Artificial Intelligence. AAAI Press, 2019.
 (21) Dehmamy, N., A.L. Barabási, R. Yu. Understanding the representation power of graph neural networks in learning graph topology. In NeurIPS. 2019.
 (23) Jang, E., S. Gu, B. Poole. Categorical reparameterization with Gumbel-softmax. arXiv, 2016.
 (23) Mothilal, R. K., A. Sharma, C. Tan. Explaining machine learning classifiers through diverse counterfactual explanations. In FAT. 2020.
 (24) Yeh, C.K., C.Y. Hsieh, A. Suggala, et al. On the (in) fidelity and sensitivity of explanations. In NeurIPS. 2019.
 (25) Debnath, A. K., R. L. Lopez de Compadre, G. Debnath, et al. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, pages 786–797, 1991.