Explaining Deep Graph Networks with Molecular Counterfactuals

11/09/2020 · Danilo Numeroso et al., University of Pisa

We present a novel approach, named MEG (Molecular Explanation Generator), to tackle the explainability of deep graph networks in the context of molecule property prediction tasks. We generate informative counterfactual explanations for a specific prediction in the form of (valid) compounds with high structural similarity and different predicted properties. We discuss preliminary results showing how the model can provide non-ML experts with key insights into what the learning model focuses on in the neighborhood of a molecule.


1 Introduction

The prediction of functional and structural properties of molecules by machine learning models for graphs is a research field with long-standing roots Micheli et al. (2007). Much of the current research on the topic relies on Deep Graph Networks (DGNs) Zhou et al. (2018a); Bacciu et al. (2020a), as they provide a flexible and scalable means to learn effective vectorial representations of molecules. This has resulted in a line of works targeting increasing levels of effectiveness, breadth and performance in the prediction of chemo-physical properties Gilmer et al. (2017). The scarce intelligibility of such models and of the internal representations they develop can, however, act as a show-stopper for their consolidation, e.g. in the prediction of safety-critical molecule properties, especially when considering well-known issues of opacity in DGN assessment Errica et al. (2020). In this respect, attention is building towards the development of interpretability techniques specifically tailored to DGNs. While some DGNs show potential for interpretability by design thanks to their probabilistic formulation Bacciu et al. (2020b), the majority of works in the literature take a neural-based approach, which requires the use of an external model explainer. GNNExplainer Ying et al. (2019) is the front-runner of the model-agnostic methods providing local explanations to neural DGNs in terms of the sub-graph and node features of the input structure which maximally contribute to the prediction. RelEx Zhang et al. (2020) extends GNNExplainer to remove the need to access the model gradient when learning explanations. GraphLIME Huang et al. (2020) attempts to create locally interpretable models for node-level predictions, although its application is limited to single-network data. This paper fits into this pioneering field of research by taking a novel angle to the problem, targeting the generation of interpretable insights primarily for use by experts of the molecular domain. We build our approach upon the assumption that a domain expert would be interested in understanding the model prediction for a specific molecule based on differential case-based reasoning against counterfactuals, i.e. similar structures which the model considers radically different with respect to the predicted property. Such counterfactual molecules should allow the expert to understand whether the structure-to-function mapping learned by the model is coherent with consolidated domain knowledge, at least within a tight neighborhood of the molecule under study. We tackle the problem of counterfactual molecule generation by introducing an explanatory agent based on reinforcement learning (RL) Sutton and Barto (1998). This explanatory agent has access to the internal representation of the property-prediction model as well as to its output, and uses this information to guide the exploration of the molecular structure space in search of the nearest counterfactuals. Our approach is specifically designed for molecular applications, and the RL agent leverages domain knowledge to constrain the generated explanations to be valid molecules. We test our explainer on DGNs tackling the prediction of toxicity (classification task) and solubility (regression task) of chemical compounds.

2 Molecular Explanation Generator (MEG)

Figure 1: The DGN is a trained molecule property predictor, whereas the Explainer is a generative agent producing counterfactuals, constrained by prior domain knowledge.

The overall architecture of our explanation framework, named MEG, is depicted in Figure 1. Here we denote with $f : \mathcal{X} \to \mathcal{Y}$ a DGN that is fit to solve a molecular property prediction task, where $\mathcal{X}$ represents the space of (labelled) molecule structures and $\mathcal{Y}$ is the task-dependent output space. The Explainer is an RL agent implementing a generative function $g$ targeting the generation of counterfactual explanations. Molecular counterfactuals ought to satisfy three properties: (i) they need to resemble the molecule under study; (ii) the properties predicted for a counterfactual must differ substantially from those predicted for the input molecule; (iii) molecular counterfactuals need to comply with chemical constraints. To this end, the agent receives information about an input molecule $m$ and its associated prediction score $f(m)$, and generates a molecular counterfactual $m'$, leveraging prior domain knowledge to ensure the validity of the generated sample. Counterfactual generation is formalised as a maximisation problem in which, given a target molecule $m$ with prediction $f(m)$, the generator is trained to optimize:

$$\max_{g} \;\; \lambda\, d\big(f(m),\, (f \circ g)(m)\big) \;+\; (1-\lambda)\, s\big(m,\, g(m)\big) \qquad (1)$$

The composition $f \circ g$ formalizes the model counter-predictions, made over the counterfactuals produced by $g$. Given the counterfactual $m' = g(m)$, we rewrite Equation 1 as

$$\max_{m'} \;\; \lambda\, d\big(f(m),\, f(m')\big) \;+\; (1-\lambda)\, s(m,\, m') \qquad (2)$$

where $d\big(f(m), f(m')\big)$ is a measure of prediction disagreement between the molecule $m$ and its counterfactual $m'$, while $s(m, m')$ measures their similarity. In our framework, $m$ is used to bootstrap the generative process, in which $g$ operates on the current candidate counterfactual with graph editing operations under domain knowledge constraints. Given the non-differentiable nature of these graph alterations, we model $g$ through a multi-objective RL problem Liu et al. (2015), which takes the form of a Markov Decision Process. Apart from the well-known differentiability issues of graph operations, the generator is modeled as an RL agent because this eases the modelling and handling of multi-objective optimization, allowing to easily steer $g$ towards the generation of counterfactuals optimizing several properties at a time Sanchez-Lengeling et al. (2017); Popova et al. (2018). Since we are interested in generating counterfactuals that are compliant with chemical knowledge, the action space is restricted so as to only retain actions that preserve the chemical validity of the molecule. To this end, we base the implementation of our agent on MolDQN Zhou et al. (2018b), an RL-based approach to molecule graph generation leveraging double Q-learning van Hasselt et al. (2015). At each step, the reward function exploits the prediction from $f$ to inform the agent of its current performance, emitting a scalar reward. In our design, the reward binds together a term regulating the change in prediction scores, which is inherently task-dependent, with a second term controlling similarity between the original molecule and its counterfactual, as presented in Equation 1. Currently, we have explored two formulations for the similarity term. The first leverages the Tanimoto similarity over Morgan fingerprints Rogers and Hahn (2010). The second is a model-dependent metric exploiting the encoding of molecules in the DGN internal representation; an advantage of this approach is that it takes into account the model's own perception of structural similarity between molecules.
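For concreteness, the sketch below computes both similarity formulations in Python with RDKit: Tanimoto similarity over Morgan fingerprints, and cosine similarity over the DGN's neural encodings. The fingerprint radius and bit-vector size, as well as the function names, are illustrative assumptions rather than a restatement of our actual configuration.

```python
# Sketch of the two similarity formulations (radius/nBits are assumed values).
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto_similarity(smiles_a: str, smiles_b: str,
                        radius: int = 2, n_bits: int = 2048) -> float:
    """Tanimoto similarity over Morgan fingerprints (Rogers and Hahn, 2010).
    Assumes both SMILES strings parse into valid molecules."""
    fp_a = AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smiles_a), radius, nBits=n_bits)
    fp_b = AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smiles_b), radius, nBits=n_bits)
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

def encoding_similarity(z_a: np.ndarray, z_b: np.ndarray) -> float:
    """Cosine similarity between the DGN's internal encodings of two molecules,
    i.e. the model-dependent metric discussed above."""
    return float(np.dot(z_a, z_b) / (np.linalg.norm(z_a) * np.linalg.norm(z_b)))
```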

The leftmost term in Equation 2 can be specialized for classification and regression tasks. As regards classification, given a set of classes $C$, the model $f$ emits a probability distribution $p_f(\cdot \mid m)$ over the predicted classes. In this case, given an input-prediction pair $(m, f(m))$ whose predicted class is $c$, the generator is trained to produce counterfactual explanations minimising the prediction score for class $c$, as follows:

$$\max_{m'} \;\; \lambda\,\big(1 - p_f(c \mid m')\big) \;+\; (1-\lambda)\, s(m,\, m') \qquad (3)$$

where $\lambda$ is a hyper-parameter weighing the two parts. Hence, the model returns at each step a smooth reward, which is the complement of the probability of $m'$ belonging to class $c$. Differently, for a regression task, the objective function can be defined as:

$$\max_{m'} \;\; \lambda\,\operatorname{sign}\big(f(m') - y\big)\,\big(f(m') - f(m)\big) \;+\; (1-\lambda)\, s(m,\, m') \qquad (4)$$

where $\operatorname{sign}$ is the sign function, $y$ is the regression target, and $f(m)$ and $f(m')$ are the predicted values for the original molecule and its counterfactual, respectively. The sign function is needed to prevent the agent from generating molecules whose predicted scores move back towards the original target, by providing negative rewards in that case.
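A minimal sketch of the two reward formulations follows, written as plain Python functions; the convex combination weighted by $\lambda$ mirrors Equations 3 and 4 as stated above, while argument names and the default $\lambda$ are illustrative assumptions.

```python
# Sketch of the task-specific rewards (lam default is an assumed value).
import numpy as np

def classification_reward(p_c_prime: float, sim: float, lam: float = 0.5) -> float:
    """Eq. 3 (sketch): reward grows as the counterfactual's probability of the
    originally predicted class c shrinks, traded off against similarity."""
    return lam * (1.0 - p_c_prime) + (1.0 - lam) * sim

def regression_reward(y: float, f_m: float, f_m_prime: float,
                      sim: float, lam: float = 0.5) -> float:
    """Eq. 4 (sketch): the sign term makes the first addend negative whenever
    the counterfactual prediction moves back towards the regression target y,
    penalising the agent for reducing the prediction disagreement."""
    return lam * np.sign(f_m_prime - y) * (f_m_prime - f_m) + (1.0 - lam) * sim
```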

The main use of counterfactual explanations is to provide insights into the function learned by the model $f$. In this sense, a set of counterfactuals for a molecule may be used to: (i) identify changes to the molecular structure leading to substantial changes in the predicted properties, enabling domain experts to discriminate whether the model predictions are well founded; (ii) validate existing interpretability approaches, by running them on both the original input graph and its related counterfactual explanations. The main idea behind this latter point is that a local interpretation method may provide explanations that work well within a very narrow range of the input, but give little indication of the model's wider behaviour. To show the usefulness of such a differential analysis, in the following section we use our counterfactuals to assess the quality of the explanations given by GNNExplainer Ying et al. (2019). Given the undirected nature of the graphs in our molecular application, we restrict the original GNNExplainer model to discard the effect of edge orientation on the explanation.
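A sketch of how such a differential check can be run is shown below, assuming a PyTorch Geometric version that exposes torch_geometric.nn.GNNExplainer with an explain_graph method; the helper names are ours, and the symmetrisation step is one way to discard edge orientation, averaging the mask of each edge with that of its reverse.

```python
# Sketch of the differential sanity check on GNNExplainer (API availability
# depends on the PyTorch Geometric version; helper names are illustrative).
import torch
from torch_geometric.nn import GNNExplainer

def symmetrise(edge_index: torch.Tensor, edge_mask: torch.Tensor) -> torch.Tensor:
    """Discard edge orientation: (i, j) and (j, i) receive the mean of their masks."""
    position = {(int(s), int(t)): k for k, (s, t) in enumerate(edge_index.t())}
    sym = edge_mask.clone()
    for k, (s, t) in enumerate(edge_index.t()):
        rev = position.get((int(t), int(s)))
        if rev is not None:
            sym[k] = (edge_mask[k] + edge_mask[rev]) / 2
    return sym

def explain_pair(model, mol, counterfactual, epochs: int = 200):
    """Explain a molecule and one of its counterfactuals, returning the two
    (symmetrised) edge masks for side-by-side comparison."""
    explainer = GNNExplainer(model, epochs=epochs)
    masks = []
    for graph in (mol, counterfactual):
        batch = torch.zeros(graph.num_nodes, dtype=torch.long)  # single-graph batch
        _, edge_mask = explainer.explain_graph(graph.x, graph.edge_index, batch=batch)
        masks.append(symmetrise(graph.edge_index, edge_mask))
    return masks
```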

3 Experimental Evaluation

Figure 2: Tox21 sample (A0) and its counterfactuals (A1–A3); quantitative results are reported in Table 1.
Figure 3: ESOL sample (B0) alongside its counterfactuals (B1–B3); quantitative results are reported in Table 1.

We discuss a preliminary assessment of our explanations on two popular molecular property prediction benchmarks: Tox21 Kersting et al. (2016), addressing toxicity prediction as a binary classification task, and ESOL Wu et al. (2017), a regression task on the water solubility of chemical compounds. As a preliminary step, we scanned both datasets to filter out invalid chemical compounds, considering structures to be valid molecules only if they pass the RDKit Landrum et al. (2006) sanitization check. After filtering, Tox21 comprises 1900 samples, equally distributed among the two classes, while ESOL includes 1129 compounds.
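A minimal sketch of this validity filter is given below, assuming SMILES-encoded inputs; the actual preprocessing script may differ, but the check reduces to RDKit parsing plus sanitization.

```python
# Sketch of the dataset validity filter (assumes SMILES-encoded compounds).
from rdkit import Chem

def is_valid_molecule(smiles: str) -> bool:
    """A compound is kept only if RDKit can parse and sanitise it."""
    mol = Chem.MolFromSmiles(smiles, sanitize=False)
    if mol is None:
        return False
    try:
        Chem.SanitizeMol(mol)  # raises on chemically inconsistent structures
        return True
    except Exception:
        return False
```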

The trained DGN comprises three GraphConv Morris et al. (2018) layers with ReLU activations and a fixed, task-specific hidden size per layer for Tox21 and ESOL. The network builds a layer-wise molecular representation via concatenation of max and mean pooling operations over the set of node representations. The final neural encoding of the molecule is obtained by sum-pooling of the intermediate representations. This neural encoding is then fed to a three-layer feed-forward network, with hidden sizes of [128, 64, 32], to perform the final property prediction step. The trained DGNs achieved 87% accuracy and 0.52 MSE over the Tox21 and ESOL test sets, respectively. All experiments have been performed using the Adam optimiser. During generation, we employed MEG to find the best counterfactual explanations for each test molecule, ranked according to the multi-objective score of Section 2. Ideally, we would like to observe counterfactual molecules that are structurally similar to the original compound while leading to a substantially different prediction. Due to stringent page constraints, in the following we report two example explanation cases (one for each dataset). Further examples and results are available in the appendix.
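For concreteness, a minimal PyTorch Geometric sketch of such a predictor is reported below; the per-layer hidden size is task-dependent and left as a parameter, and the exact layout of the output head (a final linear layer after the three hidden sizes) is an illustrative assumption.

```python
# Sketch of the property-prediction DGN described above (hidden size assumed).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GraphConv, global_max_pool, global_mean_pool

class PropertyDGN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int, out_dim: int):
        super().__init__()
        self.convs = torch.nn.ModuleList([
            GraphConv(in_dim, hidden),
            GraphConv(hidden, hidden),
            GraphConv(hidden, hidden),
        ])
        # Three-layer feed-forward head with hidden sizes [128, 64, 32].
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * hidden, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 32), torch.nn.ReLU(),
            torch.nn.Linear(32, out_dim),
        )

    def forward(self, x, edge_index, batch):
        # Layer-wise readout: concatenate max and mean pooling of the node
        # representations, then sum-pool the intermediate representations
        # into the final neural encoding of the molecule.
        encoding = 0
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
            encoding = encoding + torch.cat(
                [global_max_pool(x, batch), global_mean_pool(x, batch)], dim=-1)
        return self.mlp(encoding)
```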

Molecule Target Prediction Similarity Reward
A0: Figure 2 NoTox NoTox (0.70) - -
A1: Figure 2 - Tox (0.90) 0.76 0.80
A2: Figure 2 - Tox (0.83) 0.79 0.72
A3: Figure 2 - Tox (0.80) 0.68 0.66
B0: Figure 3 -4.28 -4.01 - -
B1: Figure 3 - -6.11 0.29 1.14
B2: Figure 3 - -5.93 0.31 1.11
B3: Figure 3 - -5.07 0.28 0.66
Table 1: Summary of preliminary results. A0 and B0 refer to molecules belonging to Tox21 and ESOL, respectively. Subsequent indices refer to the related counterfactual explanations.

We present quantitative results in Table 1, listing the three best counterfactual explanations collected for each task. We tested two similarity metrics: cosine similarity over the neural encodings for Tox21, and the Tanimoto similarity for ESOL. Qualitative results are shown in Figure 2 and Figure 3. To ease the interpretation of our results, counterfactual modifications have been highlighted in red, while blurred edges represent those edges that have been masked out by GNNExplainer predictions. In other words, GNNExplainer interpretations are the sub-graphs formed by non-blurred edges.

As for the Tox21 sample, we evaluate MEG against a test molecule (A0) that has been correctly classified by the DGN as non-toxic, outputting the counterfactuals A1-3 (i.e. molecules which the model considers toxic). We can see that the addition of a carbon atom may alter the DGN prediction, as shown by A1 and A2: while A0 is classified correctly with 70% certainty, A1-2 are predicted as toxic, with certainty of 90% and 83%, respectively. Differently, A3 breaks the left-side ring and achieves the lowest neural encoding similarity score among the three, giving clues about potential substructure-awareness. Furthermore, in Figure 2 we show how counterfactuals may help to detect inconsistencies in GNNExplainer predictions. Although GNNExplainer identifies the substructure CC(N)O as the explanation for the original sample A0, MEG counterfactuals prioritize changes to different molecule fragments. These inconsistencies suggest that the GNNExplainer interpretation is too narrowly targeted at the input molecule (A0) and does not generalize even to minor modifications of the input graph.

We now turn our attention to the ESOL results (B0-3) shown in Table 1. B0 is an organic compound named pentachlorophenol, commonly used as a pesticide or disinfectant, and characterized by nearly absolute insolubility in water. While the DGN achieved good predictive performance for its aqueous solubility value, the counterfactuals show that the predicted solubility decreases when the oxygen atom is removed (e.g., B2) or otherwise modified (e.g., B1, B3), highlighting how relevant that atom is for the DGN prediction. As in the Tox21 sample, such a relation is not adequately captured by the GNNExplainer explanation for B0. It is our hope that, based on our interpretability approach, an expert of the molecular domain can gain better insight into whether the properties and patterns captured by the predictive model are meaningful from a chemical standpoint.

4 Conclusions

We have presented MEG, a novel interpretability framework that tackles explainability in the chemical domain through the generation of molecular counterfactual explanations. MEG can work with any DGN model, as we only exploit input-output properties of such models. As a general comment on the preliminary results, one can note that while a local approach such as GNNExplainer may give good approximations when it comes to explaining a specific prediction, it lacks sufficient breadth to characterize the model behaviour even in the near vicinity of the sample under consideration. Our counterfactual interpretation approach, on the other hand, can find new samples which are likely to highlight the causes of a given model prediction, providing a better approximation to a locally interpretable model, e.g. B1-3 in Figure 3. In conclusion, apart from its value in generating explanations that are well understood by a domain expert, MEG proposes itself both as a sanity checker for other local model explainers and as a sampling method to strengthen the coverage and validity of local interpretable explanations, such as in the original LIME method for vectorial data Ribeiro et al. (2016).

References

  • [1] D. Bacciu, F. Errica, A. Micheli, and M. Podda (2020) A gentle introduction to deep learning for graphs. Neural Networks 129, pp. 203–221.
  • [2] D. Bacciu, F. Errica, and A. Micheli (2020) Probabilistic learning on graphs via contextual architectures. Journal of Machine Learning Research 21 (134), pp. 1–39.
  • [3] F. Errica, M. Podda, D. Bacciu, and A. Micheli (2020) A fair comparison of graph neural networks for graph classification. In International Conference on Learning Representations.
  • [4] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 70, pp. 1263–1272.
  • [5] Q. Huang, M. Yamada, Y. Tian, D. Singh, D. Yin, and Y. Chang (2020) GraphLIME: local interpretable model explanations for graph neural networks. arXiv:2001.06216.
  • [6] K. Kersting, N. M. Kriege, C. Morris, P. Mutzel, and M. Neumann (2016) Benchmark data sets for graph kernels.
  • [7] G. Landrum et al. (2006) RDKit: open-source cheminformatics.
  • [8] C. Liu, X. Xu, and D. Hu (2015) Multiobjective reinforcement learning: a comprehensive overview. IEEE Transactions on Systems, Man, and Cybernetics: Systems 45 (3), pp. 385–398.
  • [9] A. Micheli, A. Sperduti, and A. Starita (2007) An introduction to recursive neural networks and kernel methods for cheminformatics. Current Pharmaceutical Design 13 (8).
  • [10] C. Morris, M. Ritzert, M. Fey, W. L. Hamilton, J. E. Lenssen, G. Rattan, and M. Grohe (2018) Weisfeiler and Leman go neural: higher-order graph neural networks. arXiv:1810.02244.
  • [11] M. Popova, O. Isayev, and A. Tropsha (2018) Deep reinforcement learning for de novo drug design. Science Advances 4 (7), eaap7885.
  • [12] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) "Why should I trust you?": explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144.
  • [13] D. Rogers and M. Hahn (2010) Extended-connectivity fingerprints. Journal of Chemical Information and Modeling 50 (5), pp. 742–754.
  • [14] B. Sanchez-Lengeling, C. Outeiral, G. L. Guimaraes, and A. Aspuru-Guzik (2017) Optimizing distributions over molecular space: an objective-reinforced generative adversarial network for inverse-design chemistry (ORGANIC). ChemRxiv.
  • [15] R. S. Sutton and A. G. Barto (1998) Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
  • [16] H. van Hasselt, A. Guez, and D. Silver (2015) Deep reinforcement learning with double Q-learning. arXiv:1509.06461.
  • [17] Z. Wu, B. Ramsundar, E. N. Feinberg, J. Gomes, C. Geniesse, A. S. Pappu, K. Leswing, and V. Pande (2017) MoleculeNet: a benchmark for molecular machine learning. arXiv:1703.00564.
  • [18] R. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec (2019) GNNExplainer: generating explanations for graph neural networks. arXiv:1903.03894.
  • [19] Y. Zhang, D. Defazio, and A. Ramesh (2020) RelEx: a model-agnostic relational model explainer. arXiv:2006.00305.
  • [20] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun (2018) Graph neural networks: a review of methods and applications. arXiv:1812.08434.
  • [21] Z. Zhou, S. Kearnes, L. Li, R. N. Zare, and P. Riley (2018) Optimization of molecules via deep reinforcement learning. arXiv:1810.08678.

Appendix A Additional Results

Molecule Target Prediction Similarity Reward
C0: Figure 5 -4.755 -4.5195 - 1.57
C1: Figure 5 - -2.6488 0.39 1.33
C2: Figure 5 - -3.0170 0.65 1.12
D0: Figure 6 NoTox NoTox (0.71) - -
D1: Figure 6 - Tox (0.86) 0.90 0.78
D2: Figure 6 - Tox (0.80) 0.91 0.73
E0: Figure 4 Tox Tox (0.78) - -
E1: Figure 4 - NoTox (0.94) 0.69 0.86
E2: Figure 4 - NoTox (0.84) 0.89 0.73
Table 2: Summary of other preliminary results.

Table 2 provides experimental results for three compounds, one of which belongs to ESOL (C0-2) and two to Tox21 (D0-2, E0-2). Visual feedback is shown in Figures 4–6. As before, sharpness of graph edges indicates GNNExplainer explanations, while counterfactual modifications are colored in red.

We seek counterfactuals for an ESOL test compound whose predicted solubility is close to the actual target. In this case, the sulphur atom seems to have a negative impact on the predicted aqueous solubility: C1 increases the compound's predicted solubility precisely by removing the sulphur atom. In nature, a molecule of sulphur (i.e., S8 in SMILES encoding) is known to be insoluble, so such an analysis can provide preliminary hints about how the trained model may have learned this characteristic. Similarly, C2 adds an oxygen atom, causing the predicted water solubility to increase.

Figure 4: Tox21 (E0–E2). E1 modifies the cyclohexane ring, which was not considered important in the explanation provided by GNNExplainer for the original molecule E0. E2 breaks the bond highlighted in red.

Another significant example comprises D0-2. D0 is correctly classified as a non-toxic compound; however, a simple addition of nitrogen changes the prediction completely, resulting in D1 and D2 being classified as toxic with certainty of 86% and 80%, respectively. Furthermore, sanity checks on the GNNExplainer explanation for D0 emphasize that D2 modifies a blurred explanation fragment (i.e., neither the carbon atom attached to the nitrogen atom nor its incident bonds were considered important in D0). More interestingly, E0-2 present a potentially dangerous situation: starting from a toxic compound (E0), E1 manages to be recognized as non-toxic by the simple addition of a carbon atom, and so does E2 by breaking one of the rings, as shown in Figure 4. In this case, our counterfactuals prove most useful, as they highlight weaknesses of the model under consideration that are crucial in real-world applications.

Figure 5: ESOL (C0–C2). C1 removes the sulphur atom; C2, instead, adds a new oxygen atom and connects it to the molecule through a double bond.
Figure 6: Tox21 (D0–D2). The agent adds nitrogen atoms to the rightmost ring.