MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks

by Danilo Numeroso, et al.

Explainable AI (XAI) is a research area whose objective is to increase the trustworthiness of opaque machine learning techniques and to shed light on their hidden mechanisms. This becomes increasingly important when such models are applied to the chemistry domain, given their potential impact on human health, e.g., toxicity analysis in pharmacology. In this paper, we present MEG (Molecular Explanation Generator), a novel approach to the explainability of deep graph networks in the context of molecule property prediction tasks. We generate informative counterfactual explanations for a specific prediction in the form of (valid) compounds with high structural similarity to the input molecule but different predicted properties. Given a trained DGN, we train a reinforcement learning based generator to output counterfactual explanations. At each step, MEG feeds the current candidate counterfactual into the DGN, collects the prediction, and uses it to reward the RL agent and guide the exploration. Furthermore, we restrict the agent's action space so as to keep only actions that maintain the molecule in a valid state. We discuss results showing how the model can provide non-ML experts with key insights into what the learned model focuses on in the neighbourhood of a molecule.
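The loop described above (feed a candidate to the DGN, score it, reward moves that flip the prediction while staying structurally close and valid) can be sketched in miniature. The snippet below is a hedged illustration, not the paper's implementation: `dgn_predict`, `valid_actions`, and `similarity` are hypothetical stand-ins for the trained DGN, the validity-restricted action space, and a structural similarity measure, and the greedy step stands in for the RL policy update.

```python
# Toy sketch of a MEG-style counterfactual search step (assumptions:
# a molecule is encoded as a list of integers; all names below are
# illustrative stand-ins, not the paper's actual components).

def dgn_predict(mol):
    # Stand-in for the trained deep graph network's predicted class.
    return sum(mol) % 2

def valid_actions(mol):
    # Restricted action space: only moves that keep the molecule
    # "valid" (toy version: append one of a fixed set of atoms).
    return [mol + [a] for a in (1, 2, 3)]

def similarity(a, b):
    # Toy structural similarity: fraction of matching positions
    # (a real system would use e.g. a fingerprint similarity).
    n = min(len(a), len(b))
    shared = sum(1 for i in range(n) if a[i] == b[i])
    return shared / max(len(a), len(b))

def meg_step(original, candidate, target_class):
    # Reward combines (i) whether the DGN's prediction flips to the
    # target class and (ii) closeness to the original molecule,
    # mirroring the counterfactual objective in the abstract.
    best, best_reward = candidate, float("-inf")
    for nxt in valid_actions(candidate):
        pred = dgn_predict(nxt)
        reward = (1.0 if pred == target_class else 0.0) + similarity(original, nxt)
        if reward > best_reward:
            best, best_reward = nxt, reward
    return best, best_reward
```

In the actual method, the greedy argmax above is replaced by a learned RL policy that is rewarded with the DGN's output at each step, but the structure of the loop is the same: propose a valid edit, query the model, score the candidate.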






