XInsight: Revealing Model Insights for GNNs with Flow-based Explanations

06/07/2023
by   Eli Laird, et al.

Progress in graph neural networks (GNNs) has grown rapidly in recent years, with many new developments in drug discovery, medical diagnosis, and recommender systems. While this progress is significant, many networks are `black boxes', offering little understanding of what exactly the network is learning. Many high-stakes applications, such as drug discovery, require human-intelligible explanations from the models so that users can recognize errors and discover new knowledge. The development of explainable AI algorithms is therefore essential for us to reap the benefits of AI. We propose an explainability algorithm for GNNs called eXplainable Insight (XInsight) that generates a distribution of model explanations using GFlowNets. Since GFlowNets generate objects with probabilities proportional to a reward, XInsight can generate a diverse set of explanations, unlike previous methods that learn only the maximum-reward sample. We demonstrate XInsight by generating explanations for GNNs trained on two graph classification tasks: classifying mutagenic compounds with the MUTAG dataset and classifying acyclic graphs with a synthetic dataset that we have open-sourced. We show the utility of XInsight's explanations by analyzing the generated compounds using QSAR modeling, and we find that XInsight generates compounds that cluster by lipophilicity, a known correlate of mutagenicity. Our results show that XInsight generates a distribution of explanations that uncovers the underlying relationships demonstrated by the model. They also highlight the importance of generating a diverse set of explanations, as doing so enables us to discover hidden relationships in the model and provides valuable guidance for further analysis.
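The key property the abstract relies on is that a GFlowNet samples objects with probability proportional to their reward, rather than returning only the single highest-reward object. A minimal sketch of that sampling objective (not the actual XInsight implementation; the candidate names and reward values are hypothetical stand-ins for candidate explanation graphs and a trained GNN's class score):

```python
import random

# Hypothetical reward table: how strongly the explained GNN associates
# each candidate explanation graph with the target class.
REWARDS = {"ring": 5.0, "chain": 3.0, "star": 1.5, "tree": 0.5}

def sample_proportional(rewards, rng):
    """Sample one candidate with probability R(x) / sum_x' R(x'),
    which is the distribution a trained GFlowNet targets."""
    total = sum(rewards.values())
    r = rng.random() * total
    for candidate, reward in rewards.items():
        r -= reward
        if r <= 0:
            return candidate
    return candidate  # numerical edge case: return the last candidate

rng = random.Random(0)
samples = [sample_proportional(REWARDS, rng) for _ in range(10_000)]
freq = {c: samples.count(c) / len(samples) for c in REWARDS}

# A max-reward search would return only "ring"; reward-proportional
# sampling instead yields every candidate, weighted by reward
# (expected frequencies here: 0.50, 0.30, 0.15, 0.05).
```

This is why a distribution of explanations can surface secondary patterns (such as the lipophilicity clustering reported above) that a single maximum-reward explanation would hide.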


Related research

- Towards Self-Explainable Graph Neural Network (08/26/2021)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks (06/03/2020)
- GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks (07/25/2021)
- Learning to Explain Graph Neural Networks (09/28/2022)
- How Faithful are Self-Explainable GNNs? (08/29/2023)
- Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic and Sub-Symbolic Methods (12/23/2022)
- Exploration of the Rashomon Set Assists Trustworthy Explanations for Medical Data (08/22/2023)
