Explaining GNN over Evolving Graphs using Information Flow

11/19/2021
by Yazheng Liu, et al.

Graphs are ubiquitous in many applications, such as social networks, knowledge graphs, and smart grids. Graph neural networks (GNNs) are the current state of the art for these applications, yet they remain opaque to humans. Explaining GNN predictions can add transparency. However, since many graphs are not static but continuously evolving, explaining changes in predictions between two graph snapshots is a different but equally important problem. Prior methods only explain static predictions or generate coarse or irrelevant explanations for dynamic predictions. We define the problem of explaining evolving GNN predictions and propose an axiomatic attribution method that uniquely decomposes the change in a prediction into contributions from paths on computation graphs. The attribution to many paths involving high-degree nodes is still not interpretable, while simply selecting the top important paths can be suboptimal for approximating the change. We formulate a novel convex optimization problem to optimally select the paths that explain the prediction evolution. Theoretically, we prove that the existing method based on Layer-wise Relevance Propagation (LRP) is a special case of the proposed algorithm in which the comparison is made against an empty graph. Empirically, on seven graph datasets, with a novel metric designed for evaluating explanations of prediction change, we demonstrate the superiority of the proposed approach over existing methods, including LRP, DeepLIFT, and other path selection methods.
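The abstract notes that simply selecting the top important paths can be suboptimal for approximating the prediction change: path contributions can have opposite signs, so the largest-magnitude paths may overshoot the total change. The toy sketch below (not the paper's convex algorithm; the contribution values and function names are hypothetical) contrasts magnitude-based top-k selection with an exhaustive search for the size-k subset whose summed contribution best matches the total change:

```python
import itertools

def top_k_paths(contribs, k):
    # Baseline: pick the k contributions with the largest magnitude.
    return sorted(range(len(contribs)),
                  key=lambda i: abs(contribs[i]), reverse=True)[:k]

def best_k_paths(contribs, k):
    # Exhaustive search (small n only): find the size-k subset whose
    # summed contribution best approximates the total prediction change.
    total = sum(contribs)
    best = min(itertools.combinations(range(len(contribs)), k),
               key=lambda s: abs(total - sum(contribs[i] for i in s)))
    return list(best)

# Hypothetical contributions attributed to computation-graph paths;
# paths 0 and 1 largely cancel each other out.
c = [0.9, -0.8, 0.85, 0.1, 0.05]   # total change ~ 1.1
print(top_k_paths(c, 2))   # [0, 2] -> sum 1.75, poor approximation
print(best_k_paths(c, 2))  # [0, 3] -> sum 1.0, much closer to 1.1
```

Because path 1 cancels most of path 0, the magnitude-based pick overestimates the change, while the subset chosen by approximation error stays close to it; the paper replaces this brute-force search with a convex optimization that scales to many paths.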

Related research

03/17/2023 — Distill n' Explain: explaining graph neural networks using simple surrogates
  Explaining node predictions in graph neural networks (GNNs) often boils ...

06/22/2021 — Towards Automated Evaluation of Explanations in Graph Neural Networks
  Explaining Graph Neural Networks predictions to end users of AI applicat...

06/16/2021 — SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods
  Explaining the foundations for predictions obtained from graph neural ne...

01/17/2020 — GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
  Graph structured data has wide applicability in various domains such as ...

10/20/2022 — Toward Multiple Specialty Learners for Explaining GNNs via Online Knowledge Distillation
  Graph Neural Networks (GNNs) have become increasingly ubiquitous in nume...

06/07/2022 — EiX-GNN: Concept-level eigencentrality explainer for graph neural networks
  Explaining is a human knowledge transfer process regarding a phenomenon ...

04/04/2022 — Explainable Online Lane Change Predictions on a Digital Twin with a Layer Normalized LSTM and Layer-wise Relevance Propagation
  Artificial Intelligence and Digital Twins play an integral role in drivi...
