Explaining GNN over Evolving Graphs using Information Flow

by Yazheng Liu et al.

Graphs are ubiquitous in many applications, such as social networks, knowledge graphs, and smart grids. Graph neural networks (GNN) are the current state of the art for these applications, yet their predictions remain opaque to humans. Explaining GNN predictions can add transparency. However, because many graphs are not static but continuously evolving, explaining the change in a prediction between two graph snapshots is a different, and equally important, problem. Prior methods only explain static predictions, or generate coarse or irrelevant explanations for dynamic predictions. We define the problem of explaining evolving GNN predictions and propose an axiomatic attribution method that uniquely decomposes the change in a prediction into contributions from paths on computation graphs. Attribution spread over many paths involving high-degree nodes is still not interpretable, while simply selecting the top important paths can be suboptimal in approximating the change. We formulate a novel convex optimization problem to optimally select the paths that explain the prediction evolution. Theoretically, we prove that the existing method based on Layer-wise Relevance Propagation (LRP) is a special case of the proposed algorithm when the comparison is against an empty graph. Empirically, on seven graph datasets, using a novel metric designed for evaluating explanations of prediction change, we demonstrate the superiority of the proposed approach over existing methods, including LRP, DeepLIFT, and other path selection methods.
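A toy sketch (not the paper's algorithm) of why greedy top-k path selection can be suboptimal: once the prediction change decomposes into a sum of per-path attributions, attributions with opposite signs can cancel, so the paths with the largest magnitudes need not best approximate the total change. The attribution values below are hypothetical, and brute-force subset search stands in for the paper's convex program.

```python
from itertools import combinations

# Hypothetical per-path attributions; the prediction change is their sum.
attributions = [0.9, -0.8, 0.25, 0.2]
delta = sum(attributions)  # total prediction change
k = 2

# Greedy: keep the k paths with the largest |attribution|.
top_k = sorted(attributions, key=abs, reverse=True)[:k]
greedy_err = abs(delta - sum(top_k))

# Optimal size-k subset (brute force in place of the convex optimization).
best_err = min(abs(delta - sum(s)) for s in combinations(attributions, k))

# The two large attributions nearly cancel, so greedy explains the
# change poorly, while the two small same-sign paths approximate it well.
print(greedy_err, best_err)
```

Here the greedy choice picks 0.9 and -0.8, whose sum (0.1) is far from the true change (0.55), whereas the optimal subset {0.25, 0.2} is much closer.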

