In-Process Global Interpretation for Graph Learning via Distribution Matching

by Yi Nian, et al.

Graph neural networks (GNNs) have emerged as powerful graph learning models due to their superior capacity for capturing critical graph patterns. To gain insight into the model mechanism for interpretable graph learning, previous efforts have focused on post-hoc local interpretation, extracting the data pattern that a pre-trained GNN model uses to make an individual prediction. However, recent work shows that post-hoc methods are highly sensitive to model initialization, and local interpretation can only explain the model's prediction for a particular instance. In this work, we address these limitations by answering an important question that has not yet been studied: how can we provide a global interpretation of the model training procedure? We formulate this problem as in-process global interpretation, which aims to distill high-level, human-intelligible patterns that dominate the training procedure of GNNs. We further propose Graph Distribution Matching (GDM), which synthesizes interpretive graphs by matching the distributions of the original and interpretive graphs in the feature space of the GNN as its training proceeds. These few interpretive graphs demonstrate the most informative patterns the model captures during training. Extensive experiments on graph classification datasets demonstrate multiple advantages of the proposed method, including high explanation accuracy, time efficiency, and the ability to reveal class-relevant structure.
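To make the distribution-matching idea concrete, here is a minimal sketch, not the authors' implementation: at a training step, embeddings of real graphs and synthetic interpretive graphs (as produced by a GNN encoder, which is abstracted away here) are compared via the squared distance between their mean embeddings, the simplest form of moment matching. The function names are hypothetical.

```python
# Hedged sketch of distribution matching in embedding space (assumption:
# a GNN encoder has already mapped each graph to a fixed-length vector).

def mean_embedding(embeddings):
    """Average a list of equal-length embedding vectors."""
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]

def distribution_matching_loss(real_embs, synth_embs):
    """Squared L2 distance between the mean embeddings of the two sets.

    Minimizing this over the synthetic graphs pulls their embedding
    distribution toward that of the real graphs (first-moment matching).
    """
    mu_real = mean_embedding(real_embs)
    mu_synth = mean_embedding(synth_embs)
    return sum((a - b) ** 2 for a, b in zip(mu_real, mu_synth))

# Toy usage with two 3-d embeddings per side; both sets happen to share
# the mean [2.0, 0.0, 1.0], so the loss is zero.
real = [[1.0, 0.0, 2.0], [3.0, 0.0, 0.0]]
synth = [[2.0, 1.0, 1.0], [2.0, -1.0, 1.0]]
print(distribution_matching_loss(real, synth))  # 0.0
```

In practice such a loss would be computed per class at each GNN training step and differentiated through the encoder to update the synthetic graphs; the sketch only shows the matching objective itself.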




