Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism

01/31/2022
by Siqi Miao, et al.

Interpretable graph learning is in demand, as many scientific applications rely on learning models to extract insights from graph-structured data. Previous work mostly focused on post-hoc approaches that interpret a pre-trained model (graph neural network models in particular). Such work argues against inherently interpretable models because good interpretability in these models often comes at the cost of prediction accuracy. Moreover, the widely used attention mechanism for inherent interpretation often fails to provide faithful interpretations in graph learning tasks. In this work, we address both issues by proposing Graph Stochastic Attention (GSAT), an attention mechanism derived from the information bottleneck principle. GSAT leverages stochastic attention to block information from task-irrelevant graph components while learning stochasticity-reduced attention to select task-relevant subgraphs for interpretation. GSAT can also be applied to fine-tune and interpret pre-trained models via its stochastic attention mechanism. Extensive experiments on eight datasets show that GSAT outperforms state-of-the-art methods by up to 20%.
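The core mechanism can be illustrated with a minimal NumPy sketch: edge-attention values are drawn stochastically from a relaxed Bernoulli distribution, and an information-bottleneck-style KL regularizer pushes the attention toward an uninformative prior, so that only task-relevant edges keep high attention. All function names, logits, and the prior value `r` below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def concrete_bernoulli(logits, temperature=0.7, rng=None):
    """Differentiable relaxation of Bernoulli sampling (Gumbel-sigmoid).
    Draws stochastic edge-attention values in (0, 1) from edge logits."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(logits))
    noise = np.log(u) - np.log(1 - u)  # logistic noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))

def kl_to_bernoulli_prior(p, r=0.7, eps=1e-8):
    """KL( Bern(p) || Bern(r) ): the information-bottleneck-style
    regularizer that penalizes attention deviating from the prior r."""
    p = np.clip(p, eps, 1 - eps)
    return p * np.log(p / r) + (1 - p) * np.log((1 - p) / (1 - r))

# Toy edge logits, as if produced by a GNN edge scorer (hypothetical values).
logits = np.array([3.0, -3.0, 0.5])
att = concrete_bernoulli(logits, rng=np.random.default_rng(0))
reg = kl_to_bernoulli_prior(1.0 / (1.0 + np.exp(-logits))).mean()
```

In training, the task loss would be computed on a graph whose edges are weighted by `att`, while `reg` is added to the objective; at test time, the stochasticity-reduced attention (the sigmoid of the logits) ranks edges for interpretation.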


Related research

- Towards Interpretable Reinforcement Learning Using Attention Augmented Agents (06/06/2019): Inspired by recent work in attention models for image captioning and que...
- Uncertainty-Aware Attention for Reliable Interpretation and Prediction (05/24/2018): Attention mechanism is effective in both focusing the deep learning mode...
- In-Process Global Interpretation for Graph Learning via Distribution Matching (06/18/2023): Graph neural networks (GNNs) have emerged as a powerful graph learning ...
- Interpretable Geometric Deep Learning via Learnable Randomness Injection (10/30/2022): Point cloud data is ubiquitous in scientific fields. Recently, geometric...
- Graph-Bert: Only Attention is Needed for Learning Graph Representations (01/15/2020): The dominant graph neural networks (GNNs) over-rely on the graph links, ...
- CoulGAT: An Experiment on Interpretability of Graph Attention Networks (12/18/2019): We present an attention mechanism inspired from definition of screened C...
- Attention Models in Graphs: A Survey (07/20/2018): Graph-structured data arise naturally in many different application doma...
