Graph Neural Networks Including Sparse Interpretability

06/30/2020
by Chris Lin, et al.

Graph Neural Networks (GNNs) are versatile, powerful machine learning methods that enable graph structure and feature representation learning, with applications across many domains. For applications that critically require interpretation, attention-based GNNs have been leveraged. However, these approaches either rely on specific model architectures or lack a joint consideration of graph structure and node features in their interpretation. Here we present a model-agnostic framework for interpreting important graph structure and node features: Graph neural networks Including SparSe inTerpretability (GISST). With any GNN model, GISST combines an attention mechanism and sparsity regularization to yield an important subgraph and node feature subset related to any graph-based task. Through a single self-attention layer, a GISST model learns an importance probability for each node feature and each edge in the input graph. By including these importance probabilities in the model loss function, the probabilities are optimized end-to-end and tied to task-specific performance. Furthermore, GISST sparsifies these importance probabilities with entropy and L1 regularization to reduce noise in the input graph topology and node features. Our GISST models achieve superior node feature and edge explanation precision on synthetic datasets compared to alternative interpretation approaches, and identify important graph structure in real-world datasets. We show in theory that edge features and multiple edge types can be accounted for by incorporating them into the GISST edge probability computation. By jointly accounting for topology, node features, and edge features, GISST inherently provides simple and relevant interpretations for any GNN model and task.
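The core mechanism described above — sigmoid-squashed importance probabilities for features and edges, sparsified with an entropy term (pushing each probability toward 0 or 1) and an L1 term (pushing probabilities toward 0) — can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation; the function names, the masking step, and the regularization weights `lam_ent` and `lam_l1` are assumptions for exposition:

```python
import numpy as np

def importance_probs(logits):
    # Sigmoid maps unconstrained attention logits to (0, 1) importance probabilities.
    return 1.0 / (1.0 + np.exp(-logits))

def sparsity_penalty(p, lam_ent=0.1, lam_l1=0.1, eps=1e-12):
    # Binary entropy term: low when each probability is near 0 or 1,
    # i.e. when the model is decisive about including or excluding an element.
    entropy = -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))
    # L1 term: penalizes total importance mass, encouraging few selected elements.
    return lam_ent * entropy.mean() + lam_l1 * p.mean()

def mask_inputs(X, A, feat_logits, edge_logits):
    # Hypothetical masking step: scale node features and adjacency entries by
    # their learned importance probabilities before they enter the GNN layers.
    p_feat = importance_probs(feat_logits)   # one probability per node feature
    p_edge = importance_probs(edge_logits)   # one probability per edge
    return X * p_feat, A * p_edge, p_feat, p_edge
```

In an end-to-end setup, `sparsity_penalty` would be added to the task loss (e.g. node-classification cross-entropy) so that gradients drive the importance logits toward a sparse, decisive explanation while preserving predictive performance.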


