Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis

08/22/2022
by Han Xuanyuan, et al.

Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks, but they lack interpretability and transparency. Current explainability approaches are typically local and treat GNNs as black boxes: they do not look inside the model, which inhibits human trust in both the model and its explanations. Motivated by the ability of neurons in vision models to detect high-level semantic concepts, we perform a novel analysis of the behaviour of individual GNN neurons to answer questions about GNN interpretability, and propose new metrics for evaluating the interpretability of GNN neurons. We propose a novel approach for producing global explanations for GNNs using neuron-level concepts, giving practitioners a high-level view of the model. Specifically, (i) to the best of our knowledge, this is the first work to show that GNN neurons act as concept detectors and align strongly with concepts formulated as logical compositions of node degree and neighbourhood properties; (ii) we quantitatively assess the importance of the detected concepts and identify a trade-off between training duration and neuron-level interpretability; (iii) we demonstrate that our global explainability approach has advantages over the current state of the art: we can disentangle the explanation into individual interpretable concepts backed by logical descriptions, which reduces the potential for bias and improves user-friendliness.
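The core idea, probing whether individual neurons align with structural concepts, can be illustrated with a small sketch. The code below assumes an untrained one-layer GCN-style encoder built with numpy and networkx (a stand-in for illustration, not the paper's trained models), and scores each neuron's binarised activations against a simple "degree >= 3" concept mask with an IoU-style alignment measure; the paper's actual metrics, concept vocabulary, and logical composition procedure are more elaborate.

```python
# Minimal sketch of neuron-level concept probing on a graph.
# Assumptions (not from the paper): an untrained one-layer GCN-style
# encoder, a single degree-based concept, and an IoU alignment score.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Toy graph and random node features.
G = nx.barabasi_albert_graph(n=200, m=2, seed=0)
A = nx.to_numpy_array(G)                     # adjacency matrix
A_hat = A + np.eye(len(G))                   # add self-loops
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
P = D_inv_sqrt @ A_hat @ D_inv_sqrt          # symmetric normalisation

X = rng.normal(size=(len(G), 16))            # node features
W = rng.normal(size=(16, 8))                 # untrained weights (stand-in)
H = np.maximum(P @ X @ W, 0.0)               # one GCN-style layer + ReLU

# Concept mask: nodes satisfying the logical concept "degree >= 3",
# one simple example of a degree/neighbourhood property.
concept = np.asarray([d >= 3 for _, d in G.degree()])

def iou_alignment(act: np.ndarray, mask: np.ndarray, q: float = 0.8) -> float:
    """IoU between a neuron's top-activating nodes and a concept mask."""
    hot = act > np.quantile(act, q)          # binarise activations
    inter = np.logical_and(hot, mask).sum()
    union = np.logical_or(hot, mask).sum()
    return inter / union if union else 0.0

# Score every neuron in the layer against the concept; a high score
# marks that neuron as a candidate detector for the concept.
scores = [iou_alignment(H[:, j], concept) for j in range(H.shape[1])]
best = int(np.argmax(scores))
print(f"best-aligned neuron: {best}, IoU = {scores[best]:.3f}")
```

In practice one would run this probe over a trained model's layers and a vocabulary of candidate concepts, keeping the highest-scoring concept per neuron as its explanation.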

Related research

07/15/2021 · Algorithmic Concept-based Explainable Reasoning
07/25/2021 · GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks
11/13/2022 · Generalization Beyond Feature Alignment: Concept Activation-Guided Contrastive Learning
10/13/2022 · Global Explainability of GNNs via Logic Combination of Learned Concepts
09/07/2023 · Automatic Concept Embedding Model (ACEM): No train-time concepts, No issue!
05/31/2022 · GlanceNets: Interpretabile, Leak-proof Concept-based Models
02/09/2023 · GCI: A (G)raph (C)oncept (I)nterpretation Framework
