Explainability Techniques for Graph Convolutional Networks

05/31/2019
by Federico Baldassarre, et al.

Graph Networks are used to make decisions in potentially complex scenarios, but it is usually not obvious how or why they make them. In this work, we study the explainability of Graph Network decisions using two main classes of techniques, gradient-based and decomposition-based, on a toy dataset and a chemistry task. Our study sets the ground for future development as well as application to real-world problems.
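To make the gradient-based class concrete, node-level saliency scores the importance of each input node by the magnitude of the output's gradient with respect to that node's features. The sketch below is illustrative, not the paper's implementation: it assumes a minimal one-layer graph convolution with symmetrically normalised adjacency, arbitrary toy weights and graph, and approximates the gradient with central finite differences instead of automatic differentiation.

```python
import numpy as np

def gcn_forward(A, X, W):
    # One-layer graph convolution with self-loops and symmetric normalisation:
    # relu(D^-1/2 (A+I) D^-1/2 X W), reduced to a scalar graph-level score.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    H = np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)
    return H.sum()

def saliency(A, X, W, eps=1e-5):
    # Gradient-based attribution: |df/dX| per feature, via central
    # finite differences (a stand-in for autodiff in this sketch).
    grads = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            Xp, Xm = X.copy(), X.copy()
            Xp[i, j] += eps
            Xm[i, j] -= eps
            grads[i, j] = (gcn_forward(A, Xp, W) - gcn_forward(A, Xm, W)) / (2 * eps)
    # One importance score per node: sum of absolute gradients over features.
    return np.abs(grads).sum(axis=1)

# Hypothetical toy graph: node 0 linked to nodes 1 and 2.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.array([[0.1, 0.0],
              [0.0, 0.2],
              [1.0, 0.0]])
W = np.ones((2, 1))

scores = saliency(A, X, W)
print(scores)
```

Because the toy model is linear up to the relu, the finite-difference scores match the exact gradient here; with a trained multi-layer network one would instead backpropagate through the model to get the same quantity.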

Related research

11/01/2021
Edge-Level Explanations for Graph Neural Networks by Extending Explainability Methods for Convolutional Neural Networks
Graph Neural Networks (GNNs) are deep learning models that take graph da...

07/23/2021
Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks
With the rise of deep neural networks, the challenge of explaining the p...

09/01/2023
Technical Companion to Example-Based Procedural Modeling Using Graph Grammars
This is a companion piece to my paper on "Example-Based Procedural Model...

11/03/2022
Exploring Explainability Methods for Graph Neural Networks
With the growing use of deep learning methods, particularly graph neural...

05/08/2020
Geometric graphs from data to aid classification tasks with graph convolutional networks
Classification is a classic problem in data analytics and has been appro...

09/25/2020
A Diagnostic Study of Explainability Techniques for Text Classification
Recent developments in machine learning have introduced models that appr...

02/10/2020
Explainable Deep RDFS Reasoner
Recent research efforts aiming to bridge the Neural-Symbolic gap for RDF...
