Generalizing Backpropagation for Gradient-Based Interpretability

07/06/2023
by Kevin Du et al.

Many popular feature-attribution methods for interpreting deep neural networks rely on computing the gradients of a model's output with respect to its inputs. While these methods can indicate which input features may be important for the model's prediction, they reveal little about the inner workings of the model itself. In this paper, we observe that the gradient computation of a model is a special case of a more general formulation using semirings. This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics about the gradient graph of a neural network, such as the highest-weighted path and entropy. We implement this generalized algorithm, evaluate it on synthetic datasets to better understand the statistics it computes, and apply it to study BERT's behavior on the subject-verb number agreement task (SVA). With this method, we (a) validate that the amount of gradient flow through a component of a model reflects its importance to a prediction and (b) for SVA, identify which pathways of the self-attention mechanism are most important.
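The key observation above — that ordinary backpropagation is the sum-product instance of a more general semiring computation over the gradient graph — can be illustrated with a minimal sketch. The toy graph, edge weights, and `backprop` function below are illustrative assumptions, not the authors' implementation: edge weights stand in for local partial derivatives, the sum-product semiring recovers the ordinary gradient (sum over paths of the product of edge weights), and swapping in max-product yields the weight of the highest-weighted path instead.

```python
# Hedged sketch: backprop generalized over semirings on a tiny gradient graph.
# The graph, weights, and node order below are illustrative assumptions.
from collections import defaultdict

# Toy gradient DAG: an edge (u, v, w) means the local derivative dv/du = w.
edges = [
    ("x", "h1", 2.0),
    ("x", "h2", 0.5),
    ("h1", "y", 3.0),
    ("h2", "y", 4.0),
]

def backprop(edges, source, sink, plus, times, zero, one):
    """Accumulate path values from source to sink under a given semiring,
    defined by its addition (plus), multiplication (times), and identities."""
    children = defaultdict(list)
    for u, v, w in edges:
        children[u].append((v, w))
    # Topological order, hardcoded for this tiny example graph.
    order = ["x", "h1", "h2", "y"]
    val = {n: zero for n in order}
    val[source] = one
    for u in order:
        for v, w in children[u]:
            # Semiring relaxation: combine the path value into u with edge w.
            val[v] = plus(val[v], times(val[u], w))
    return val[sink]

# Sum-product semiring: the ordinary gradient, 2*3 + 0.5*4 = 8.0.
grad = backprop(edges, "x", "y",
                plus=lambda a, b: a + b, times=lambda a, b: a * b,
                zero=0.0, one=1.0)

# Max-product semiring: the highest-weighted path, max(2*3, 0.5*4) = 6.0.
top = backprop(edges, "x", "y",
               plus=max, times=lambda a, b: a * b,
               zero=0.0, one=1.0)
```

Because only the `plus`/`times` pair changes between the two calls, the same dynamic program runs in linear time in the number of edges for either statistic, which is the efficiency argument behind generalizing backpropagation rather than enumerating paths.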


Related research:
07/24/2023

Feature Gradient Flow for Interpreting Deep Neural Networks in Head and Neck Cancer Prediction

This paper introduces feature gradient flow, a new technique for interpr...
04/23/2020

Self-Attention Attribution: Interpreting Information Interactions Inside Transformer

The great success of Transformer-based models benefits from the powerful...
06/30/2020

Graph Neural Networks Including Sparse Interpretability

Graph Neural Networks (GNNs) are versatile, powerful machine learning me...
11/08/2016

Gradients of Counterfactuals

Gradients have been used to quantify feature importance in machine learn...
12/01/2020

Rethinking Positive Aggregation and Propagation of Gradients in Gradient-based Saliency Methods

Saliency methods interpret the prediction of a neural network by showing...
11/10/2020

DoLFIn: Distributions over Latent Features for Interpretability

Interpreting the inner workings of neural models is a key step in ensuri...
04/10/2021

Meta-Learning Bidirectional Update Rules

In this paper, we introduce a new type of generalized neural network whe...
