Compositional Explanations of Neurons

06/24/2020
by Jesse Mu et al.

We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts that closely approximate neuron behavior. Compared to prior work that uses atomic labels as explanations, analyzing neurons compositionally allows us to more precisely and expressively characterize their behavior. We use this procedure to answer several questions on interpretability in models for vision and natural language processing. First, we examine the kinds of abstractions learned by neurons. In image classification, we find that many neurons learn highly abstract but semantically coherent visual concepts, while other polysemantic neurons detect multiple unrelated features; in natural language inference (NLI), neurons learn shallow lexical heuristics from dataset biases. Second, we see whether compositional explanations give us insight into model performance: vision neurons that detect human-interpretable concepts are positively correlated with task performance, while NLI neurons that fire for shallow heuristics are negatively correlated with task performance. Finally, we show how compositional explanations provide an accessible way for end users to produce simple "copy-paste" adversarial examples that change model behavior in predictable ways.
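To make the procedure concrete: the paper summarizes a neuron's behavior as a binary mask over probe inputs (True wherever its activation exceeds a threshold) and searches for logical formulas over atomic concept masks, built from AND, OR, and NOT, that maximize intersection-over-union (IoU) with the neuron mask. The Python sketch below is a minimal illustration of such a beam search, not the authors' released implementation; the function name beam_search_explanation, the toy concepts, and the default parameters are invented for the example.

import numpy as np
from itertools import product

def iou(a, b):
    """Intersection-over-union between two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def beam_search_explanation(neuron_mask, concept_masks, max_length=3, beam_size=5):
    """Search for the logical formula over concepts whose mask best
    matches (by IoU) the neuron's thresholded activation mask."""
    # Length-1 formulas: the atomic concepts themselves.
    beam = [(name, mask, iou(neuron_mask, mask))
            for name, mask in concept_masks.items()]
    beam = sorted(beam, key=lambda t: -t[2])[:beam_size]

    for _ in range(max_length - 1):
        # Keep shorter formulas in the candidate pool; Python's stable
        # sort then breaks IoU ties in favor of brevity.
        candidates = list(beam)
        for (form, fmask, _), (cname, cmask) in product(beam, concept_masks.items()):
            for op, new_mask in [("AND", np.logical_and(fmask, cmask)),
                                 ("OR", np.logical_or(fmask, cmask)),
                                 ("AND NOT", np.logical_and(fmask, ~cmask))]:
                new_form = f"({form} {op} {cname})"
                candidates.append((new_form, new_mask, iou(neuron_mask, new_mask)))
        beam = sorted(candidates, key=lambda t: -t[2])[:beam_size]

    return beam[0]

# Toy demo: a synthetic neuron that fires exactly on "water OR river".
rng = np.random.default_rng(0)
concepts = {c: rng.random(1000) < 0.3 for c in ["water", "blue", "river", "boat"]}
neuron = np.logical_or(concepts["water"], concepts["river"])
form, _, score = beam_search_explanation(neuron, concepts)
print(form, round(float(score), 3))  # expect "(water OR river)" with IoU 1.0

On this toy data the search recovers the composed formula (water OR river) at IoU 1.0, while the best atomic label alone scores noticeably lower, which illustrates the abstract's claim that compositional explanations characterize neuron behavior more precisely than atomic labels.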


