Rigorously Assessing Natural Language Explanations of Neurons

09/19/2023
by Jing Huang, et al.

Natural language is an appealing medium for explaining how large language models process and store information, but evaluating the faithfulness of such explanations is challenging. To help address this, we develop two modes of evaluation for natural language explanations that claim individual neurons represent a concept in a text input. In the observational mode, we evaluate claims that a neuron a activates on all and only input strings that refer to a concept picked out by the proposed explanation E. In the intervention mode, we construe E as a claim that the neuron a is a causal mediator of the concept denoted by E. We apply our framework to the GPT-4-generated explanations of GPT-2 XL neurons of Bills et al. (2023) and show that even the most confident explanations have high error rates and little to no causal efficacy. We close the paper by critically assessing whether natural language is a good choice for explanations and whether neurons are the best level of analysis.
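As a rough illustration of the observational mode described above, the sketch below scores an explanation by how well a binary "neuron fires" predicate lines up with concept membership over a set of test strings. The inputs (per-string activations, concept labels) and the activation threshold are hypothetical placeholders for this illustration, not the actual procedure of the paper or of Bills et al. (2023).

```python
# Minimal sketch of the observational mode: treat an explanation E as the claim
# that a neuron activates on all and only strings that instantiate the concept E
# picks out. Helper inputs and the threshold here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ObservationalScores:
    precision: float  # fraction of activating strings that instantiate the concept
    recall: float     # fraction of concept strings on which the neuron activates

def evaluate_observational(activations, is_concept, threshold=0.5):
    """Score an explanation against paired (activation, concept-label) data.

    activations: max neuron activation per test string
    is_concept:  bool per test string, whether it instantiates the explained concept
    """
    fires = [a > threshold for a in activations]
    tp = sum(f and c for f, c in zip(fires, is_concept))
    fp = sum(f and not c for f, c in zip(fires, is_concept))
    fn = sum((not f) and c for f, c in zip(fires, is_concept))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return ObservationalScores(precision, recall)
```

The intervention mode would instead fix the neuron's activation on chosen inputs and test whether the model's behavior with respect to the concept changes, which requires access to the model's forward pass and is not sketched here.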

