Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small

11/01/2022
by Kevin Wang, et al.

Research in mechanistic interpretability seeks to explain behaviors of machine learning models in terms of their internal components. However, most previous work either focuses on simple behaviors in small models, or describes complicated behaviors in larger models with broad strokes. In this work, we bridge this gap by presenting an explanation for how GPT-2 small performs a natural language task called indirect object identification (IOI). Our explanation encompasses 26 attention heads grouped into 7 main classes, which we discovered using a combination of interpretability approaches relying on causal interventions. To our knowledge, this investigation is the largest end-to-end attempt at reverse-engineering a natural behavior "in the wild" in a language model. We evaluate the reliability of our explanation using three quantitative criteria: faithfulness, completeness, and minimality. Though these criteria support our explanation, they also point to remaining gaps in our understanding. Our work provides evidence that a mechanistic understanding of large ML models is feasible, opening opportunities to scale our understanding to both larger models and more complex tasks.
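To make the IOI task concrete, the sketch below (not the authors' codebase) loads the public "gpt2" (GPT-2 small) checkpoint via Hugging Face transformers and measures the logit difference between the indirect object and the repeated subject, a simple metric of the kind the abstract refers to when evaluating the behavior. The prompt and token choices here are illustrative assumptions.

```python
# Minimal sketch of the IOI task on GPT-2 small, assuming the
# Hugging Face transformers library. Not the authors' implementation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# An IOI prompt: the correct continuation is the indirect object
# ("Mary"), not the repeated subject ("John").
prompt = "When Mary and John went to the store, John gave a drink to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits

io_id = tokenizer.encode(" Mary")[0]  # indirect object token
s_id = tokenizer.encode(" John")[0]   # subject token

# Positive logit difference means the model prefers the indirect
# object, i.e. it performs the IOI task correctly on this prompt.
logit_diff = (logits[io_id] - logits[s_id]).item()
print(f"logit(' Mary') - logit(' John') = {logit_diff:.3f}")
```

The paper's causal interventions go further than this, patching and ablating individual attention heads to isolate the circuit; the logit difference above is only the behavioral yardstick such interventions are measured against.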
