Causal Abstractions of Neural Networks

06/06/2021
by Atticus Geiger, et al.

Structural analysis methods (e.g., probing and feature attribution) are increasingly important tools for neural network analysis. We propose a new structural analysis method grounded in a formal theory of causal abstraction that provides rich characterizations of model-internal representations and their roles in input/output behavior. In this method, neural representations are aligned with variables in interpretable causal models, and then interchange interventions are used to experimentally verify that the neural representations have the causal properties of their aligned variables. We apply this method in a case study to analyze neural models trained on the Multiply Quantified Natural Language Inference (MQNLI) corpus, a highly complex NLI dataset that was constructed with a tree-structured natural logic causal model. We discover that a BERT-based model with state-of-the-art performance successfully realizes the approximate causal structure of the natural logic causal model, whereas a simpler baseline model fails to show any such structure, demonstrating that neural representations encode the compositional structure of MQNLI examples.
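The core experimental move in this method is the interchange intervention: run the network on a "source" input, record an internal representation, then re-run the network on a "base" input while overwriting that representation with the recorded one, and check whether the resulting behavior matches what the aligned causal model predicts under the analogous variable swap. Below is a minimal sketch of that operation using PyTorch forward hooks; the toy model, the choice of layer, and all names are illustrative assumptions, not the paper's actual MQNLI setup.

```python
# Minimal sketch of an interchange intervention (illustrative, not the
# paper's code). A forward hook captures a hidden representation on a
# source input, and a second hook patches it into a base-input run.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the neural model under analysis.
model = nn.Sequential(
    nn.Linear(4, 8),   # layer whose output we align with a causal variable
    nn.ReLU(),
    nn.Linear(8, 2),
)

base_input = torch.randn(1, 4)
source_input = torch.randn(1, 4)

# Step 1: run the source input and record the hidden representation.
captured = {}
def capture_hook(module, inputs, output):
    captured["h"] = output.detach()

handle = model[0].register_forward_hook(capture_hook)
_ = model(source_input)
handle.remove()

# Step 2: re-run the base input, overwriting the hidden representation
# with the one captured from the source input (returning a value from a
# forward hook replaces the module's output).
def patch_hook(module, inputs, output):
    return captured["h"]

handle = model[0].register_forward_hook(patch_hook)
intervened_output = model(base_input)
handle.remove()

# If the representation causally encodes its aligned variable, the
# intervened output should match the causal model's prediction under
# the corresponding variable swap.
print(model(base_input))   # unintervened behavior
print(intervened_output)   # behavior under the interchange intervention
```

Verifying an alignment then amounts to running many such interventions and checking that the network's counterfactual behavior agrees with the causal model's across input pairs.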


Related research

- Inducing Causal Structure for Interpretable Neural Networks (12/01/2021). In many areas, we have well-founded insights about causal structure that...
- Discovering the Compositional Structure of Vector Representations with Role Learning Networks (10/21/2019). Neural networks (NNs) are able to perform tasks that rely on composition...
- Neural Network Attributions: A Causal Perspective (02/06/2019). We propose a new attribution method for neural networks developed using ...
- Compositional Abstraction Error and a Category of Causal Models (03/29/2021). Interventional causal models describe joint distributions over some vari...
- Interpretability at Scale: Identifying Causal Mechanisms in Alpaca (05/15/2023). Obtaining human-interpretable explanations of large, general-purpose lan...
- Causal Analysis for Robust Interpretability of Neural Networks (05/15/2023). Interpreting the inner function of neural networks is crucial for the tr...
- Analogs of Linguistic Structure in Deep Representations (07/25/2017). We investigate the compositional structure of message vectors computed b...
