CausaLM: Causal Model Explanation Through Counterfactual Language Models

05/27/2020
by   Amir Feder, et al.

Understanding predictions made by deep neural networks is notoriously difficult, but also crucial to their dissemination. Like all ML-based methods, they are only as good as their training data, and can also capture unwanted biases. While there are tools that can help understand whether such biases exist, they do not distinguish between correlation and causation, and might be ill-suited for text-based models and for reasoning about high-level language concepts. A key problem in estimating the causal effect of a concept of interest on a given model is that this estimation requires the generation of counterfactual examples, which is challenging with existing generation technology. To bridge that gap, we propose CausaLM, a framework for producing causal model explanations using counterfactual language representation models. Our approach is based on fine-tuning deep contextualized embedding models with auxiliary adversarial tasks derived from the causal graph of the problem. Concretely, we show that by carefully choosing auxiliary adversarial pre-training tasks, language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest, and be used to estimate its true causal effect on model performance. A byproduct of our method is a language representation model that is unaffected by the tested concept, which can be useful in mitigating unwanted bias ingrained in the data.
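To make the idea concrete, below is a minimal PyTorch sketch of the core mechanism the abstract describes: an encoder (e.g. BERT) trained with a main task head plus an adversarial concept head behind a gradient-reversal layer, so the resulting "counterfactual" representation no longer encodes the treated concept; the causal effect is then estimated by comparing predictions over the original and concept-removed representations. This is not the authors' released implementation; the class names, the `GradReverse` helper, and the effect estimator are illustrative assumptions.

```python
# Illustrative sketch of adversarial concept removal for counterfactual
# language representations (not the authors' released code).
import torch
import torch.nn as nn
from transformers import AutoModel


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class CounterfactualLM(nn.Module):
    """Encoder with a main task head and an adversarial concept head.

    The concept head is trained through gradient reversal, pushing the
    encoder toward representations that no longer encode the treated
    concept (e.g. a topic, adjectives, or a demographic attribute).
    """

    def __init__(self, model_name="bert-base-uncased", num_labels=2,
                 num_concept_labels=2, lambd=1.0):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.task_head = nn.Linear(hidden, num_labels)
        self.concept_head = nn.Linear(hidden, num_concept_labels)
        self.lambd = lambd

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        task_logits = self.task_head(cls)
        # Adversarial branch: gradients are reversed before reaching the encoder.
        concept_logits = self.concept_head(GradReverse.apply(cls, self.lambd))
        return task_logits, concept_logits


@torch.no_grad()
def estimated_concept_effect(factual_model, counterfactual_model, batch):
    """Average gap in predicted class probabilities between the original
    representation and the concept-removed (counterfactual) one."""
    p_factual = factual_model(**batch)[0].softmax(-1)
    p_counterfactual = counterfactual_model(**batch)[0].softmax(-1)
    return (p_factual - p_counterfactual).abs().mean(dim=0)
```

In the paper, the counterfactual encoder is produced during an additional adversarial pre-training stage (alongside masked language modeling) rather than via a single downstream head, and the causal effect is estimated by comparing a downstream classifier's predictions over the two representations; the sketch above compresses both steps into one model for brevity.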


Related research

What if This Modified That? Syntactic Interventions via Counterfactual Embeddings (05/28/2021)
Neural language models exhibit impressive performance on a variety of ta...

Counterfactual Adversarial Learning with Representation Interpolation (09/10/2021)
Deep learning models exhibit a preference for statistical fitting over l...

CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior (05/27/2022)
The increasing size and complexity of modern ML systems has improved the...

A Geometric Notion of Causal Probing (07/27/2023)
Large language models rely on real-valued representations of text to mak...

Explaining Classifiers with Causal Concept Effect (CaCE) (07/16/2019)
How can we understand classification decisions made by deep neural nets?...

Pulling Up by the Causal Bootstraps: Causal Data Augmentation for Pre-training Debiasing (08/27/2021)
Machine learning models achieve state-of-the-art performance on many sup...

Debiasing Stance Detection Models with Counterfactual Reasoning and Adversarial Bias Learning (12/20/2022)
Stance detection models may tend to rely on dataset bias in the text par...
