Competence-Based Analysis of Language Models

03/01/2023
by Adam Davies, et al.

Despite the recent success of large pretrained language models (LMs) on a variety of prompting tasks, these models can be alarmingly brittle to small changes in inputs or application contexts. To better understand such behavior and motivate the design of more robust LMs, we propose a general experimental framework, CALM (Competence-based Analysis of Language Models), where targeted causal interventions are utilized to damage an LM's internal representation of various linguistic properties in order to evaluate its use of each representation in performing a given task. We implement these interventions as gradient-based adversarial attacks, which (in contrast to prior causal probing methodologies) are able to target arbitrarily-encoded representations of relational properties, and carry out a case study of this approach to analyze how BERT-like LMs use representations of several relational properties in performing associated relation prompting tasks. We find that, while the representations LMs leverage in performing each task are highly entangled, they may be meaningfully interpreted in terms of the tasks where they are most utilized; and more broadly, that CALM enables an expanded scope of inquiry in LM analysis that may be useful in predicting and explaining weaknesses of existing LMs.
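To make the idea of a gradient-based intervention concrete, here is a minimal toy sketch of the general technique the abstract describes: a linear probe is trained to decode a "linguistic property" from synthetic hidden states, and an FGSM-style gradient-ascent perturbation then damages that representation. Everything here (the linear setup, shapes, step sizes) is an illustrative assumption, not the paper's actual implementation, which operates on real BERT-like hidden states and measures downstream prompting behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 32, 4, 256  # hidden size, property classes, examples (toy values)

# Synthetic "hidden states": each property class has a linear direction,
# states are that direction plus small noise.
labels = rng.integers(0, k, n)
directions = rng.normal(size=(k, d))
states = directions[labels] + 0.1 * rng.normal(size=(n, d))
onehot = np.eye(k)[labels]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train a linear probe on cross-entropy with plain gradient descent.
W, b = np.zeros((k, d)), np.zeros(k)
for _ in range(300):
    p = softmax(states @ W.T + b)
    g = (p - onehot) / n          # dLoss/dlogits
    W -= 0.5 * (g.T @ states)
    b -= 0.5 * g.sum(axis=0)

def accuracy(h):
    return float(((h @ W.T + b).argmax(axis=1) == labels).mean())

def attack(h, steps=20, step_size=0.1, eps=1.0):
    """Gradient-ascent intervention: perturb the hidden states to
    maximize the probe's loss, i.e. damage the encoded property."""
    delta = np.zeros_like(h)
    for _ in range(steps):
        p = softmax((h + delta) @ W.T + b)
        grad_h = (p - onehot) @ W           # dLoss/dh, per example
        delta += step_size * np.sign(grad_h)  # FGSM-style signed step
        delta = np.clip(delta, -eps, eps)     # bounded perturbation
    return h + delta

acc_before = accuracy(states)
acc_after = accuracy(attack(states))
print(acc_before, acc_after)
```

In the full framework, the analogous question is whether damaging a given property's representation degrades performance on a downstream relation prompting task, which would indicate the LM actually uses that representation for the task.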


Related research

05/28/2021
What if This Modified That? Syntactic Interventions via Counterfactual Embeddings
Neural language models exhibit impressive performance on a variety of ta...

09/01/2023
Why do universal adversarial attacks work on large language models?: Geometry might be the answer
Transformer based large language models with emergent capabilities are b...

04/20/2023
Interventional Probing in High Dimensions: An NLI Case Study
Probing strategies have been shown to detect the presence of various lin...

08/17/2023
Linearity of Relation Decoding in Transformer Language Models
Much of the knowledge encoded in transformer language models (LMs) may b...

05/14/2021
Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction
When language models process syntactically complex sentences, do they us...

10/21/2022
A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models
We have recently witnessed a number of impressive results on hard mathem...

12/13/2021
Sparse Interventions in Language Models with Differentiable Masking
There has been a lot of interest in understanding what information is ca...
