Do Language Models Learn Position-Role Mappings?

02/08/2022
by Jackson Petty, et al.

How is knowledge of position-role mappings in natural language learned? We explore this question in a computational setting, testing whether a variety of well-performing pretrained language models (BERT, RoBERTa, and DistilBERT) exhibit knowledge of these mappings, and whether this knowledge persists across syntactic, structural, and lexical alternations. In Experiment 1, we show that these neural models do indeed recognize distinctions between theme and recipient roles in ditransitive constructions, and that these distinct patterns are shared across construction types. We strengthen this finding in Experiment 2 by showing that fine-tuning these language models on novel theme- and recipient-like tokens in one paradigm allows the models to make correct predictions about their placement in other paradigms, suggesting that knowledge of these mappings is shared rather than independently learned. We do, however, observe some limitations of this generalization when tasks involve constructions with novel ditransitive verbs, hinting at a degree of lexical specificity which underlies model performance.
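The kind of masked-LM probe Experiment 1 describes can be illustrated with a minimal sketch built on the Hugging Face transformers fill-mask pipeline. This is only an illustration of the general approach, not the authors' protocol; the model name, sentence frames, and candidate fillers below are our own assumptions.

    # A minimal sketch of a position-role probe for ditransitive frames.
    # Assumes the Hugging Face `transformers` library; the sentences and
    # candidate fillers are illustrative, not the paper's materials.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    # Theme and recipient slots in the two English ditransitive frames:
    # double-object (DO):        "gave RECIPIENT the THEME"
    # prepositional dative (PO): "gave the THEME to RECIPIENT"
    frames = {
        "DO recipient": "The teacher gave [MASK] the book.",
        "DO theme": "The teacher gave the student the [MASK].",
        "PO recipient": "The teacher gave the book to [MASK].",
        "PO theme": "The teacher gave the [MASK] to the student.",
    }

    recipient_like = ["him", "her", "them"]
    theme_like = ["book", "letter", "prize"]

    # If the model encodes position-role mappings, recipient-like fillers
    # should outscore theme-like fillers in recipient slots (and vice versa)
    # in both frames, even though the linear positions of the roles differ.
    for label, sentence in frames.items():
        print(f"\n{label}: {sentence}")
        for cand in recipient_like + theme_like:
            # `targets` restricts scoring to the candidate filler of interest
            (result,) = fill(sentence, targets=[cand])
            print(f"  P({cand!r}) = {result['score']:.4f}")

A transfer test in the spirit of Experiment 2 would follow the same scoring pattern, except that the candidate fillers would be novel tokens added to the vocabulary and fine-tuned in one frame before being scored in the other.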


Related research

11/18/2020 · Predicting metrical patterns in Spanish poetry with language models
In this paper, we compare automated metrical pattern identification syst...

03/21/2022 · Word Order Does Matter (And Shuffled Language Models Know It)
Recent studies have shown that language models pretrained and/or fine-tu...

10/25/2022 · Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models
Structural probing work has found evidence for latent syntactic informat...

10/21/2022 · Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities
Humans exhibit garden path effects: When reading sentences that are temp...

04/14/2021 · Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
Natural-language prompts have recently been used to coax pretrained lang...

05/29/2023 · A Method for Studying Semantic Construal in Grammatical Constructions with Interpretable Contextual Embedding Spaces
We study semantic construal in grammatical constructions using large lan...

02/04/2023 · Construction Grammar Provides Unique Insight into Neural Language Models
Construction Grammar (CxG) has recently been used as the basis for probi...
