Language model acceptability judgements are not always robust to context

12/18/2022
by Koustuv Sinha, et al.

Targeted syntactic evaluations of language models ask whether models show stable preferences for syntactically acceptable content over minimal-pair unacceptable inputs. Most targeted syntactic evaluation datasets ask models to make these judgements with just a single context-free sentence as input. This does not match language models' training regime, in which input sentences are always highly contextualized by the surrounding corpus. This mismatch raises an important question: how robust are models' syntactic judgements in different contexts? In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality. We find that model judgements are generally robust when placed in randomly sampled linguistic contexts. However, they are substantially unstable for contexts containing syntactic structures matching those in the critical test content. Among all tested models (GPT-2 and five variants of OPT), we significantly improve models' judgements by providing contexts with matching syntactic structures, and conversely significantly worsen them using unacceptable contexts with matching but violated syntactic structures. This effect is amplified by the length of the context, except for unrelated inputs. We show that these changes in model performance are not explainable by simple features matching the context and the test inputs, such as lexical overlap and dependency overlap. This sensitivity to highly specific syntactic features of the context can only be explained by the models' implicit in-context learning abilities.
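To make the evaluation setup concrete, the sketch below scores a hypothetical minimal pair with GPT-2 (via Hugging Face transformers), once with no context and once with a prepended context whose syntactic structure matches the test sentences. The model choice, the example pair, and the context string are illustrative assumptions, not the paper's actual stimuli or released code; a model "prefers" the acceptable sentence when it assigns it the higher summed token log-probability.

```python
# Minimal sketch (assumed setup, not the authors' code): minimal-pair acceptability
# judgement with GPT-2, with and without a prepended context.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str, context: str = "") -> float:
    """Sum of token log-probabilities for `sentence`, conditioned on `context`."""
    context_ids = tokenizer(context)["input_ids"] if context else []
    sentence_ids = tokenizer(sentence)["input_ids"]
    input_ids = torch.tensor([context_ids + sentence_ids])
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Score only the test-sentence tokens; each token's probability comes from
    # the model's prediction at the previous position.
    for pos in range(len(context_ids), len(context_ids) + len(sentence_ids)):
        if pos == 0:
            continue  # no prediction available for the very first token
        token_id = input_ids[0, pos]
        total += log_probs[0, pos - 1, token_id].item()
    return total

# Hypothetical minimal pair and context (illustrative; not from the paper's data).
acceptable = "The keys to the cabinet are on the table."
unacceptable = "The keys to the cabinet is on the table."
matching_context = "The reports from the committee are overdue. "  # same agreement structure

for ctx_name, ctx in [("no context", ""), ("matching context", matching_context)]:
    good = sentence_logprob(acceptable, ctx)
    bad = sentence_logprob(unacceptable, ctx)
    print(f"{ctx_name}: prefers acceptable = {good > bad}")
```

The same scoring loop should extend to the OPT variants the paper mentions by loading a different causal LM checkpoint; longer or grammatically violated contexts can be tested by changing the prepended string.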

Related research

06/10/2021
Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models
Targeted syntactic evaluations have demonstrated the ability of language...

07/06/2023
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as i...

06/01/2018
Some of Them Can be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers
We study the role of linguistic context in predicting quantifiers (`few'...

04/19/2021
Refining Targeted Syntactic Evaluation of Language Models
Targeted syntactic evaluation of subject-verb number agreement in Englis...

06/06/2021
A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
We present a targeted, scaled-up comparison of incremental processing in...

09/16/2021
The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation
Temporary syntactic ambiguities arise when the beginning of a sentence i...

10/12/2020
Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models
Humans can learn structural properties about a word from minimal experie...
