Spelling convention sensitivity in neural language models

03/06/2023
by Elizabeth Nielsen, et al.

We examine whether large neural language models, trained on very large collections of varied English text, learn the potentially long-distance dependency of British versus American spelling conventions, i.e., whether spelling is consistently one or the other within model-generated strings. In contrast to long-distance dependencies in non-surface underlying structure (e.g., syntax), spelling consistency is easier to measure both in LMs and in the text corpora used to train them, which can provide additional insight into certain observed model behaviors. Using a set of probe words unique to either British or American English, we first establish that training corpora exhibit substantial (though not total) consistency. A large T5 language model does appear to internalize this consistency, though only with respect to observed lexical items (not nonce words with British/American spelling patterns). We further experiment with correcting for biases in the training data by fine-tuning T5 on synthetic data that has been debiased, and find that the fine-tuned T5 remains only somewhat sensitive to spelling consistency. Further experiments show GPT2 to be similarly limited.
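The consistency measurement the abstract describes can be sketched roughly as follows: count occurrences of British-only and American-only probe words in a generated string, and flag a string as consistent if it never mixes the two conventions. The probe-word lists below are illustrative examples only, not the paper's actual word sets.

```python
# Illustrative British-only / American-only probe words (not the paper's lists).
BRITISH = {"colour", "favourite", "organise", "theatre", "travelled"}
AMERICAN = {"color", "favorite", "organize", "theater", "traveled"}

def spelling_consistency(text: str) -> tuple[int, int, bool]:
    """Return (#British probe hits, #American probe hits, consistent?)."""
    tokens = (t.strip(".,;:!?") for t in text.lower().split())
    b = a = 0
    for t in tokens:
        if t in BRITISH:
            b += 1
        elif t in AMERICAN:
            a += 1
    # A string is consistent if it does not mix both conventions.
    return b, a, not (b and a)
```

For example, `spelling_consistency("My favourite colour is red.")` yields `(2, 0, True)`, while a string mixing "organize" with "theatre" would be flagged as inconsistent.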


Related research:

- Language Model Behavior: A Comprehensive Survey (03/20/2023)
- Training-free Lexical Backdoor Attacks on Language Models (02/08/2023)
- Entity Tracking in Language Models (05/03/2023)
- Navigating Human Language Models with Synthetic Agents (08/10/2020)
- Word Order Does Matter (And Shuffled Language Models Know It) (03/21/2022)
- Fine-Tuning Language Models via Epistemic Neural Networks (11/03/2022)
- TinyStories: How Small Can Language Models Be and Still Speak Coherent English? (05/12/2023)
