Word Order Does Matter (And Shuffled Language Models Know It)

03/21/2022
by Vinit Ravishankar et al.

Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information. Somewhat counter-intuitively, some of these studies also report that position embeddings appear to be crucial for models' good performance with shuffled text. We probe these language models for word order information and investigate what position embeddings learned from shuffled text encode, showing that these models retain information pertaining to the original, naturalistic word order. We show this is in part due to a subtlety in how shuffling is implemented in previous work: sentences are shuffled before rather than after subword segmentation. Surprisingly, we find that even language models trained on text shuffled after subword segmentation retain some semblance of word order information, because of statistical dependencies between sentence length and unigram probabilities. Finally, we show that, beyond GLUE, a variety of language understanding tasks do require word order information, often to an extent that cannot be learned through fine-tuning.
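The before-versus-after distinction can be made concrete with a small tokenization example. The sketch below is illustrative rather than the paper's implementation: it assumes the Hugging Face transformers library with a bert-base-uncased tokenizer and an example sentence of our own choosing, and it contrasts permuting whole words before segmentation with permuting subword tokens after segmentation.

```python
# Minimal sketch contrasting the two shuffling schemes discussed above.
# Assumes the Hugging Face `transformers` library; the model name,
# sentence, and random seed are illustrative, not from the paper.
import random

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sentence = "Shuffled language models still encode word order information"
rng = random.Random(0)

# (a) Shuffle BEFORE subword segmentation: permute whole words first,
# then tokenize. Multi-piece words stay contiguous, so their internal
# subword order is preserved.
words = sentence.split()
shuffled_words = words[:]
rng.shuffle(shuffled_words)
before_segmentation = tokenizer.tokenize(" ".join(shuffled_words))

# (b) Shuffle AFTER subword segmentation: tokenize first, then permute
# the subword tokens themselves. Word-internal order is destroyed too.
subwords = tokenizer.tokenize(sentence)
after_segmentation = subwords[:]
rng.shuffle(after_segmentation)

print("shuffled before segmentation:", before_segmentation)
print("shuffled after segmentation: ", after_segmentation)
```

In scheme (a), words that split into several wordpieces keep their internal subword order, which is one route by which word order information can survive in nominally "shuffled" training text.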


Related research

The Curious Case of Absolute Position Embeddings (10/23/2022)
Transformer language models encode the notion of word order using positi...

Towards preserving word order importance through Forced Invalidation (04/11/2023)
Large pre-trained language models such as BERT have been widely used as ...

Do Language Models Learn Position-Role Mappings? (02/08/2022)
How is knowledge of position-role mappings in natural language learned? ...

Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP (02/28/2021)
When trained on large, unfiltered crawls from the internet, language mod...

Can Pretrained Language Models Derive Correct Semantics from Corrupt Subwords under Noise? (06/27/2023)
For Pretrained Language Models (PLMs), their susceptibility to noise has...

Demystifying Neural Language Models' Insensitivity to Word-Order (07/29/2021)
Recent research analyzing the sensitivity of natural language understand...

Spelling convention sensitivity in neural language models (03/06/2023)
We examine whether large neural language models, trained on very large c...
