Demystifying Neural Language Models' Insensitivity to Word-Order

07/29/2021
by Louis Clouâtre, et al.

Recent research analyzing the sensitivity of natural language understanding models to word-order perturbations has shown that state-of-the-art models on several language tasks may process text in a way that is seldom explained by conventional syntax and semantics. In this paper, we investigate the insensitivity of natural language models to word order by quantifying perturbations and analyzing their effect on neural models' performance on language understanding tasks in the GLUE benchmark. To that end, we propose two metrics, the Direct Neighbour Displacement (DND) and the Index Displacement Count (IDC), which score the local and global ordering of tokens in the perturbed texts, and we observe that the perturbation functions found in prior literature affect only the global ordering while the local ordering remains relatively unperturbed. We further propose perturbations at the granularity of sub-words and characters to study the correlation between DND, IDC, and the performance of neural language models on natural language tasks. We find that neural language models, including pretrained and non-pretrained Transformers, LSTMs, and convolutional architectures, rely on the local ordering of tokens more than on the global ordering. The proposed metrics and the suite of perturbations allow a systematic way to study the (in)sensitivity of neural language understanding models to varying degrees of perturbation.
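
The full paper defines DND and IDC precisely; as a rough illustration of what scoring local versus global ordering could look like, the sketch below implements one plausible reading of each metric: IDC as the mean absolute displacement of token indices (global ordering) and DND as the fraction of originally adjacent token pairs broken by the perturbation (local ordering). The function names, the greedy matching of duplicate tokens, and the normalizations are illustrative assumptions, not the paper's exact formulations.

```python
# Illustrative sketch (assumed definitions, not the paper's exact metrics).

def index_displacement_count(original, perturbed):
    """Global-ordering score: mean absolute shift of each token's index,
    normalized by sequence length (assumed interpretation of IDC)."""
    n = len(original)
    remaining = list(enumerate(original))  # greedy matching for duplicate tokens
    total = 0
    for new_idx, tok in enumerate(perturbed):
        for k, (old_idx, old_tok) in enumerate(remaining):
            if old_tok == tok:
                total += abs(new_idx - old_idx)
                remaining.pop(k)
                break
    return total / (n * n)  # rough normalization so the score stays in [0, 1]


def direct_neighbour_displacement(original, perturbed):
    """Local-ordering score: fraction of adjacent token pairs in the original
    text that are no longer adjacent in the perturbed text (assumed DND)."""
    original_pairs = set(zip(original, original[1:]))
    perturbed_pairs = set(zip(perturbed, perturbed[1:]))
    if not original_pairs:
        return 0.0
    return len(original_pairs - perturbed_pairs) / len(original_pairs)


if __name__ == "__main__":
    orig = "the quick brown fox jumps over the lazy dog".split()
    # A phrase-level shuffle moves tokens far globally but keeps most neighbours intact.
    shuffled = "over the lazy dog the quick brown fox jumps".split()
    print("IDC:", index_displacement_count(orig, shuffled))   # large global displacement
    print("DND:", direct_neighbour_displacement(orig, shuffled))  # small local displacement
```

On this toy example the phrase-level shuffle yields a high IDC but a low DND, mirroring the abstract's observation that many published perturbations disturb global ordering while leaving local ordering largely intact.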

Related research

09/24/2019
Do Massively Pretrained Language Models Make Better Storytellers?
Large neural language models trained on massive amounts of text have eme...

04/15/2021
Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models
While vector-based language representations from pretrained language mod...

09/15/2020
It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners
When scaled to hundreds of billions of parameters, pretrained language m...

09/10/2021
Studying Word Order Through Iterative Shuffling
As neural language models approach human performance on NLP benchmark ta...

03/21/2022
Word Order Does Matter (And Shuffled Language Models Know It)
Recent studies have shown that language models pretrained and/or fine-tu...

09/04/2023
A Comparative Analysis of Pretrained Language Models for Text-to-Speech
State-of-the-art text-to-speech (TTS) systems have utilized pretrained l...

09/28/2021
Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations
Recent research has adopted a new experimental field centered around the...
