Towards preserving word order importance through Forced Invalidation

04/11/2023
by Hadeel Al-Negheimish, et al.

Large pre-trained language models such as BERT have been widely used as a framework for natural language understanding (NLU) tasks. However, recent findings have revealed that pre-trained language models are insensitive to word order: performance on NLU tasks remains unchanged even after the words of a sentence are randomly permuted, destroying crucial syntactic information. To help models preserve the importance of word order, we propose a simple approach called Forced Invalidation (FI): forcing the model to identify permuted sequences as invalid samples. We perform an extensive evaluation of our approach on various English NLU and question-answering (QA) tasks, using both BERT-based models and attention-based models over word embeddings. Our experiments demonstrate that Forced Invalidation significantly improves the models' sensitivity to word order.
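The abstract does not specify how the invalidation objective is implemented, but one plausible reading is a data-augmentation scheme: each training sentence is paired with a word-permuted copy, and the permuted copies are labelled with an extra "invalid" class appended after the task's original labels. The sketch below illustrates that reading with Hugging Face transformers; the model choice (bert-base-uncased), the whitespace-level shuffling, and the helper names permute_words and forced_invalidation_batch are all assumptions for illustration, not the authors' released code.

```python
# A minimal sketch of the Forced Invalidation idea, NOT the authors' code.
# Assumption: permuted copies get one extra "invalid" class after the
# original task labels, and the classifier is trained on both together.
import random

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_TASK_LABELS = 2               # e.g. a binary NLU task (assumed)
INVALID_LABEL = NUM_TASK_LABELS   # extra class reserved for permuted inputs

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_TASK_LABELS + 1
)

def permute_words(sentence: str, rng: random.Random) -> str:
    """Destroy word order by shuffling whitespace-separated tokens."""
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

def forced_invalidation_batch(sentences, labels, rng):
    """Pair every original example with a permuted copy labelled invalid."""
    texts = list(sentences) + [permute_words(s, rng) for s in sentences]
    targets = list(labels) + [INVALID_LABEL] * len(sentences)
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return enc, torch.tensor(targets)

# One toy training step: the loss pushes the model to separate genuine
# task labels from the "invalid" class assigned to shuffled sentences.
enc, targets = forced_invalidation_batch(
    ["the movie was great", "a dull and tedious film"], [1, 0],
    rng=random.Random(0),
)
loss = model(**enc, labels=targets).loss
loss.backward()
```

Under this reading, a model that ignores word order cannot distinguish a sentence from its shuffled copy, so it cannot fit the augmented labels; learning the extra class therefore forces sensitivity to word order.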

