The Role of Complex NLP in Transformers for Text Ranking?

07/06/2022
by David Rau et al.

Even though term-based methods such as BM25 provide strong baselines in ranking, under certain conditions they are dominated by large pre-trained masked language models (MLMs) such as BERT. To date, the source of their effectiveness remains unclear. Is it their ability to truly understand meaning through modeling syntactic aspects? We answer this by manipulating the input order and position information in ways that destroy the natural sequence order of query and passage, and show that the model still achieves comparable performance. Overall, our results highlight that syntactic aspects do not play a critical role in the effectiveness of re-ranking with BERT. We point instead to other mechanisms, such as query-passage cross-attention and richer embeddings that capture word meaning from aggregated context regardless of word order, as the main contributors to its superior performance.
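The order-destroying manipulation described above can be sketched as a simple token shuffle applied to the query and passage before re-ranking. This is an illustrative assumption for exposition, not the authors' exact procedure (the paper also manipulates position information inside the model):

```python
import random

def shuffle_tokens(text: str, seed: int = 0) -> str:
    """Destroy the natural word order of a text by shuffling its
    whitespace tokens. Deterministic for a fixed seed."""
    tokens = text.split()
    random.Random(seed).shuffle(tokens)
    return " ".join(tokens)

query = "what role does word order play in ranking"
passage = "BERT re-rankers score query passage pairs with cross-attention"

# Score the shuffled pair with the re-ranker in place of the original pair;
# if the scores are comparable, word order contributes little to ranking.
shuffled_query = shuffle_tokens(query)
shuffled_passage = shuffle_tokens(passage)
```

Because the shuffle preserves the token multiset, any difference in the re-ranker's score is attributable to sequence order alone, which is exactly the variable the experiment isolates.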

Related research

04/11/2023
Towards preserving word order importance through Forced Invalidation
Large pre-trained language models such as BERT have been widely used as ...

09/09/2022
Enhancing Pre-trained Models with Text Structure Knowledge for Question Generation
Today the pre-trained language models achieve great success for question...

04/25/2022
Groupwise Query Performance Prediction with BERT
While large-scale pre-trained language models like BERT have advanced th...

11/28/2019
Inducing Relational Knowledge from BERT
One of the most remarkable properties of word embeddings is the fact tha...

05/11/2021
BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies?
Analogies play a central role in human commonsense reasoning. The abilit...

06/11/2019
What Does BERT Look At? An Analysis of BERT's Attention
Large pre-trained neural networks such as BERT have had great recent suc...

09/19/2020
Prior Art Search and Reranking for Generated Patent Text
Generative models, such as GPT-2, have demonstrated impressive results r...
