Diagnosing BERT with Retrieval Heuristics

01/12/2022
by Arthur Câmara, et al.

Word embeddings, made widely popular in 2013 with the release of word2vec, have become a mainstay of NLP engineering pipelines. Recently, with the release of BERT, word embeddings have moved from the term-based embedding space to the contextual embedding space – each term is no longer represented by a single low-dimensional vector; instead, each term and its context together determine the vector weights. BERT's setup and architecture have been shown to be general enough to be applicable to many natural language tasks. Importantly for Information Retrieval (IR), in contrast to prior deep learning solutions to IR problems, which required significant tuning of neural net architectures and training regimes, "vanilla BERT" has been shown to outperform existing retrieval algorithms by a wide margin, including on tasks and corpora that have long resisted retrieval effectiveness gains over traditional IR baselines (such as Robust04). In this paper, we employ the recently proposed axiomatic dataset analysis technique – that is, we create diagnostic datasets that each fulfil a retrieval heuristic (both term matching and semantic-based) – to explore what BERT is able to learn. Contrary to our expectations, we find that BERT, when applied to a recently released large-scale web corpus with ad-hoc topics, does not adhere to any of the explored axioms. At the same time, BERT outperforms the traditional query likelihood retrieval model by 40%. This means that the axiomatic approach to IR (and its extension of diagnostic datasets created for retrieval heuristics) may in its current form not be applicable to large-scale corpora. Additional – different – axioms are needed.
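To make the diagnostic idea concrete, here is a minimal sketch of how one retrieval axiom can be checked against a scoring function. The abstract does not give the paper's exact axioms or implementation, so this example assumes a classic term-frequency heuristic in the style of TFC1 ("given two documents of equal length, the one with more query-term occurrences should score higher") and uses the query likelihood model with Dirichlet smoothing as the scorer; all function names and parameters below are illustrative, not taken from the paper.

```python
import math
from collections import Counter

def query_likelihood(query_terms, doc_terms, collection_terms, mu=2000):
    """Query likelihood score with Dirichlet smoothing (log space).

    A standard language-modeling scorer: each query term's probability is
    smoothed toward its background probability in the whole collection.
    """
    doc_tf = Counter(doc_terms)
    coll_tf = Counter(collection_terms)
    coll_len = len(collection_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for t in query_terms:
        p_coll = coll_tf[t] / coll_len  # background (collection) probability
        smoothed = (doc_tf[t] + mu * p_coll) / (doc_len + mu)
        if smoothed > 0:  # simplification: skip terms unseen everywhere
            score += math.log(smoothed)
    return score

def tfc1_fulfilled(scorer, query, doc_more_tf, doc_less_tf):
    """One diagnostic instance of a TFC1-style axiom.

    The axiom holds for this (query, d1, d2) triple if the document with
    more query-term occurrences (all else equal) receives a higher score.
    """
    return scorer(query, doc_more_tf) > scorer(query, doc_less_tf)

# Two equal-length documents; d1 contains the query term twice, d2 once.
d1 = ["cat", "cat", "dog", "bird"]
d2 = ["cat", "dog", "bird", "fish"]
collection = d1 + d2
scorer = lambda q, d: query_likelihood(q, d, collection)

print(tfc1_fulfilled(scorer, ["cat"], d1, d2))
```

A diagnostic dataset is then just many such triples, and a model's "adherence" to the axiom is the fraction of triples on which its ranking agrees with the heuristic – the quantity on which, per the abstract, BERT scores poorly despite its strong overall effectiveness.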


