CharacterBERT and Self-Teaching for Improving the Robustness of Dense Retrievers on Queries with Typos

04/01/2022
by Shengyao Zhuang, et al.

Current dense retrievers are not robust to out-of-domain and outlier queries, i.e. their effectiveness on these queries is much poorer than what one would expect. In this paper, we consider a specific instance of such queries: queries that contain typos. We show that a small character-level perturbation in a query (as caused by a typo) severely degrades the effectiveness of dense retrievers. We then demonstrate that the root cause of this lies in the input tokenization strategy employed by BERT. In BERT, tokenization is performed with BERT's WordPiece tokenizer, and we show that a token containing a typo significantly changes the token distribution obtained after tokenization. This change in distribution translates into changes in the input embeddings passed to the BERT-based query encoder of dense retrievers. We then turn our attention to devising dense retriever methods that are robust to queries with typos, while remaining as effective as previous methods on queries without typos. For this, we use CharacterBERT as the backbone encoder and an efficient yet effective training method, called Self-Teaching (ST), that distills knowledge from queries without typos into queries with typos. Experimental results show that CharacterBERT in combination with ST achieves significantly higher effectiveness on queries with typos than previous methods. Along with these results and the open-sourced implementation of the methods, we also provide a new passage retrieval dataset consisting of real-world queries with typos and associated relevance assessments on the MS MARCO corpus, thus supporting the research community in the investigation of effective and robust dense retrievers. Code, experimental results and dataset are available at https://github.com/ielab/CharacterBERT-DR.
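To make the tokenization argument concrete, the short sketch below (not taken from the paper's repository; the example query and the bert-base-uncased checkpoint are chosen purely for illustration) shows how a single dropped character pushes a word out of the WordPiece vocabulary, so the misspelled word is split into several subword pieces and the input embeddings fed to the query encoder change accordingly.

```python
# Illustrative only: how one typo changes BERT's WordPiece tokenization.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

clean_query = "what is the definition of information retrieval"
typo_query  = "what is the defintion of information retrieval"  # dropped 'i'

# The clean word "definition" maps to a single in-vocabulary token, while the
# misspelled "defintion" is broken into several subword pieces, so the two
# queries yield different input embedding sequences for the query encoder.
print(tokenizer.tokenize(clean_query))
print(tokenizer.tokenize(typo_query))
```

The Self-Teaching idea can likewise be sketched as a distillation loss in which the score distribution produced by the query without typos acts as the teacher for its typo'd variant. The snippet below is a minimal sketch under that assumption (the function name, tensor shapes and the use of in-batch passages are illustrative, not the authors' code); in training it would be added to the usual contrastive ranking loss computed on the queries without typos.

```python
# Minimal sketch of a self-teaching style distillation loss (assumed form,
# simplified from the idea described in the abstract, not the authors' code).
import torch
import torch.nn.functional as F

def self_teaching_loss(clean_scores: torch.Tensor,
                       typo_scores: torch.Tensor) -> torch.Tensor:
    """clean_scores, typo_scores: [batch, num_passages] query-passage
    similarity scores for the same queries without and with typos."""
    teacher = F.softmax(clean_scores, dim=-1).detach()   # teacher is not updated
    student = F.log_softmax(typo_scores, dim=-1)
    # KL divergence pulls the typo'd query's score distribution towards the
    # distribution produced by the clean query.
    return F.kl_div(student, teacher, reduction="batchmean")
```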


Related research:

08/27/2021 · Dealing with Typos for BERT-based Passage Retrieval and Ranking
Passage retrieval and ranking is a key task in open-domain question answ...

06/17/2023 · Typo-Robust Representation Learning for Dense Retrieval
Dense retrieval is a basic building block of information retrieval appli...

06/02/2022 · What Are Expected Queries in End-to-End Object Detection?
End-to-end object detection is rapidly progressed after the emergence of...

04/17/2023 · Typos-aware Bottlenecked Pre-Training for Robust Dense Retrieval
Current dense retrievers (DRs) are limited in their ability to effective...

05/10/2021 · Few-Shot Conversational Dense Retrieval
Dense retrieval (DR) has the potential to resolve the query understandin...

08/13/2021 · On Single and Multiple Representations in Dense Passage Retrieval
The advent of contextualised language models has brought gains in search...
