ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT

04/27/2020
by Omar Khattab, et al.

Recent progress in Natural Language Understanding (NLU) is driving fast-paced advances in Information Retrieval (IR), largely owed to fine-tuning deep language models (LMs) for document ranking. While remarkably effective, the ranking models based on these LMs increase computational cost by orders of magnitude over prior approaches, particularly as they must feed each query-document pair through a massive neural network to compute a single relevance score. To tackle this, we present ColBERT, a novel ranking model that adapts deep LMs (in particular, BERT) for efficient retrieval. ColBERT introduces a late interaction architecture that independently encodes the query and the document using BERT and then employs a cheap yet powerful interaction step that models their fine-grained similarity. By delaying and yet retaining this fine-granular interaction, ColBERT can leverage the expressiveness of deep LMs while simultaneously gaining the ability to pre-compute document representations offline, considerably speeding up query processing. Beyond reducing the cost of re-ranking the documents retrieved by a traditional model, ColBERT's pruning-friendly interaction mechanism enables leveraging vector-similarity indexes for end-to-end retrieval directly from a large document collection. We extensively evaluate ColBERT using two recent passage search datasets. Results show that ColBERT's effectiveness is competitive with existing BERT-based models (and outperforms every non-BERT baseline), while executing two orders-of-magnitude faster and requiring four orders-of-magnitude fewer FLOPs per query.

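For readers unfamiliar with the late interaction idea, the following is a minimal sketch (in PyTorch, not the authors' released code) of the MaxSim-style scoring the abstract alludes to: each query token embedding is matched against its most similar document token embedding, and the per-token maxima are summed into a single relevance score. Tensor shapes, names, and the unit-normalization step below are illustrative assumptions.

```python
# A minimal sketch of ColBERT-style "late interaction" (MaxSim) scoring.
# Not the authors' released implementation; embedding dimensions, tensor
# names, and the unit-normalization step are illustrative assumptions.
import torch
import torch.nn.functional as F

def late_interaction_score(query_embs: torch.Tensor,
                           doc_embs: torch.Tensor) -> torch.Tensor:
    """Score one document for one query.

    query_embs: (num_query_tokens, dim), computed online at query time.
    doc_embs:   (num_doc_tokens, dim), can be pre-computed offline.
    """
    # Cosine similarity of every query token with every document token;
    # with unit-normalized embeddings a plain dot product suffices.
    sim = query_embs @ doc_embs.T          # (num_query_tokens, num_doc_tokens)
    # MaxSim: each query token keeps only its best-matching document token,
    # and the per-token maxima are summed into the relevance score.
    return sim.max(dim=1).values.sum()

# Example: re-rank a tiny candidate set whose embeddings were computed offline.
query = F.normalize(torch.randn(32, 128), dim=-1)
candidates = [F.normalize(torch.randn(180, 128), dim=-1) for _ in range(3)]
scores = torch.stack([late_interaction_score(query, d) for d in candidates])
print(scores.argsort(descending=True))     # candidate indices, best first
```

Because the document-side embeddings depend only on the document, they can be computed once and stored, or placed in a vector-similarity index, which is what lets ColBERT move most of the computation offline and keep query processing cheap.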
Related research:

02/01/2022
Improving BERT-based Query-by-Document Retrieval with Multi-Task Optimization
Query-by-document (QBD) retrieval is an Information Retrieval task in wh...

05/09/2022
Long Document Re-ranking with Modular Re-ranker
Long document re-ranking has been a challenging problem for neural re-ra...

02/14/2020
TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval
Pre-trained language models like BERT have achieved great success in a w...

07/08/2019
Incorporating Query Term Independence Assumption for Efficient Retrieval and Ranking using Deep Neural Networks
Classical information retrieval (IR) methods, such as query likelihood a...

07/10/2019
Let's measure run time! Extending the IR replicability infrastructure to include performance aspects
Establishing a docker-based replicability infrastructure offers the comm...

01/26/2021
Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations
Major scandals in corporate history have urged the need for regulatory c...
