SDR: Efficient Neural Re-ranking using Succinct Document Representation

10/03/2021
by Nachshon Cohen, et al.

BERT-based ranking models have achieved superior performance on various information retrieval tasks. However, their large number of parameters and complex self-attention operations incur significant latency overhead. To remedy this, recent works propose late-interaction architectures, which allow pre-computation of intermediate document representations and thus reduce runtime latency. Nonetheless, having solved the immediate latency issue, these methods introduce storage costs and network fetching latency, which limit their adoption in real-life production systems. In this work, we propose the Succinct Document Representation (SDR) scheme, which computes highly compressed intermediate document representations, mitigating the storage/network issue. Our approach first reduces the dimension of token representations by encoding them with a novel autoencoder architecture that uses the document's textual content in both the encoding and decoding phases. After this token encoding step, we further reduce the size of entire document representations using a modern quantization technique. Extensive evaluations on passage re-ranking on the MS MARCO dataset show that, compared to existing approaches using compressed document representations, our method is highly efficient, achieving 4x-11.6x better compression rates for the same ranking quality.
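To make the two-stage pipeline concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation: the layer sizes, the use of static (non-contextual) token embeddings as the textual side information, and the 8-bit scalar quantization standing in for the paper's "modern quantization technique" are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SideInfoAutoencoder(nn.Module):
    """Illustrative SDR-style token compressor (not the paper's exact model).

    The encoder maps a contextual token vector (e.g., from BERT) to a
    low-dimensional code. The decoder reconstructs the vector from that
    code *plus* a static embedding of the same token, which can be cheaply
    recomputed from the stored document text at query time -- so only the
    small code needs to be stored per token.
    """

    def __init__(self, dim: int = 768, code_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, code_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim + dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, contextual: torch.Tensor, static_emb: torch.Tensor):
        code = self.encoder(contextual)  # offline: store this compressed code
        recon = self.decoder(torch.cat([code, static_emb], dim=-1))  # online
        return code, recon


# Toy usage with random tensors standing in for real embeddings.
ae = SideInfoAutoencoder()
contextual = torch.randn(32, 768)  # contextual vectors for 32 document tokens
static_emb = torch.randn(32, 768)  # static embeddings derived from the raw text
code, recon = ae(contextual, static_emb)

# Stand-in for the quantization step: 8-bit uniform scalar quantization of
# the codes. The paper uses a more sophisticated scheme; this only shows
# where quantization fits in the pipeline.
scale = code.abs().max() / 127.0
quantized = torch.clamp((code / scale).round(), -127, 127).to(torch.int8)
dequantized = quantized.float() * scale  # used in place of `code` at query time
```

Even this toy configuration illustrates the storage win: 128 int8 values per token versus 768 float32 values is a 24x reduction before any further coding.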


Related research

04/29/2020
Efficient Document Re-Ranking for Transformers by Precomputing Term Representations
Deep pretrained transformer networks are effective at various ranking ta...

03/29/2022
Compact Token Representations with Contextual Quantization for Efficient Document Re-ranking
Transformer based re-ranking models can achieve high search relevance th...

04/29/2020
Expansion via Prediction of Importance with Contextualization
The identification of relevance with little textual context is a primary...

04/24/2021
Learning Passage Impacts for Inverted Indexes
Neural information retrieval systems typically use a cascading pipeline,...

04/28/2020
EARL: Speedup Transformer-based Rankers with Pre-computed Representation
Recent innovations in Transformer-based ranking models have advanced the...

03/24/2022
Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction
Recent progress in neural information retrieval has demonstrated large g...

08/28/2023
MEMORY-VQ: Compression for Tractable Internet-Scale Memory
Retrieval augmentation is a powerful but expensive method to make langua...
