An Efficiency Study for SPLADE Models

07/08/2022
by Carlos Lassance et al.

Latency and efficiency issues are often overlooked when evaluating IR models based on Pretrained Language Models (PLMs), in part because of the wide range of hardware and software configurations under which such systems can be tested. Nevertheless, efficiency is an important property of these systems and should not be neglected. In this paper, we focus on improving the efficiency of the SPLADE model, which has achieved state-of-the-art zero-shot performance and competitive results on TREC collections. SPLADE efficiency can be controlled via a regularization factor, but controlling this regularization alone has been shown to be insufficient. To reduce the latency gap between SPLADE and traditional retrieval systems, we propose several techniques: L1 regularization for queries, separate document and query encoders, FLOPS-regularized middle-training, and faster query encoders. Our benchmark demonstrates that these techniques drastically improve efficiency while increasing performance metrics on in-domain data. To our knowledge, we propose the first neural models that, under the same computing constraints, achieve latency similar to traditional BM25 (less than a 4 ms difference) while retaining performance close to state-of-the-art single-stage neural rankers on in-domain data (less than a 10% reduction in MRR@10).
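To make the sparsity levers mentioned above concrete, here is a minimal PyTorch sketch (not the authors' code) of SPLADE-style pooling together with the two regularizers the abstract refers to: the FLOPS penalty and a plain L1 penalty for queries. Function names, shapes, and the toy driver are illustrative assumptions; only the log-saturated max-pooling and the two penalty formulations follow the published SPLADE/FLOPS definitions.

```python
# Minimal sketch of SPLADE sparse representations and their regularizers.
# Assumes token-level MLM logits of shape (batch, seq, vocab) from a PLM.
import torch

def splade_pool(logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Collapse token-level logits into one vocabulary-sized vector per input."""
    # log(1 + ReLU(w)) keeps weights non-negative and saturates large activations
    weights = torch.log1p(torch.relu(logits))
    # zero out padding positions before max-pooling over the sequence axis
    weights = weights * attention_mask.unsqueeze(-1)
    return weights.max(dim=1).values  # (batch, vocab)

def flops_loss(reps: torch.Tensor) -> torch.Tensor:
    """FLOPS regularizer: sum over the vocabulary of the squared mean
    activation per term, which pushes average posting-list load down."""
    return (reps.mean(dim=0) ** 2).sum()

def l1_loss(reps: torch.Tensor) -> torch.Tensor:
    """L1 regularizer (used here for queries): directly penalizes the number
    and magnitude of non-zero query terms, a key driver of retrieval latency."""
    return reps.abs().sum(dim=-1).mean()

if __name__ == "__main__":
    batch, seq, vocab = 8, 32, 30522  # BERT-sized vocabulary
    logits = torch.randn(batch, seq, vocab)
    mask = torch.ones(batch, seq)
    q_reps = splade_pool(logits, mask)
    # a full objective would look like:
    #   ranking_loss + lambda_q * l1_loss(q_reps) + lambda_d * flops_loss(d_reps)
    print(q_reps.shape, flops_loss(q_reps).item(), l1_loss(q_reps).item())
```

Note the asymmetry this enables: because queries and documents get separate penalties (and, per the abstract, can get separate encoders), query vectors can be made much sparser than document vectors, which is where most of the online latency savings come from.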


Related research

02/20/2023
Query Performance Prediction for Neural IR: Are We There Yet?
Evaluation in Information Retrieval relies on post-hoc empirical procedu...

06/06/2022
No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval
Recent work has shown that small distilled language models are strong co...

12/12/2022
In Defense of Cross-Encoders for Zero-Shot Retrieval
Bi-encoders and cross-encoders are widely used in many state-of-the-art ...

03/31/2023
Quick Dense Retrievers Consume KALE: Post Training Kullback Leibler Alignment of Embeddings for Asymmetrical dual encoders
In this paper, we consider the problem of improving the inference latenc...

05/05/2022
Toward A Fine-Grained Analysis of Distribution Shifts in MSMARCO
Recent IR approaches based on Pretrained Language Models (PLM) have now ...

02/14/2020
TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval
Pre-trained language models like BERT have achieved great success in a w...

11/02/2022
Multi-Vector Retrieval as Sparse Alignment
Multi-vector retrieval models improve over single-vector dual encoders o...
