TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval

02/14/2020
by Wenhao Lu, et al.

Pre-trained language models like BERT have achieved great success in a wide variety of NLP tasks, but their superior performance comes with a high demand for computational resources, which hinders their application in low-latency IR systems. We present TwinBERT, a model for effective and efficient retrieval that uses twin-structured BERT-like encoders to represent the query and the document respectively, plus a crossing layer that combines the two embeddings into a similarity score. Unlike BERT, where the two input sentences are concatenated and encoded together, TwinBERT decouples them during encoding and produces the query and document embeddings independently, which allows document embeddings to be pre-computed offline and cached in memory. The only computation left for run time is then the query encoding and the query-document crossing. This single change saves a large amount of computation time and resources, and therefore significantly improves serving efficiency. Moreover, several well-designed network layers and training strategies are proposed to further reduce computational cost while keeping performance on par with the BERT model. Finally, we develop two versions of TwinBERT, for retrieval and relevance tasks respectively, and both achieve close or on-par performance relative to the BERT-Base model. The model was trained under the teacher-student framework and evaluated with data from one of the major search engines. Experimental results show that inference time was reduced significantly, for the first time to around 20 ms on CPU, while most of the performance gain of the fine-tuned BERT-Base model was retained. Integrating the models into production systems also yielded remarkable improvements on relevance metrics with negligible impact on latency.
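To illustrate the decoupled-encoding idea, here is a minimal PyTorch sketch of a TwinBERT-style scorer. It is an assumption-laden illustration, not the paper's implementation: a small generic transformer stands in for the BERT-like encoders, a cosine-similarity crossing layer stands in for the paper's crossing layer, and all names (TextEncoder, TwinBERTScorer, encode_doc) are hypothetical. It only shows the serving pattern described in the abstract: document embeddings are computed offline and cached, so the per-request cost is the query encoder plus the crossing layer.

```python
# Sketch of a twin-encoder scorer with offline document caching (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextEncoder(nn.Module):
    """Encodes a token-id sequence into a single embedding vector."""
    def __init__(self, vocab_size=30522, hidden=256, layers=3, heads=4, max_len=64):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.pos = nn.Embedding(max_len, hidden)
        block = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                           dim_feedforward=4 * hidden,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, ids, mask):
        pos = torch.arange(ids.size(1), device=ids.device)
        x = self.tok(ids) + self.pos(pos)
        x = self.encoder(x, src_key_padding_mask=~mask.bool())
        # Mean-pool over non-padding tokens to get one vector per text.
        m = mask.unsqueeze(-1).float()
        return (x * m).sum(1) / m.sum(1).clamp(min=1e-6)

class TwinBERTScorer(nn.Module):
    """Twin encoders for query and document plus a crossing layer."""
    def __init__(self):
        super().__init__()
        self.query_encoder = TextEncoder()
        self.doc_encoder = TextEncoder()

    def encode_doc(self, ids, mask):
        # Runs offline; the resulting vectors are cached in memory.
        return self.doc_encoder(ids, mask)

    def forward(self, q_ids, q_mask, doc_vec):
        q_vec = self.query_encoder(q_ids, q_mask)
        # Crossing layer (assumed here to be cosine similarity).
        return F.cosine_similarity(q_vec, doc_vec, dim=-1)

model = TwinBERTScorer().eval()

# Offline: pre-compute and cache document embeddings.
doc_ids = torch.randint(0, 30522, (2, 64))
doc_mask = torch.ones(2, 64, dtype=torch.long)
with torch.no_grad():
    doc_cache = model.encode_doc(doc_ids, doc_mask)

# Online: only the query encoder and the crossing layer run per request.
q_ids = torch.randint(0, 30522, (2, 64))
q_mask = torch.ones(2, 64, dtype=torch.long)
with torch.no_grad():
    scores = model(q_ids, q_mask, doc_cache)
print(scores.shape)  # torch.Size([2])
```

The contrast with cross-encoding BERT is the key design choice: because query and document never attend to each other, the expensive document encoding moves out of the serving path, which is what makes the roughly 20 ms CPU latency reported in the abstract plausible.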


