
Deploying a BERT-based Query-Title Relevance Classifier in a Production System: a View from the Trenches

by Leonard Dahlmann, et al.

The Bidirectional Encoder Representations from Transformers (BERT) model has radically improved the performance of many Natural Language Processing (NLP) tasks, such as Text Classification and Named Entity Recognition (NER). However, BERT is challenging to scale for low-latency, high-throughput industrial use cases because of its enormous size. We successfully optimize a Query-Title Relevance (QTR) classifier for deployment via a compact model, which we name BERT Bidirectional Long Short-Term Memory (BertBiLSTM). The model classifies an input in at most 0.2 ms on CPU. BertBiLSTM exceeds the off-the-shelf BERT model's performance in both accuracy and efficiency on this real-world production task. We achieve this result in two phases. First, we create a pre-trained model, called eBERT, which uses the original BERT architecture trained on our unique item-title corpus; we then fine-tune eBERT for the QTR task. Second, we train the BertBiLSTM model to mimic the fine-tuned eBERT model through a process called Knowledge Distillation (KD), and we show the effect of data augmentation on achieving this goal. Experimental results show that the proposed model outperforms other compact, production-ready models.
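The abstract's second phase, Knowledge Distillation, is commonly implemented by training the small student on a blend of the teacher's softened output distribution and the ground-truth labels. The paper does not publish its exact loss, so the sketch below is a generic hedged illustration (numpy only): the temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not values from the paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields a softer distribution."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target term (mimic the teacher) with a hard-label
    cross-entropy term. `T*T` rescales soft-term gradients, as is standard."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * T * T
    log_q = np.log(softmax(student_logits))
    hard = -log_q[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1.0 - alpha) * hard
```

A student whose logits match the teacher's incurs a lower loss than one that contradicts it, which is what drives the BiLSTM toward the fine-tuned eBERT's behavior on the binary QTR decision.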




GottBERT: a pure German Language Model

Lately, pre-trained language models advanced the field of natural langua...

A More Efficient Chinese Named Entity Recognition based on BERT and Syntactic Analysis

We propose a new Named entity recognition (NER) method to effectively ma...

Cost-effective Deployment of BERT Models in Serverless Environment

In this study we demonstrate the viability of deploying BERT-style model...

Classification of US Supreme Court Cases using BERT-Based Techniques

Models based on bidirectional encoder representations from transformers ...

BoostingBERT: Integrating Multi-Class Boosting into BERT for NLP Tasks

As a pre-trained Transformer model, BERT (Bidirectional Encoder Represen...

A Comparison of LSTM and BERT for Small Corpus

Recent advancements in the NLP field showed that transfer learning helps...

TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval

Pre-trained language models like BERT have achieved great success in a w...