A Flexible Multi-Task Model for BERT Serving

07/12/2021
by Tianwen Wei, et al.

In this demonstration, we present an efficient BERT-based multi-task (MT) framework that is particularly suitable for iterative and incremental development of the tasks. The proposed framework is based on the idea of partial fine-tuning, i.e., fine-tuning only the top layers of BERT while keeping the lower layers frozen. For each task, we independently train a single-task (ST) model using partial fine-tuning. We then compress the task-specific layers in each ST model using knowledge distillation. Finally, the compressed ST models are merged into one MT model in which the frozen layers are shared across the tasks. We exemplify our approach on eight GLUE tasks, demonstrating that it achieves both strong performance and efficiency. We have implemented our method in the utterance understanding system of XiaoAI, a commercial AI assistant developed by Xiaomi. We estimate that our model reduces the overall serving cost by 86%.
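The serving-time layout described above can be pictured as a single frozen BERT trunk feeding several small, distilled task heads. Below is a minimal PyTorch sketch of that layout; the class names, the 9/3 layer split, the vocabulary size, and the [CLS]-style pooling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SharedTrunk(nn.Module):
    """Bottom encoder layers: frozen and shared by every task."""
    def __init__(self, embeddings, layers):
        super().__init__()
        self.embeddings = embeddings
        self.layers = nn.ModuleList(layers)
        for p in self.parameters():
            p.requires_grad = False  # frozen: never updated during task training

    def forward(self, input_ids):
        h = self.embeddings(input_ids)
        for layer in self.layers:
            h = layer(h)
        return h

class TaskHead(nn.Module):
    """Top layers fine-tuned (and later distilled) for one task."""
    def __init__(self, layers, hidden_size, num_labels):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, h):
        for layer in self.layers:
            h = layer(h)
        return self.classifier(h[:, 0])  # pool the first token

class MultiTaskModel(nn.Module):
    """Merged MT model: one frozen trunk, one head per task."""
    def __init__(self, trunk, heads):
        super().__init__()
        self.trunk = trunk
        self.heads = nn.ModuleDict(heads)

    def forward(self, input_ids, tasks):
        h = self.trunk(input_ids)  # computed once, reused by all requested tasks
        return {t: self.heads[t](h) for t in tasks}

# Illustrative instantiation with stand-in encoder layers (not real BERT weights).
hidden = 768
make_layer = lambda: nn.TransformerEncoderLayer(hidden, nhead=12, batch_first=True)
trunk = SharedTrunk(embeddings=nn.Embedding(30522, hidden),
                    layers=[make_layer() for _ in range(9)])
heads = {"sst2": TaskHead([make_layer() for _ in range(3)], hidden, num_labels=2),
         "mnli": TaskHead([make_layer() for _ in range(3)], hidden, num_labels=3)}
model = MultiTaskModel(trunk, heads)
logits = model(torch.randint(0, 30522, (2, 16)), tasks=["sst2", "mnli"])
```

Because the trunk's parameters never change, its forward pass can be computed once per utterance and reused by every task head, which is where the reported serving-cost reduction comes from.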


