Adapt-and-Distill: Developing Small, Fast and Effective Pretrained Language Models for Domains

06/25/2021
by Yunzhi Yao, et al.

Large pre-trained models have achieved great success in many natural language processing tasks. However, when applied to specific domains, these models suffer from domain shift and pose challenges for fine-tuning and online serving under latency and capacity constraints. In this paper, we present a general approach to developing small, fast and effective pre-trained models for specific domains. This is achieved by adapting off-the-shelf general pre-trained models and performing task-agnostic knowledge distillation in target domains. Specifically, we propose domain-specific vocabulary expansion in the adaptation stage and employ corpus-level occurrence probability to choose the size of the incremental vocabulary automatically. We then systematically explore different strategies for compressing the large pre-trained models for specific domains. We conduct experiments in the biomedical and computer science domains. The results demonstrate that our approach achieves better performance than the BERT-BASE model on domain-specific tasks while being 3.3x smaller and 5.1x faster. The code and pre-trained models are available at https://aka.ms/adalm.
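As a rough illustration of the vocabulary-expansion idea described in the abstract, the sketch below adds domain-specific subwords to a general vocabulary in fixed increments and stops once the corpus-level occurrence probability stops improving. This is a minimal sketch under stated assumptions, not the paper's exact algorithm: the greedy tokenizer, the unigram scoring, and the names expand_vocab, step and min_gain are illustrative choices, not names from the released code.

```python
# Hypothetical sketch of incremental, corpus-driven vocabulary expansion.
# Assumptions: WordPiece-style "##" continuation pieces and a simple unigram
# scoring of the tokenized corpus; not the paper's exact procedure.
import math
from collections import Counter

def tokenize(word, vocab):
    """Greedy longest-match-first subword tokenization (simplified WordPiece)."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            cand = word[start:end] if start == 0 else "##" + word[start:end]
            if cand in vocab:
                piece = cand
                break
            end -= 1
        if piece is None:          # no matching piece: back off to [UNK]
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens

def corpus_log_prob(words, vocab):
    """Average per-word log probability of the corpus under a unigram subword model."""
    tokenized = [tokenize(w, vocab) for w in words]
    counts = Counter(t for toks in tokenized for t in toks)
    total = sum(counts.values())
    logp = sum(math.log(counts[t] / total) for toks in tokenized for t in toks)
    return logp / len(words)

def expand_vocab(general_vocab, domain_subwords, words, step=1000, min_gain=0.01):
    """Add domain subwords in increments; stop when the corpus-level gain is marginal."""
    vocab = set(general_vocab)
    best = corpus_log_prob(words, vocab)
    for i in range(0, len(domain_subwords), step):
        candidate = vocab | set(domain_subwords[i:i + step])
        score = corpus_log_prob(words, candidate)
        if score - best < min_gain:   # marginal improvement: stop expanding
            break
        vocab, best = candidate, score
    return vocab
```

In practice one would reuse the model's actual tokenizer and derive the candidate domain subwords from an in-domain corpus; the stopping rule above mirrors, in spirit, the automatic selection of the incremental vocabulary size mentioned in the abstract.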


Related research

An Experimental Evaluation of Transformer-based Language Models in the Biomedical Domain (12/31/2020)

Cost-effective Deployment of BERT Models in Serverless Environment (03/19/2021)

Prompt Combines Paraphrase: Teaching Pre-trained Models to Understand Rare Biomedical Words (09/14/2022)

Paying More Attention to Self-attention: Improving Pre-trained Language Models via Attention Guiding (04/06/2022)

CALM: Continuous Adaptive Learning for Language Modeling (04/08/2020)

MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices (04/06/2020)

AutoADR: Automatic Model Design for Ad Relevance (10/14/2020)
