Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains

12/02/2020
by Haojie Pan, et al.

Pre-trained language models have been applied to various NLP tasks with considerable performance gains. However, their large model sizes, together with long inference times, limit the deployment of such models in real-time applications. Typical approaches use knowledge distillation to distill large teacher models into small student models. However, most of these studies focus on a single domain only and ignore the transferable knowledge available in other domains. We argue that a teacher trained on knowledge digested across domains achieves better generalization capability and can therefore better guide knowledge distillation. To this end, inspired by meta-learning, we propose a Meta-Knowledge Distillation (Meta-KD) framework that builds a meta-teacher model capturing transferable knowledge across domains and uses it to pass knowledge to student models. Specifically, we first leverage a cross-domain learning process to train the meta-teacher on multiple domains, and then propose a meta-distillation algorithm to learn single-domain student models under the guidance of the meta-teacher. Experiments on two public multi-domain NLP tasks show the effectiveness and superiority of the proposed Meta-KD framework. We also demonstrate the capability of Meta-KD in few-shot and zero-shot learning settings.

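To make the two-stage recipe in the abstract concrete, the following is a minimal PyTorch-style sketch: a meta-teacher is first trained on data pooled from all domains, and a small single-domain student is then trained against its soft predictions. It illustrates only the general idea stated above, not the paper's actual Meta-KD algorithm; the model interfaces, loader format, and hyper-parameters are illustrative assumptions.

```python
# Minimal sketch of the two-stage Meta-KD recipe described in the abstract.
# The model interfaces (callables returning logits), the (inputs, labels,
# domain_id) loader format, and all hyper-parameters are assumptions made
# for illustration; they are not taken from the paper.
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Soft-label distillation loss blended with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


def train_meta_teacher(teacher, multi_domain_loader, epochs=3, lr=2e-5):
    """Stage 1: train one meta-teacher on data pooled from all domains,
    so that it captures transferable (domain-general) knowledge."""
    opt = torch.optim.AdamW(teacher.parameters(), lr=lr)
    teacher.train()
    for _ in range(epochs):
        for inputs, labels, _domain_id in multi_domain_loader:
            loss = F.cross_entropy(teacher(inputs), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return teacher


def distill_student(student, teacher, domain_loader, epochs=3, lr=5e-5):
    """Stage 2: distill the frozen meta-teacher into a small student
    trained on a single target domain."""
    opt = torch.optim.AdamW(student.parameters(), lr=lr)
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for inputs, labels, _domain_id in domain_loader:
            with torch.no_grad():
                teacher_logits = teacher(inputs)
            loss = kd_loss(student(inputs), teacher_logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```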

Related research

05/20/2019 · Zero-Shot Knowledge Distillation in Deep Networks
10/16/2021 · HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression
01/20/2021 · Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation
08/23/2019 · Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation
10/18/2022 · Few-Shot Learning of Compact Models via Task-Specific Meta Distillation
06/29/2023 · Understanding the Overfitting of the Episodic Meta-training
07/07/2023 · Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data
