Lifelong Language Knowledge Distillation

10/05/2020
by Yung-Sung Chuang, et al.

It is challenging to perform lifelong language learning (LLL) on a stream of different tasks without any performance degradation compared to the multi-task counterparts. To address this issue, we present Lifelong Language Knowledge Distillation (L2KD), a simple but efficient method that can be easily applied to existing LLL architectures in order to mitigate the degradation. Specifically, when the LLL model is trained on a new task, we assign a teacher model to first learn the new task, and then pass the knowledge to the LLL model via knowledge distillation. Therefore, the LLL model can better adapt to the new task while keeping the previously learned knowledge. Experiments show that the proposed L2KD consistently improves previous state-of-the-art models, and the degradation compared to multi-task models in LLL tasks is well mitigated for both sequence generation and text classification tasks.
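The following is a minimal sketch of the distillation step described above, assuming a PyTorch-style setup: a task-specific teacher is first trained on the new task, and the lifelong (student) model then learns from its softened outputs. The word-level soft-label formulation, the temperature, and the mixing weight `alpha` are illustrative assumptions, not the exact loss from the paper.

```python
# Illustrative sketch (not the paper's exact implementation): combine
# hard-label cross-entropy with soft-label KL divergence to a teacher
# that has already been fine-tuned on the new task.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=2.0, alpha=0.5):
    # Soft targets from the task-specific teacher, smoothed by the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth tokens or labels.
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce
```

Usage: for each incoming task, fine-tune a fresh teacher on that task only, then train the lifelong model on the same task with this combined loss, so new knowledge is transferred without overwriting what was learned before.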
