Reminding the Incremental Language Model via Data-Free Self-Distillation

10/17/2021
by Han Wang, et al.

Incremental language learning with pseudo-data can alleviate catastrophic forgetting in neural networks. However, to obtain better performance, previous methods demand large amounts of pseudo-data from earlier tasks, and performance drops sharply when fewer pseudo-data are employed. In addition, the distribution of the pseudo-data gradually deviates from that of the real data as tasks are learned sequentially; the more tasks learned, the larger the deviation, which results in more severe catastrophic forgetting. To address these issues, we propose reminding the incremental language model via data-free self-distillation (DFSD), which combines self-distillation based on the Earth Mover's Distance with hidden data augmentation. By estimating the knowledge distribution across all layers of GPT-2 and transferring it from the teacher model to the student model, self-distillation based on the Earth Mover's Distance significantly reduces the demand for pseudo-data. Hidden data augmentation greatly alleviates the catastrophic forgetting caused by distribution deviation by modeling the generation of pseudo-data as a hidden data augmentation process, in which each sample is a mixture of all previously trained task data. Experimental results demonstrate that DFSD exceeds the previous state-of-the-art methods even when the amount of pseudo-data is reduced by as much as 90%.
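To make the distillation idea concrete, below is a minimal, illustrative PyTorch sketch of what an Earth Mover's Distance style layer-to-layer distillation loss between a teacher and a student GPT-2 could look like. This is an assumption-laden reconstruction, not the paper's implementation: the function name emd_distillation_loss, the MSE layer cost, and the uniform layer weights are illustrative choices. With uniform mass per layer and equal layer counts, an optimal transport plan can be taken to be a permutation, which is why an assignment solver is used here.

```python
# Illustrative sketch only: an EMD-style layer-matching distillation loss.
# The MSE layer cost, uniform layer mass, and assignment-based transport plan
# are assumptions for illustration, not taken from the paper's released code.
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


def emd_distillation_loss(teacher_hidden, student_hidden):
    """Distill every teacher layer into a matched student layer.

    teacher_hidden / student_hidden: lists of per-layer hidden states from the
    old (teacher) and new (student) GPT-2, each tensor of shape
    (batch, seq_len, hidden_dim). With uniform mass on every layer and equal
    layer counts, an optimal transport plan is a permutation, so solving an
    assignment problem recovers the Earth Mover's Distance for this case.
    """
    # Pairwise transport costs between all teacher/student layer pairs.
    cost = torch.stack([
        torch.stack([F.mse_loss(s, t.detach()) for s in student_hidden])
        for t in teacher_hidden
    ])  # shape: (num_teacher_layers, num_student_layers)

    # Cheapest one-to-one matching of teacher layers to student layers.
    row_idx, col_idx = linear_sum_assignment(cost.detach().cpu().numpy())

    # Average the matched costs; gradients flow back into the student only.
    return cost[torch.as_tensor(row_idx), torch.as_tensor(col_idx)].mean()
```

In training, a term like this would be added to the usual language-modeling loss on pseudo-data; the hidden data augmentation component, which treats each pseudo-sample as a mixture of all previously learned task data, is a separate mechanism and is not shown in this sketch.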

Related research

12/06/2022  Life-long Learning for Multilingual Neural Machine Translation with Knowledge Distillation
A common scenario of Multilingual Neural Machine Translation (MNMT) is t...

05/22/2022  RVAE-LAMOL: Residual Variational Autoencoder to Enhance Lifelong Language Learning
Lifelong Language Learning (LLL) aims to train a neural network to learn...

08/17/2022  Ask Question First for Enhancing Lifelong Language Learning
Lifelong language learning aims to stream learning NLP tasks while retai...

08/17/2023  Task Relation Distillation and Prototypical Pseudo Label for Incremental Named Entity Recognition
Incremental Named Entity Recognition (INER) involves the sequential lear...

10/25/2022  On Robust Incremental Learning over Many Multilingual Steps
Recent work in incremental learning has introduced diverse approaches to...

04/20/2023  eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation
Class-Incremental Learning (CIL) aims to solve the neural networks' cata...

05/06/2022  Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting
Crowd Counting has important applications in public safety and pandemic ...
