Ask Question First for Enhancing Lifelong Language Learning

08/17/2022
by Han Wang et al.

Lifelong language learning aims to learn NLP tasks from a stream while retaining knowledge of previous tasks. Previous works based on language models under the data-free constraint have explored formatting all data as "begin token (B) + context (C) + question (Q) + answer (A)" across different tasks. However, they still suffer from catastrophic forgetting, which is exacerbated when the pseudo data of previous tasks are insufficient, for two reasons: (1) the model has difficulty generating pseudo data that match their corresponding tasks, and (2) A is prone to error when A and C are separated by Q, because the information in C is diminished before A is generated. Therefore, we propose Ask Question First and Replay Question (AQF-RQ), which includes a novel data format, "BQCA", and a new training task that replays pseudo questions of previous tasks. Experimental results demonstrate that AQF-RQ makes it easier for the model to generate pseudo data that match their corresponding tasks, and that it is more robust to both sufficient and insufficient pseudo data, whether the task boundary is clear or unclear. AQF-RQ achieves performance only 0.36% lower than multi-task learning.
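To make the format change concrete, here is a minimal Python sketch (not the authors' code) contrasting the proposed "BQCA" serialization with the conventional "BCQA" layout used by prior LAMOL-style methods. The begin token `[GEN]` and the helper names are assumptions for illustration; the paper's actual special tokens may differ.

```python
# Sketch of the two data layouts described in the abstract.
# In BQCA the question comes first, so the context sits directly
# before the answer when the language model generates A.

def format_bqca(question: str, context: str, answer: str,
                begin_token: str = "[GEN]") -> str:
    """Proposed layout: B + Q + C + A (Ask Question First)."""
    return " ".join([begin_token, question, context, answer])

def format_bcqa(question: str, context: str, answer: str,
                begin_token: str = "[GEN]") -> str:
    """Conventional layout: B + C + Q + A, where Q separates C from A."""
    return " ".join([begin_token, context, question, answer])

example = {
    "question": "Who wrote the novel?",
    "context": "The novel was written by Jane Austen in 1813.",
    "answer": "Jane Austen",
}

print(format_bqca(**example))
# [GEN] Who wrote the novel? The novel was written by Jane Austen in 1813. Jane Austen

print(format_bcqa(**example))
# [GEN] The novel was written by Jane Austen in 1813. Who wrote the novel? Jane Austen
```

Keeping C immediately before A means the answer is generated right after the context it depends on, which is the intuition the abstract gives for why BCQA's separation of A and C makes A prone to error.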


