Prompt Conditioned VAE: Enhancing Generative Replay for Lifelong Learning in Task-Oriented Dialogue

10/14/2022
by Yingxiu Zhao, et al.

Lifelong learning (LL) is vital for advanced task-oriented dialogue (ToD) systems. To address the catastrophic forgetting issue of LL, generative replay methods are widely employed to consolidate past knowledge with generated pseudo samples. However, most existing generative replay methods use only a single task-specific token to control their models, which usually carries too little information to constrain the generative model effectively. In this paper, we propose a novel method, prompt conditioned VAE for lifelong learning (PCLL), which enhances generative replay by incorporating task statistics. PCLL captures task-specific distributions with a conditional variational autoencoder, conditioned on natural language prompts that guide pseudo-sample generation. Moreover, it leverages a distillation process to further consolidate past knowledge by alleviating noise in the pseudo samples. Experiments on the natural language understanding tasks of ToD systems demonstrate that PCLL significantly outperforms competitive baselines in building LL models.
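To make the core mechanism concrete, below is a minimal PyTorch sketch of the latent path of a prompt-conditioned CVAE: a posterior q(z | x, prompt) and a prior p(z | prompt), tied by a closed-form Gaussian KL term. All module names, shapes, and the assumption that the utterance and prompt arrive as fixed-size encodings are illustrative choices, not the authors' implementation (PCLL builds on a pretrained language model).

```python
import torch
import torch.nn as nn

class PromptConditionedCVAE(nn.Module):
    """Illustrative sketch (not the paper's code): latent path of a CVAE
    whose prior and posterior are both conditioned on a prompt encoding."""

    def __init__(self, hidden=256, latent=64):
        super().__init__()
        # Posterior q(z | x, prompt) sees utterance + prompt encodings;
        # prior p(z | prompt) sees the prompt encoding alone.
        # Each head outputs a mean and log-variance over the latent.
        self.posterior = nn.Linear(hidden * 2, latent * 2)
        self.prior = nn.Linear(hidden, latent * 2)

    @staticmethod
    def reparameterize(mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x_enc, prompt_enc):
        # x_enc, prompt_enc: (batch, hidden) encodings of the utterance
        # and of the task's natural-language prompt (encoder not shown).
        mu_q, logvar_q = self.posterior(
            torch.cat([x_enc, prompt_enc], dim=-1)).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior(prompt_enc).chunk(2, dim=-1)
        z = self.reparameterize(mu_q, logvar_q)
        # KL(q || p) between two diagonal Gaussians, averaged over batch.
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1).sum(-1).mean()
        return z, kl
```

In a generative-replay loop of this shape, pseudo samples for an old task would be produced by sampling z from the prompt-conditioned prior and decoding, while the KL term keeps the posterior close to that prior during training; the distillation step the abstract mentions would then operate on these pseudo samples.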


Related research

09/13/2023
Continual Learning with Dirichlet Generative-based Rehearsal
Recent advancements in data-driven task-oriented dialogue systems (ToDs)...

05/22/2022
RVAE-LAMOL: Residual Variational Autoencoder to Enhance Lifelong Language Learning
Lifelong Language Learning (LLL) aims to train a neural network to learn...

03/11/2019
Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay
Despite huge success, deep networks are unable to learn effectively in s...

01/17/2022
Lifelong Generative Learning via Knowledge Reconstruction
Generative models often incur the catastrophic forgetting problem when t...

01/09/2022
Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay
Data-Free Knowledge Distillation (KD) allows knowledge transfer from a t...

08/17/2022
Ask Question First for Enhancing Lifelong Language Learning
Lifelong language learning aims to stream learning NLP tasks while retai...

07/28/2020
A Novel Token-Based Replay Technique to Speed Up Conformance Checking and Process Enhancement
Token-based replay used to be the standard way to conduct conformance ch...
