Continual Learning for Task-oriented Dialogue System with Iterative Network Pruning, Expanding and Masking

07/17/2021
by Binzong Geng, et al.

The ability to learn consecutive tasks without forgetting how to perform previously learned ones is essential for building an online dialogue system. This paper proposes an effective continual learning method for task-oriented dialogue systems based on iterative network pruning, expanding and masking (TPEM), which preserves performance on previously encountered tasks while accelerating learning on subsequent tasks. Specifically, TPEM (i) leverages network pruning to preserve the knowledge of old tasks, (ii) adopts network expanding to create free weights for new tasks, and (iii) introduces task-specific network masking to alleviate the negative impact of the fixed weights of old tasks on new tasks. We conduct extensive experiments on seven tasks from three benchmark datasets and show empirically that TPEM yields significantly better results than strong competitors. For reproducibility, we release the code and data at: https://github.com/siat-nlp/TPEM
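The three-step loop can be sketched at a toy scale. The following is an illustrative NumPy sketch, not the paper's implementation: the function names, the 4x4 weight matrix standing in for a network layer, and all ratios are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune(weights, keep_ratio):
    """(i) Keep the largest-magnitude weights for the old task; zero the rest.
    Returns pruned weights and a boolean mask of the kept (frozen) positions."""
    k = int(weights.size * keep_ratio)
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    frozen = np.abs(weights) >= threshold
    return weights * frozen, frozen

def expand(weights, frozen, extra_units):
    """(ii) Append freshly initialised units, creating free capacity for the
    new task; the expanded positions are trainable, i.e. not frozen."""
    new_cols = rng.normal(scale=0.01, size=(weights.shape[0], extra_units))
    weights = np.concatenate([weights, new_cols], axis=1)
    frozen = np.concatenate([frozen, np.zeros_like(new_cols, dtype=bool)], axis=1)
    return weights, frozen

def task_mask(frozen, keep_prob):
    """(iii) A task-specific binary mask over the frozen old-task weights, so
    a new task can switch off frozen weights that hurt it; free (trainable)
    positions always stay visible to the new task."""
    m = rng.random(frozen.shape) < keep_prob
    return np.where(frozen, m, True)

# One iteration of the prune -> expand -> mask loop for a 4x4 layer.
W = rng.normal(size=(4, 4))
W, frozen = prune(W, keep_ratio=0.5)           # keep old-task knowledge
W, frozen = expand(W, frozen, extra_units=2)   # free weights for the new task
mask = task_mask(frozen, keep_prob=0.8)        # per-task mask over frozen weights
effective_W = W * mask                         # weights the new task actually sees
```

In a real network these steps would apply per layer and the frozen positions would be excluded from gradient updates; the sketch only shows how the three index sets (frozen, free, masked) interact.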


Related research

12/31/2020 · Continual Learning in Task-Oriented Dialogue Systems
Continual learning in task-oriented dialogue systems can allow us to add...

06/21/2021 · Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification
Lifelong learning capabilities are crucial for sentiment classifiers to ...

04/28/2021 · Preserving Earlier Knowledge in Continual Learning with the Help of All Previous Feature Extractors
Continual learning of new knowledge over time is one desirable capabilit...

03/11/2019 · Continual Learning via Neural Pruning
We introduce Continual Learning via Neural Pruning (CLNP), a new method ...

07/23/2019 · Adaptive Compression-based Lifelong Learning
The problem of a deep learning model losing performance on a previously ...

09/29/2020 · One Person, One Model, One World: Learning Continual User Representation without Forgetting
Learning generic user representations which can then be applied to other...

05/16/2018 · Progress & Compress: A scalable framework for continual learning
We introduce a conceptually simple and scalable framework for continual ...
