Continual Learning for Text Classification with Information Disentanglement Based Regularization

04/12/2021
by Yufan Huang, et al.

Continual learning has become increasingly important, as it enables NLP models to constantly acquire knowledge over time. Previous continual learning methods mainly focus on preserving knowledge from earlier tasks, with little emphasis on how well models generalize to new tasks. In this work, we propose an information disentanglement based regularization method for continual learning on text classification. Our method first disentangles the text hidden space into representations that are generic to all tasks and representations that are specific to each individual task, and then regularizes these representations differently to better constrain the knowledge required to generalize. We also introduce two simple auxiliary tasks, next sentence prediction and task-id prediction, to learn better generic and specific representation spaces. Experiments on large-scale benchmarks demonstrate that our method outperforms state-of-the-art baselines on continual text classification across task sequences of various orders and lengths. We have publicly released our code at https://github.com/GT-SALT/IDBR.
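The regularization idea in the abstract can be sketched as follows. Note that this is a minimal illustration, not the paper's implementation: the slicing-based split, the function names, and the coefficients `lam_g`/`lam_s` are assumptions for exposition (IDBR learns the disentanglement with neural modules and the auxiliary next-sentence and task-id prediction objectives, rather than slicing a vector).

```python
def disentangle(hidden, split):
    """Split a hidden representation into a generic part and a
    task-specific part. Slicing at a fixed index is a simplification;
    the paper learns this disentanglement rather than hard-partitioning."""
    return hidden[:split], hidden[split:]

def id_regularization(generic, specific, old_generic, old_specific,
                      lam_g=1.0, lam_s=0.1):
    """Penalize drift from representations stored after the previous task.
    The coefficients here are hypothetical; the key idea from the abstract
    is that generic and task-specific representations are constrained
    *differently*, e.g. holding generic knowledge more stable while
    letting task-specific knowledge adapt."""
    drift_g = sum((a - b) ** 2 for a, b in zip(generic, old_generic))
    drift_s = sum((a - b) ** 2 for a, b in zip(specific, old_specific))
    return lam_g * drift_g + lam_s * drift_s
```

In training, a loss of this form would be added to the classification objective on each new task, so that representations useful across tasks drift less than those tied to a single task.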


