Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification

06/21/2021
by Binzong Geng, et al.

Lifelong learning capabilities are crucial for sentiment classifiers to process continuous streams of opinionated information on the Web. However, performing lifelong learning is non-trivial for deep neural networks, as continual training on incrementally available information inevitably results in catastrophic forgetting or interference. In this paper, we propose a novel iterative network pruning with uncertainty regularization method for lifelong sentiment classification (IPRLS), which leverages the principles of network pruning and weight regularization. By performing network pruning with uncertainty regularization in an iterative manner, IPRLS can adapt a single BERT model to work with continuously arriving data from multiple domains while avoiding catastrophic forgetting and interference. Specifically, we leverage an iterative pruning method to remove redundant parameters in large deep networks so that the freed-up space can be employed to learn new tasks, tackling the catastrophic forgetting problem. Instead of keeping the old-task weights fixed when learning new tasks, we also use an uncertainty regularization based on the Bayesian online learning framework to constrain the updates of old-task weights in BERT, which enables positive backward transfer, i.e., learning new tasks improves performance on past tasks while protecting old knowledge from being lost. In addition, we propose a task-specific low-dimensional residual function in parallel to each layer of BERT, which makes IPRLS less prone to losing the knowledge stored in the base BERT network when learning a new task. Extensive experiments on 16 popular review corpora demonstrate that the proposed IPRLS method significantly outperforms strong baselines for lifelong sentiment classification. For reproducibility, we release the code and data at: https://github.com/siat-nlp/IPRLS.
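The abstract names three mechanisms: iterative pruning to free capacity for new tasks, an uncertainty-weighted penalty that constrains old-task weights, and a task-specific low-dimensional residual function alongside each BERT layer. The PyTorch sketch below illustrates how such pieces could look; the names (magnitude_prune, uncertainty_penalty, LowRankAdapter), the magnitude-based pruning criterion, and the Gaussian-posterior penalty form are our illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

def magnitude_prune(weight: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    """Binary mask zeroing out the smallest-magnitude fraction of `weight`.

    Applied iteratively after each task, the masked (freed) slots can be
    reused to learn the next task. Magnitude is one plausible criterion;
    the paper may use a different one.
    """
    k = int(weight.numel() * prune_ratio)
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

def uncertainty_penalty(params, old_means, old_vars):
    """Quadratic penalty pulling shared weights toward their previous-task
    values, scaled by inverse variance: weights the Bayesian posterior is
    certain about (small variance) are held tightly, while uncertain ones
    stay plastic enough to adapt to the new task.
    """
    loss = torch.zeros(())
    for p, mu, var in zip(params, old_means, old_vars):
        loss = loss + ((p - mu).pow(2) / (2.0 * var)).sum()
    return loss

class LowRankAdapter(nn.Module):
    """Task-specific low-dimensional residual function attached in parallel
    to a BERT layer (our reading of the abstract's 'residual function')."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Identity path preserves the base BERT features; the low-rank
        # branch learns a small task-specific correction on top of them.
        return hidden_states + self.up(torch.relu(self.down(hidden_states)))
```

In such a setup, the training loss for a new task would combine the task's cross-entropy with uncertainty_penalty over the shared BERT weights, with gradients for already-pruned slots masked out so earlier tasks' parameters are protected.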


Related research

07/17/2021
Continual Learning for Task-oriented Dialogue System with Iterative Network Pruning, Expanding and Masking
This ability to learn consecutive tasks without forgetting how to perfor...

11/15/2017
PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning
This paper presents a method for adding multiple tasks to a single deep ...

07/23/2019
Adaptive Compression-based Lifelong Learning
The problem of a deep learning model losing performance on a previously ...

03/27/2018
Bayesian Gradient Descent: Online Variational Bayes Learning with Increased Robustness to Catastrophic Forgetting and Weight Pruning
We suggest a novel approach for the estimation of the posterior distribu...

12/13/2020
Learn-Prune-Share for Lifelong Learning
In lifelong learning, we wish to maintain and update a model (e.g., a ne...

11/29/2022
Lifelong Person Re-Identification via Knowledge Refreshing and Consolidation
Lifelong person re-identification (LReID) is in significant demand for r...

07/22/2022
Revisiting Parameter Reuse to Overcome Catastrophic Forgetting in Neural Networks
Neural networks tend to forget previously learned knowledge when continu...
