
Learn-Prune-Share for Lifelong Learning

by Zifeng Wang et al.
Northeastern University

In lifelong learning, we wish to maintain and update a model (e.g., a neural network classifier) in the presence of new classification tasks that arrive sequentially. In this paper, we propose a learn-prune-share (LPS) algorithm that addresses the challenges of catastrophic forgetting, parsimony, and knowledge reuse simultaneously. LPS splits the network into task-specific partitions via an ADMM-based pruning strategy, which eliminates forgetting while maintaining parsimony. Moreover, LPS integrates a novel selective knowledge sharing scheme into this ADMM optimization framework, enabling adaptive knowledge sharing in an end-to-end fashion. Comprehensive experimental results on two lifelong learning benchmark datasets and a challenging real-world radio frequency fingerprinting dataset demonstrate the effectiveness of our approach: LPS consistently outperforms multiple state-of-the-art competitors.
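The core idea of partitioning a network into disjoint, task-specific weight subsets so that learning a new task cannot overwrite an old one can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's actual ADMM-based method: the class name, the magnitude-based pruning rule, and the `share_from` argument are all hypothetical simplifications standing in for the learned pruning and selective sharing described in the abstract.

```python
import numpy as np

class LearnPruneShare:
    """Toy sketch of mask-based task partitioning (hypothetical; the paper
    uses ADMM-based pruning, not magnitude pruning as done here). Each task
    claims a disjoint subset of a shared weight vector via a binary mask,
    so weights belonging to earlier tasks are never overwritten."""

    def __init__(self, num_weights, keep_ratio=0.25):
        self.weights = np.zeros(num_weights)
        self.free = np.ones(num_weights, dtype=bool)  # weights not yet claimed
        self.masks = {}                               # task_id -> binary mask
        self.keep_ratio = keep_ratio

    def learn_task(self, task_id, task_weights):
        """'Learn' then 'prune': keep the largest-magnitude weights among the
        still-free positions and assign them permanently to this task."""
        candidate = np.where(self.free, task_weights, 0.0)
        k = max(1, int(self.keep_ratio * self.free.sum()))
        top_idx = np.argsort(-np.abs(candidate))[:k]
        mask = np.zeros_like(self.free)
        mask[top_idx] = True
        mask &= self.free                 # never steal already-claimed weights
        self.weights[mask] = task_weights[mask]
        self.free &= ~mask
        self.masks[task_id] = mask

    def infer_weights(self, task_id, share_from=()):
        """'Share': combine this task's partition with selected past tasks'
        partitions (a stand-in for the paper's learned sharing scheme)."""
        mask = self.masks[task_id].copy()
        for t in share_from:
            mask |= self.masks[t]
        return np.where(mask, self.weights, 0.0)
```

Because each mask is carved out of the remaining free weights, the partitions are disjoint by construction, and a task's effective weights are unchanged by any subsequent training, which is the sense in which this family of methods achieves "no forgetting".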


Overcoming catastrophic forgetting in neural networks

The ability to learn tasks in a sequential fashion is crucial to the dev...

Learning to Continually Learn

Continual lifelong learning requires an agent or model to learn many seq...

Deep Bayesian Unsupervised Lifelong Learning

Lifelong Learning (LL) refers to the ability to continually learn and so...

Overcoming Catastrophic Forgetting in Convolutional Neural Networks by Selective Network Augmentation

Lifelong learning aims to develop machine learning systems that can lear...

Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification

Lifelong learning capabilities are crucial for sentiment classifiers to ...

An Application of Network Lasso Optimization For Ride Sharing Prediction

Ride sharing has important implications in terms of environmental, socia...

Revisiting Parameter Reuse to Overcome Catastrophic Forgetting in Neural Networks

Neural networks tend to forget previously learned knowledge when continu...