Regularize, Expand and Compress: Multi-task based Lifelong Learning via NonExpansive AutoML

03/20/2019
by Jie Zhang, et al.

Lifelong learning, the problem of continual learning in which tasks arrive in sequence, has lately been attracting increasing attention in the computer vision community. The goal of lifelong learning is to develop a system that can learn new tasks while maintaining its performance on previously learned tasks. However, two obstacles stand in the way of lifelong learning with deep neural networks: catastrophic forgetting and limited model capacity. To address these issues, and inspired by recent breakthroughs in automatically learning good neural network architectures, we develop a multi-task based lifelong learning via nonexpansive AutoML framework, termed Regularize, Expand and Compress (REC). REC proceeds in three stages: 1) it continually learns sequential tasks, without access to the data of previously learned tasks, via a newly proposed multi-task weight consolidation (MWC) algorithm; 2) it expands the network through network-transformation based AutoML, potentially improving model capacity and performance; 3) it compresses the expanded model after learning each new task to maintain model efficiency and performance. The proposed MWC and REC algorithms achieve superior performance over other lifelong learning algorithms on four different datasets.
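The regularization stage penalizes changes to weights that mattered for earlier tasks, so new-task training does not overwrite them. A minimal sketch of such a quadratic weight-consolidation penalty (the general EWC-style form that multi-task weight consolidation builds on; the function name, the importance estimates, and the `lam` strength parameter are illustrative assumptions, not the paper's exact MWC objective):

```python
import numpy as np

def consolidation_penalty(theta, theta_old, importance, lam=1.0):
    """Quadratic penalty discouraging drift from previously learned weights.

    theta      -- current (flattened) parameters
    theta_old  -- parameters saved after the previous task
    importance -- per-parameter importance weights (e.g. Fisher estimates)
    lam        -- regularization strength (hypothetical name)
    """
    theta = np.asarray(theta, dtype=float)
    theta_old = np.asarray(theta_old, dtype=float)
    importance = np.asarray(importance, dtype=float)
    # 0.5 * lam * sum_i Omega_i * (theta_i - theta_old_i)^2
    return 0.5 * lam * float(np.sum(importance * (theta - theta_old) ** 2))
```

During training on a new task, the total objective would be the new task's loss plus this penalty, so only parameters with low recorded importance move freely.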



Related research

08/02/2019 · Weight Friction: A Simple Method to Overcome Catastrophic Forgetting and Enable Continual Learning
In recent years, deep neural networks have found success in replicating ...

06/11/2021 · A Novel Approach to Lifelong Learning: The Plastic Support Structure
We propose a novel approach to lifelong learning, introducing a compact ...

04/24/2020 · Dropout as an Implicit Gating Mechanism For Continual Learning
In recent years, neural networks have demonstrated an outstanding abilit...

10/10/2019 · Learning to Remember from a Multi-Task Teacher
Recent studies on catastrophic forgetting during sequential learning typ...

06/07/2016 · Active Long Term Memory Networks
Continual Learning in artificial neural networks suffers from interferen...

09/15/2021 · Life-Long Multi-Task Learning of Adaptive Path Tracking Policy for Autonomous Vehicle
This paper proposes a life-long adaptive path tracking policy learning m...

08/04/2021 · Deep multi-task mining Calabi-Yau four-folds
We continue earlier efforts in computing the dimensions of tangent space...
