ORACLE: Order Robust Adaptive Continual LEarning

02/25/2019
by Jaehong Yoon et al.

The order in which a continual learning model encounters its tasks may have a large impact on the performance of each task, as well as on the task-average performance. This order-sensitivity can cause serious problems in real-world scenarios where fairness plays a critical role (e.g., medical diagnosis). To tackle this problem, we propose a novel order-robust continual learning method which, instead of learning a completely shared set of weights, represents the parameters for each task as the sum of task-shared parameters that capture generic representations and task-adaptive parameters that capture task-specific ones, where the latter are factorized into sparse low-rank matrices to minimize the increase in capacity. With such parameter decomposition, when training on a new task, the task-adaptive parameters for earlier tasks remain mostly unaffected: we update them only to reflect the changes made to the task-shared parameters. This prevents catastrophic forgetting of old tasks and at the same time makes the model less sensitive to the task arrival order. We validate our Order-Robust Adaptive Continual LEarning (ORACLE) method on multiple benchmark datasets against state-of-the-art continual learning methods. The results show that it largely outperforms these strong baselines with a significantly smaller increase in capacity and training time, and obtains smaller performance disparity across tasks under different task orders.
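The additive decomposition described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: all names, shapes, the rank, and the sparsity mechanism here are illustrative assumptions, intended only to show how per-task weights can be composed from a shared matrix plus a sparse low-rank task-specific term.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 8, 4, 2   # illustrative sizes; rank << min(d_in, d_out)

# Task-shared parameters, reused (and updated) across all tasks.
W_shared = rng.normal(size=(d_in, d_out))

def make_task_adaptive():
    """Sparse low-rank task-adaptive factors (hypothetical shapes/sparsity)."""
    U = rng.normal(size=(d_in, rank))
    V = rng.normal(size=(rank, d_out))
    mask = rng.random((d_in, d_out)) < 0.2  # sparsity pattern on the product
    return U, V, mask

def effective_weights(U, V, mask):
    # Per-task weights = shared part + sparse low-rank task-specific part.
    return W_shared + mask * (U @ V)

# Each task keeps only its own small adaptive factors; per-task capacity
# grows by rank * (d_in + d_out) numbers instead of a full d_in * d_out matrix.
tasks = [make_task_adaptive() for _ in range(3)]
W_task0 = effective_weights(*tasks[0])
```

Under this sketch, training a new task would update `W_shared` and that task's own factors, while earlier tasks' factors stay frozen (apart from the small corrections the paper describes), which is what limits forgetting and order-sensitivity.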


