SOLA: Continual Learning with Second-Order Loss Approximation

06/19/2020
by Dong Yin, et al.

Neural networks have achieved remarkable success in many cognitive tasks. However, when they are trained sequentially on multiple tasks without access to old data, their performance on old tasks tends to drop significantly after the model is trained on new tasks. Continual learning aims to tackle this problem, often referred to as catastrophic forgetting, and to ensure sequential learning capability. We study continual learning from the perspective of loss landscapes and propose to construct a second-order Taylor approximation of the loss functions of previous tasks. Our proposed method does not require memorizing raw data or their gradients, and therefore offers better privacy protection. We theoretically analyze our algorithm from an optimization viewpoint and provide a sufficient and worst-case necessary condition for the gradient updates on the approximate loss function to be descent directions for the true loss function. Experiments on multiple continual learning benchmarks suggest that our method is effective in avoiding catastrophic forgetting and, in many scenarios, outperforms several baseline algorithms that do not explicitly store the data samples.
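To make the idea concrete, here is a minimal PyTorch sketch of training with a quadratic (second-order Taylor) approximation of past-task losses. It is an illustrative reconstruction, not the paper's implementation: the diagonal empirical-Fisher proxy for the Hessian, the class and function names, and the training loop are all assumptions made for brevity (the paper itself analyzes a general second-order approximation).

```python
# Sketch: after finishing a task, store theta*, the average gradient, and a
# diagonal Hessian proxy; on later tasks, add the quadratic Taylor penalty
#   g^T (theta - theta*) + 1/2 (theta - theta*)^T H (theta - theta*)
# to the new-task loss. Hypothetical names; diagonal Fisher is an assumption.

import torch
import torch.nn as nn


class QuadraticLossApprox:
    """Second-order Taylor approximation of one finished task's loss."""

    def __init__(self, model, loss_fn, data_loader):
        params = [p for p in model.parameters() if p.requires_grad]
        self.theta_star = [p.detach().clone() for p in params]
        grad_acc = [torch.zeros_like(p) for p in params]
        hess_acc = [torch.zeros_like(p) for p in params]
        n_batches = 0
        for x, y in data_loader:
            loss = loss_fn(model(x), y)
            grads = torch.autograd.grad(loss, params)
            for g_a, h_a, g in zip(grad_acc, hess_acc, grads):
                g_a += g.detach()
                h_a += g.detach() ** 2  # empirical-Fisher diagonal as Hessian proxy
            n_batches += 1
        self.grad = [g / n_batches for g in grad_acc]
        self.hess_diag = [h / n_batches for h in hess_acc]

    def penalty(self, model):
        """Evaluate the quadratic Taylor term at the current parameters."""
        total = 0.0
        params = [p for p in model.parameters() if p.requires_grad]
        for p, p0, g, h in zip(params, self.theta_star, self.grad, self.hess_diag):
            d = p - p0
            total = total + (g * d).sum() + 0.5 * (h * d * d).sum()
        return total


def train_task(model, loss_fn, data_loader, old_tasks, epochs=1, lr=1e-3):
    """Train on the current task plus approximate losses of all old tasks."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss = loss + sum(t.penalty(model) for t in old_tasks)
            loss.backward()
            opt.step()
```

Note that only theta*, a gradient vector, and the Hessian proxy are retained per task, so no raw samples or per-example gradients need to be stored, matching the privacy motivation in the abstract.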


Related research

03/24/2022: Probing Representation Forgetting in Supervised and Unsupervised Continual Learning
08/08/2019: Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
10/03/2022: Efficient Meta-Learning for Continual Learning with Taylor Expansion Approximation
11/28/2022: Progressive Learning without Forgetting
10/21/2021: Center Loss Regularization for Continual Learning
07/25/2022: Balancing Stability and Plasticity through Advanced Null Space in Continual Learning
11/25/2020: Continual learning with direction-constrained optimization
