Rethinking Quadratic Regularizers: Explicit Movement Regularization for Continual Learning

02/04/2021
by Ekdeep Singh Lubana, et al.

Quadratic regularizers are often used for mitigating catastrophic forgetting in deep neural networks (DNNs), but are unable to compete with recent continual learning methods. To understand this behavior, we analyze parameter updates under quadratic regularization and demonstrate that such regularizers prevent forgetting of past tasks by implicitly performing a weighted average between the current and previous values of model parameters. Our analysis shows the inferior performance of quadratic regularizers arises from (a) dependence of the weighted averaging on training hyperparameters, which often results in unstable training, and (b) assignment of lower importance to deeper layers, which are generally the primary cause of forgetting in DNNs. To address these limitations, we propose Explicit Movement Regularization (EMR), a continual learning algorithm that modifies quadratic regularization to remove the dependence of weighted averaging on training hyperparameters and uses a relative measure of importance to avoid the problems caused by assigning lower importance to deeper layers. Compared to quadratic regularization, EMR achieves 6.2% higher average accuracy and 4.5% lower average forgetting.
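As a rough illustration of the weighted-averaging claim, the following minimal NumPy sketch (not the authors' implementation; the names quadratic_reg_step, omega, lr, and lam are illustrative assumptions) applies one SGD step to a loss with an EWC-style quadratic penalty. Expanding the update shows the averaging weight equals lr*lam*omega, i.e., it depends directly on the training hyperparameters, which is the dependence the abstract identifies.

    import numpy as np

    # One SGD step on L(theta) = L_task(theta) + (lam/2) * omega * (theta - theta_prev)**2.
    def quadratic_reg_step(theta, theta_prev, grad_task, omega, lr=0.1, lam=1.0):
        grad_penalty = lam * omega * (theta - theta_prev)
        theta_new = theta - lr * (grad_task + grad_penalty)
        # Equivalent form:
        #   theta_new = (1 - lr*lam*omega) * theta + (lr*lam*omega) * theta_prev - lr * grad_task
        # i.e., an implicit weighted average of current and previous-task parameters,
        # with the averaging weight lr*lam*omega set by the training hyperparameters.
        return theta_new

    # Toy usage with purely illustrative per-parameter importance values:
    theta      = np.array([0.5, -1.2, 0.3])
    theta_prev = np.array([0.4, -1.0, 0.0])
    grad_task  = np.array([0.1,  0.2, -0.3])
    omega      = np.array([2.0,  0.5,  0.1])  # e.g., Fisher-style importance estimates
    print(quadratic_reg_step(theta, theta_prev, grad_task, omega))

The paper's EMR method makes this averaging explicit and decoupled from the hyperparameters; the exact update rule is given in the full text and is not reproduced here.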

Related research

Meta Continual Learning (06/11/2018)
Using neural networks in practical settings would benefit from the abili...

Condensed Composite Memory Continual Learning (02/19/2021)
Deep Neural Networks (DNNs) suffer from a rapid decrease in performance ...

CPR: Classifier-Projection Regularization for Continual Learning (06/12/2020)
We propose a general, yet simple patch that can be applied to existing r...

Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation (08/08/2019)
Catastrophic forgetting is a critical challenge in training deep neural ...

Continual Learning with Extended Kronecker-factored Approximate Curvature (04/16/2020)
We propose a quadratic penalty method for continual learning of neural n...

Natural continual learning: success is a journey, not (just) a destination (06/15/2021)
Biological agents are known to learn many different tasks over the cours...

Lifelong Learning with Sketched Structural Regularization (04/17/2021)
Preventing catastrophic forgetting while continually learning new tasks ...
