
Meta Continual Learning

06/11/2018
by Risto Vuorio, et al.
sktbrain.com

Using neural networks in practical settings would benefit from their ability to learn new tasks throughout their lifetimes without forgetting previous tasks. This ability is limited in current deep neural networks by a problem called catastrophic forgetting, where training on new tasks tends to severely degrade performance on previous ones. One way to lessen the impact of forgetting is to constrain parameters that are important to previous tasks to stay close to their previously optimal values. Recently, multiple competitive approaches for computing the importance of parameters with respect to previous tasks have been presented. In this paper, we propose a learning-to-optimize algorithm for mitigating catastrophic forgetting. Instead of formulating a new constraint function ourselves, we propose to train another neural network to predict parameter update steps that respect the importance of parameters to the previous tasks. In the proposed meta-training scheme, the update predictor is trained to minimize loss on a combination of current and past tasks. We show experimentally that the proposed approach works in the continual learning setting.
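To make the meta-training scheme concrete, below is a minimal PyTorch sketch of the general idea, not the authors' implementation: a coordinate-wise update-predictor network proposes a step for every model parameter, and the predictor is trained so that the updated model has low loss on both current-task and past-task data. All names (UpdatePredictor, meta_step) and the choice of inputs to the predictor (current-task gradient and parameter value) are illustrative assumptions; the paper's predictor may be conditioned on other quantities, such as importance estimates for the previous tasks.

```python
# Minimal sketch of meta-training an update predictor for continual learning.
# Assumptions: PyTorch >= 2.0 (torch.func.functional_call); all names are illustrative.
import torch
import torch.nn as nn
from torch.func import functional_call


class UpdatePredictor(nn.Module):
    """Coordinate-wise MLP mapping (current-task gradient, parameter value)
    to a per-parameter update step, shared across all parameter tensors."""
    def __init__(self, hidden=20):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, grad, param):
        x = torch.stack([grad.reshape(-1), param.reshape(-1)], dim=-1)
        return self.net(x).reshape(param.shape)


def meta_step(model, predictor, loss_fn, current_batch, past_batch, meta_opt):
    """One meta-training step: predict an update, then train the predictor so the
    updated model does well on a combination of current and past task data."""
    params = dict(model.named_parameters())

    # Gradient of the current-task loss w.r.t. the model parameters.
    x_cur, y_cur = current_batch
    cur_loss = loss_fn(functional_call(model, params, (x_cur,)), y_cur)
    grads = torch.autograd.grad(cur_loss, list(params.values()), create_graph=True)

    # The predictor proposes an update step for every parameter tensor.
    updated = {name: p + predictor(g, p)
               for (name, p), g in zip(params.items(), grads)}

    # Meta-loss: the updated model should fit the current AND the past task.
    x_past, y_past = past_batch
    meta_loss = (loss_fn(functional_call(model, updated, (x_cur,)), y_cur)
                 + loss_fn(functional_call(model, updated, (x_past,)), y_past))

    meta_opt.zero_grad()
    meta_loss.backward()  # backprop through the predicted update into the predictor
    meta_opt.step()

    # Apply the predicted update to the model itself.
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.copy_(updated[name].detach())
    return meta_loss.item()
```

In this sketch the past-task batch could come from a small replay buffer; because the predictor is coordinate-wise and shared across parameters, as in the learning-to-optimize literature, it can in principle learn to take small steps on parameters whose gradients matter for previous tasks and larger steps elsewhere.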


Related research:

02/22/2021

Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping

Catastrophic forgetting in neural networks is a significant problem for ...
02/04/2021

Rethinking Quadratic Regularizers: Explicit Movement Regularization for Continual Learning

Quadratic regularizers are often used for mitigating catastrophic forget...
06/16/2019

Conditional Computation for Continual Learning

Catastrophic forgetting of connectionist neural networks is caused by th...
06/01/2023

Out-of-distribution forgetting: vulnerability of continual learning to intra-class distribution shift

Continual learning (CL) is an important technique to allow artificial ne...
03/12/2021

Training Networks in Null Space of Feature Covariance for Continual Learning

In the setting of continual learning, a network is trained on a sequence...
05/12/2022

KASAM: Spline Additive Models for Function Approximation

Neural networks have been criticised for their inability to perform cont...
11/13/2020

Continual Learning with Deep Artificial Neurons

Neurons in real brains are enormously complex computational units. Among...