Understanding Regularisation Methods for Continual Learning

06/11/2020
by Frederik Benzing, et al.

The problem of catastrophic forgetting has received a lot of attention in recent years. An important class of proposed solutions are so-called regularisation approaches, which protect weights from large changes in proportion to their importance. Various ways to measure this importance have been put forward, stemming from different theoretical or intuitive motivations. We present mathematical and empirical evidence that two of these methods – Synaptic Intelligence and Memory Aware Synapses – approximate a rescaled version of the Fisher Information, a theoretically justified importance measure also used in the literature. As part of our analysis, we show that the importance approximation of Synaptic Intelligence is biased and that, in fact, this bias best explains its performance. Altogether, our results offer a theoretical account for the effectiveness of different regularisation approaches and uncover similarities between the methods proposed so far.
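To make the comparison concrete, here is a minimal NumPy sketch of the two importance measures discussed above, for a simple logistic-regression model: the diagonal Fisher Information (mean squared gradient of the log-likelihood) and the Synaptic Intelligence path-integral importance, together with the quadratic penalty that regularisation approaches add to the new task's loss. All function names and the `xi` damping constant are illustrative choices, not the paper's implementation.

```python
import numpy as np

def diagonal_fisher(w, X, y):
    """Diagonal Fisher Information for logistic regression: the
    squared gradient of the log-likelihood, averaged over the data.
    This is the importance measure used by EWC-style methods."""
    F = np.zeros_like(w)
    for x_i, y_i in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-x_i @ w))   # P(y = 1 | x, w)
        g = (y_i - p) * x_i                  # d log p(y|x,w) / dw
        F += g * g                           # per-sample squared gradient
    return F / len(X)

def si_importance(grads, deltas, total_change, xi=1e-3):
    """Synaptic Intelligence importance: accumulate -gradient * update
    along the training trajectory, then normalise by the squared total
    parameter change (xi avoids division by zero)."""
    omega = sum(-g * d for g, d in zip(grads, deltas))
    return np.maximum(omega, 0.0) / (total_change ** 2 + xi)

def quadratic_penalty(w, w_star, importance, lam=1.0):
    """Regularisation term added to the new task's loss: changes to
    each weight are penalised in proportion to its importance."""
    return 0.5 * lam * np.sum(importance * (w - w_star) ** 2)
```

A short usage sketch: after training on task A with final weights `w_star`, compute `F = diagonal_fisher(w_star, X_A, y_A)` once, then minimise `task_B_loss(w) + quadratic_penalty(w, w_star, F)` on task B. The paper's claim is that the `si_importance` accumulator ends up approximating a rescaled version of `F`.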


