Weight Friction: A Simple Method to Overcome Catastrophic Forgetting and Enable Continual Learning

08/02/2019
by   Gabrielle Liu, et al.

In recent years, deep neural networks have found success in replicating human-level cognitive skills, yet they still face several major obstacles. One significant limitation is their inability to learn new tasks without forgetting previously learned ones, a shortcoming known as catastrophic forgetting. In this research, we propose a simple method to overcome catastrophic forgetting and enable continual learning in neural networks. Drawing inspiration from principles in neurology and physics, we develop the concept of weight friction, which operates through a modification to the update rule of gradient descent optimization. It converges at a rate comparable to that of stochastic gradient descent, operates across multiple task domains, and performs comparably to current methods while offering improvements in computation and memory efficiency.
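The abstract does not give the exact update rule, but the intuition of "friction" slowing updates to weights important for earlier tasks can be sketched as a damped gradient-descent step. The function name, the per-weight `friction` coefficients, and the specific damping form `1 / (1 + friction)` below are illustrative assumptions, not the paper's actual formula:

```python
import numpy as np

def sgd_with_friction(w, grad, friction, lr=0.1):
    """Hypothetical weight-friction update (illustrative sketch only).

    Each weight's gradient step is damped by a per-weight friction
    coefficient: zero friction recovers plain SGD, while large friction
    nearly freezes a weight deemed important to previously learned tasks.
    """
    return w - lr * grad / (1.0 + friction)

# Toy example: the second weight is treated as important to an old task
# (high friction), so it barely moves under the same gradient.
w = np.array([1.0, 1.0])
grad = np.array([1.0, 1.0])
friction = np.array([0.0, 9.0])
w_new = sgd_with_friction(w, grad, friction)
# w_new[0] takes a full SGD step (1.0 -> 0.9);
# w_new[1] takes a step damped by 1/10 (1.0 -> 0.99).
```

In this sketch the damping leaves the gradient direction unchanged per coordinate and only rescales step sizes, which is consistent with the claim that convergence remains comparable to stochastic gradient descent.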

