Continual Backprop: Stochastic Gradient Descent with Persistent Randomness

08/13/2021
by Shibhansh Dohare, et al.
University of Alberta

The Backprop algorithm for learning in neural networks utilizes two mechanisms: first, stochastic gradient descent and second, initialization with small random weights, where the latter is essential to the effectiveness of the former. We show that in continual learning setups, Backprop performs well initially, but over time its performance degrades. Stochastic gradient descent alone is insufficient to learn continually; the initial randomness enables only initial learning, not continual learning. To the best of our knowledge, ours is the first result showing this degradation in Backprop's ability to learn. To address this issue, we propose an algorithm that continually injects random features alongside gradient descent using a new generate-and-test process. We call this the Continual Backprop algorithm. We show that, unlike Backprop, Continual Backprop continually adapts in both supervised and reinforcement learning problems. We expect that as continual learning becomes more common in future applications, a method like Continual Backprop will be essential, as it keeps the advantages of random initialization present throughout learning.
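The abstract describes the generate-and-test idea only at a high level. Below is a minimal sketch in PyTorch of what "continually injecting random features" can look like: after each SGD step, a running utility score is kept per hidden unit, and a small fraction of mature, low-utility units is reinitialized with fresh random input weights. The one-hidden-layer architecture, the utility measure (mean activation magnitude times outgoing-weight magnitude), and the parameter names replacement_rate and maturity_threshold are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ContinualBackpropSketch:
    """Minimal sketch of generate-and-test on top of SGD.

    One-hidden-layer MLP. After each gradient step, the lowest-utility
    mature hidden units are re-initialized with fresh random weights,
    keeping a supply of random features available throughout learning.
    """

    def __init__(self, n_in, n_hidden, n_out,
                 lr=0.01, replacement_rate=1e-4, maturity_threshold=100):
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_out)
        self.opt = torch.optim.SGD(
            list(self.fc1.parameters()) + list(self.fc2.parameters()), lr=lr)
        self.replacement_rate = replacement_rate
        self.maturity_threshold = maturity_threshold
        self.age = torch.zeros(n_hidden)    # steps since each unit was (re)born
        self.util = torch.zeros(n_hidden)   # running utility estimate per unit
        self.to_replace = 0.0               # fractional units owed replacement

    def step(self, x, y):
        # Ordinary SGD update.
        h = torch.relu(self.fc1(x))
        loss = nn.functional.mse_loss(self.fc2(h), y)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Generate-and-test: replace a few low-utility features.
        self._generate_and_test(h.detach())
        return loss.item()

    def _generate_and_test(self, h):
        # Utility: mean |activation| times magnitude of outgoing weights.
        contrib = h.abs().mean(dim=0) * self.fc2.weight.detach().abs().sum(dim=0)
        self.util = 0.99 * self.util + 0.01 * contrib
        self.age += 1
        # Only units older than the maturity threshold are candidates.
        mature = self.age > self.maturity_threshold
        self.to_replace += self.replacement_rate * mature.sum().item()
        n = min(int(self.to_replace), int(mature.sum().item()))
        if n == 0:
            return
        self.to_replace -= n
        # Pick the n lowest-utility mature units.
        masked = self.util.masked_fill(~mature, float('inf'))
        idx = torch.topk(masked, n, largest=False).indices
        with torch.no_grad():
            # Fresh random input weights; zero outgoing weights so the
            # new feature does not perturb the network's current output.
            new_w = torch.empty(n, self.fc1.in_features)
            nn.init.kaiming_uniform_(new_w, a=5 ** 0.5)
            self.fc1.weight[idx] = new_w
            self.fc1.bias[idx] = 0.0
            self.fc2.weight[:, idx] = 0.0
        self.util[idx] = 0.0
        self.age[idx] = 0.0
```

Zeroing the replaced units' outgoing weights is the natural design choice here: a newborn random feature then contributes nothing to the output until gradient descent finds a use for it, so replacement never causes a sudden jump in the learned function.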

Related research

Scaling the Number of Tasks in Continual Learning (07/10/2022)
Standard gradient descent algorithms applied to sequences of tasks are k...

Debugging using Orthogonal Gradient Descent (06/17/2022)
In this report we consider the following problem: Given a trained model ...

Continual General Chunking Problem and SyncMap (06/14/2020)
Humans possess an inherent ability to chunk sequences into their constit...

Continual learning using hash-routed convolutional neural networks (10/09/2020)
Continual learning could shift the machine learning paradigm from data c...

Hebbian Continual Representation Learning (06/28/2022)
Continual Learning aims to bring machine learning into a more realistic ...