Learning to learn with backpropagation of Hebbian plasticity

09/08/2016
by Thomas Miconi et al.

Hebbian plasticity is a powerful principle that allows biological brains to learn from their lifetime experience. By contrast, artificial neural networks trained with backpropagation generally have fixed connection weights that do not change once training is complete. While recent methods can endow neural networks with long-term memories, Hebbian plasticity is currently not amenable to gradient descent. Here we derive analytical expressions for activity gradients in neural networks with Hebbian plastic connections. Using these expressions, we can use backpropagation to train not just the baseline weights of the connections, but also their plasticity. As a result, the networks "learn how to learn" in order to solve the problem at hand: the trained networks automatically perform fast learning of unpredictable environmental features during their lifetime, expanding the range of solvable problems. We test the algorithm on various on-line learning tasks, including pattern completion, one-shot learning, and reversal learning. The algorithm successfully learns how to learn the relevant associations from one-shot instruction, and fine-tunes the temporal dynamics of plasticity to allow for continual learning in response to changing environmental parameters. We conclude that backpropagation of Hebbian plasticity offers a powerful model for lifelong learning.
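To make the idea concrete, the sketch below illustrates a layer whose connections combine a fixed weight with a Hebbian trace scaled by a learned per-connection plasticity coefficient, so that backpropagation trains both the baseline weights and the plasticity itself. This is an illustrative reconstruction, not the paper's code: the paper derives analytical gradient expressions, whereas this sketch simply relies on PyTorch autodiff, and the names (PlasticLayer, w, alpha, eta) are assumptions for the example.

```python
import torch
import torch.nn as nn

class PlasticLayer(nn.Module):
    """Illustrative layer with Hebbian plastic connections.

    Effective weight = w (fixed, trained by backprop)
                     + alpha * hebb (plastic part: Hebbian trace gated by a
                       learned per-connection plasticity coefficient).
    Names and initializations are assumptions for this sketch.
    """

    def __init__(self, in_size, out_size):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(out_size, in_size))      # baseline weights
        self.alpha = nn.Parameter(0.01 * torch.randn(out_size, in_size))  # per-connection plasticity
        self.eta = nn.Parameter(torch.tensor(0.1))                        # learning rate of the Hebbian trace

    def forward(self, x, hebb):
        # Output uses the combined fixed + plastic weights.
        y = torch.tanh((self.w + self.alpha * hebb) @ x)
        # Hebbian update: running average of outer products of post- and pre-synaptic activity.
        hebb = (1.0 - self.eta) * hebb + self.eta * torch.outer(y, x)
        return y, hebb

# The Hebbian trace changes within an episode ("lifetime" learning), while
# w, alpha and eta are trained by backpropagation across episodes.
layer = PlasticLayer(20, 20)
hebb = torch.zeros(20, 20)
x = torch.randn(20)
for _ in range(5):                 # a few steps within one episode
    x, hebb = layer(x, hebb)
loss = x.pow(2).sum()              # placeholder episode loss for illustration
loss.backward()                    # gradients flow into w, alpha and eta
```

In this formulation, alpha controls how strongly each connection is affected by its Hebbian trace, and eta controls the temporal dynamics of the trace, which is how the trained network can fine-tune fast, within-lifetime learning while the slow parameters are shaped by gradient descent across tasks.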


