Meta-Learning Bidirectional Update Rules

04/10/2021
by Mark Sandler, et al.

In this paper, we introduce a new type of generalized neural network in which neurons and synapses maintain multiple states. We show that classical gradient-based backpropagation in neural networks can be seen as a special case of a two-state network, where one state holds activations and the other holds gradients, with update rules derived from the chain rule. In our generalized framework, networks have no explicit notion of gradients and never receive them. Instead, synapses and neurons are updated using a bidirectional Hebb-style update rule parameterized by a shared low-dimensional "genome". We show that such genomes can be meta-learned from scratch, using either conventional optimization techniques or evolutionary strategies such as CMA-ES. The resulting update rules generalize to unseen tasks and train faster than gradient-descent-based optimizers on several standard computer vision and synthetic tasks.
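
To make the idea concrete, below is a minimal NumPy sketch in the spirit of the abstract: each neuron carries a forward (activation-like) state and a backward (feedback-like) state, and each synapse is updated by a Hebb-style mix of outer products between those states, weighted by a small "genome" vector. The function names, the particular three-term update, and the hand-picked genome are illustrative assumptions, not the paper's exact formulation; in the paper the genome is meta-learned (e.g., with CMA-ES) rather than set by hand.

```python
# Minimal sketch (illustrative assumptions, not the authors' exact method):
# a two-state network where synapses are updated by a genome-parameterized
# Hebb-style rule instead of explicit gradients.
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # A single weight matrix per layer, kept small for brevity.
    return rng.normal(0, 0.1, size=(n_in, n_out))

def forward(W, x):
    # Forward state: activation-like quantity propagated input -> output.
    return np.tanh(x @ W)

def backward(W, err):
    # Backward state: feedback-like quantity propagated output -> input.
    # In the generalized framework this need not be a true gradient.
    return err @ W.T

def hebb_update(W, pre_fwd, post_fwd, pre_bwd, post_bwd, genome):
    # Bidirectional Hebb-style rule: a genome-weighted mix of outer
    # products between forward and backward neuron states.
    g0, g1, g2, lr = genome
    dW = (g0 * np.outer(pre_fwd, post_fwd)
          + g1 * np.outer(pre_fwd, post_bwd)
          + g2 * np.outer(pre_bwd, post_fwd))
    return W - lr * dW

# Tiny two-layer network on a toy regression target.
W1, W2 = init_layer(4, 8), init_layer(8, 1)
genome = np.array([0.0, 1.0, 0.0, 0.05])  # meta-learned in the paper; fixed here

x = rng.normal(size=4)
target = np.array([0.5])

for step in range(100):
    h = forward(W1, x)
    y = forward(W2, h)
    err_out = y - target              # top-level feedback signal
    err_hid = backward(W2, err_out)   # backward state at the hidden layer
    W2 = hebb_update(W2, h, y, err_hid, err_out, genome)
    W1 = hebb_update(W1, x, h, backward(W1, err_hid), err_hid, genome)

print("final output:", forward(W2, forward(W1, x)))
```

With the hand-picked genome above, the rule collapses to a rough backprop-like update; the point of the paper is that such coefficients can instead be discovered by meta-learning rather than fixed in advance.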

Related research

03/06/2020 · Finding online neural update rules by learning to remember
We investigate learning of the online local update rules for neural acti...

12/29/2020 · Meta Learning Backpropagation And Improving It
Many concepts have been proposed for meta learning with neural networks ...

08/30/2019 · Meta-Learning with Warped Gradient Descent
A versatile and effective approach to meta-learning is to infer a gradie...

12/09/2016 · Learning Representations by Stochastic Meta-Gradient Descent in Neural Networks
Representations are fundamental to artificial intelligence. The performa...

03/31/2018 · Learning Unsupervised Learning Rules
A major goal of unsupervised learning is to discover data representation...

03/17/2021 · Augmenting Supervised Learning by Meta-learning Unsupervised Local Rules
The brain performs unsupervised learning and (perhaps) simultaneous supe...

06/20/2021 · Memory Augmented Optimizers for Deep Learning
Popular approaches for minimizing loss in data-driven learning often inv...