A contrastive rule for meta-learning

04/04/2021
by Nicolas Zucchet, et al.

Meta-learning algorithms leverage regularities that are present in a set of tasks to speed up and improve the performance of a subsidiary learning process. Recent work on deep neural networks has shown that prior gradient-based learning of meta-parameters can greatly improve the efficiency of subsequent learning. Here, we present a biologically plausible meta-learning algorithm based on equilibrium propagation. Instead of explicitly differentiating the learning process, our contrastive meta-learning rule estimates meta-parameter gradients by executing the subsidiary process more than once. This avoids reversing the learning dynamics in time and computing second-order derivatives. Despite this, and unlike previous first-order methods, our rule recovers an arbitrarily accurate meta-parameter update given enough compute. We establish theoretical bounds on its performance and present experiments on a set of standard benchmarks and neural network architectures.
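The contrastive idea behind equilibrium-propagation-style meta-learning can be illustrated on a toy problem: run the inner learning process to equilibrium once freely, once nudged toward the outer objective, and take a finite difference. Below is a minimal sketch under illustrative assumptions (the quadratic losses, the variables x, y, theta, and the nudging strength beta are all invented for this example and are not the paper's setup or experiments):

```python
# Toy setup (illustrative assumptions, not the paper's experiments):
#   inner loss  L_in(w, theta)  = 0.5*(w - x)**2 + 0.5*theta*w**2
#   outer loss  L_out(w)        = 0.5*(w - y)**2
# theta is the meta-parameter (a regularization strength).
x, y, theta = 3.0, 1.0, 0.5
beta = 1e-3  # nudging strength toward the outer loss

def equilibrium(theta, beta):
    # Closed-form minimizer of L_in(w, theta) + beta * L_out(w).
    return (x + beta * y) / (1.0 + theta + beta)

def dLin_dtheta(w):
    # Partial derivative of the inner loss w.r.t. the meta-parameter.
    return 0.5 * w**2

# Contrastive estimate: evaluate the same inner-loss derivative at two
# equilibria (free phase and nudged phase) and finite-difference in beta,
# instead of differentiating through the learning dynamics.
w_free = equilibrium(theta, 0.0)
w_nudged = equilibrium(theta, beta)
grad_est = (dLin_dtheta(w_nudged) - dLin_dtheta(w_free)) / beta

# Exact meta-gradient via implicit differentiation, for comparison:
# w*(theta) = x / (1 + theta), so dL_out/dtheta = (w* - y) * dw*/dtheta.
w_star = x / (1.0 + theta)
grad_true = (w_star - y) * (-x / (1.0 + theta) ** 2)

print(grad_est, grad_true)  # the two agree up to O(beta)
```

The estimate only requires evaluating the inner loss's gradient at two equilibria, which is why no second-order derivatives or time-reversed dynamics are needed; shrinking beta (or, as the abstract notes, spending more compute) makes the estimate arbitrarily accurate.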


Related research

06/19/2021 · EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization
Gradient-based meta-learning and hyperparameter optimization have seen s...

07/04/2022 · The least-control principle for learning at equilibrium
Equilibrium systems are a powerful way to express neural computations. A...

02/09/2020 · Local Nonparametric Meta-Learning
A central goal of meta-learning is to find a learning rule that enables ...

10/24/2022 · MARS: Meta-Learning as Score Matching in the Function Space
Meta-learning aims to extract useful inductive biases from a set of rela...

02/24/2021 · Trajectory-Based Meta-Learning for Out-Of-Vocabulary Word Embedding Learning
Word embedding learning methods require a large number of occurrences of...

05/22/2018 · Meta-Learning with Hessian Free Approach in Deep Neural Nets Training
Meta-learning is a promising method to achieve efficient training method...

09/25/2019 · ES-MAML: Simple Hessian-Free Meta Learning
We introduce ES-MAML, a new framework for solving the model agnostic met...
