
Meta Learning Backpropagation And Improving It

by Louis Kirsch, et al.

Many concepts have been proposed for meta learning with neural networks (NNs), e.g., NNs that learn to control fast weights, hypernetworks, learned learning rules, and meta recurrent NNs. Our Variable Shared Meta Learning (VS-ML) unifies the above and demonstrates that simple weight-sharing and sparsity in an NN are sufficient to express powerful learning algorithms (LAs) in a reusable fashion. A simple implementation of VS-ML, called the VS-ML RNN, implements the backpropagation LA solely by running an RNN in forward mode. It can even meta-learn new LAs that improve upon backpropagation and generalize to datasets outside of the meta-training distribution, without explicit gradient calculation. Introspection reveals that our meta-learned LAs learn qualitatively differently from gradient descent, through fast association.
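The abstract's central claim is that a learning algorithm such as backpropagation can be expressed as one update rule shared across all weights. The sketch below is a hand-coded analogue of that idea, not the paper's VS-ML RNN: it trains a linear model by applying a single shared local rule (here, plain gradient descent built from per-weight forward/backward messages) identically to every weight entry. In VS-ML, the parameters of such a shared rule are meta-learned rather than fixed; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: targets generated by a fixed linear map.
X = rng.normal(size=(32, 4))
w_true = rng.normal(size=(4, 1))
y = X @ w_true

W = rng.normal(size=(4, 1)) * 0.1  # the learner's weights

def shared_rule(w, local_grad, lr=0.05):
    """One update rule reused for every weight entry (weight sharing).
    Here it reproduces gradient descent; VS-ML would meta-learn the
    analogous shared parameters instead of hand-coding them."""
    return w - lr * local_grad

losses = []
for _ in range(500):
    pred = X @ W                 # forward messages
    err = pred - y               # backward messages (dL/dpred for 0.5*MSE)
    grad = X.T @ err / len(X)    # each entry is a product of local messages
    W = shared_rule(W, grad)     # identical rule applied to all weights
    losses.append(float(np.mean(err ** 2)))

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.2e}")
```

Because the rule only sees quantities local to each weight (its input activation and the error signal it receives), it can be shared across an entire network, which is the reusability the abstract refers to.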




Meta-Learning with Warped Gradient Descent

A versatile and effective approach to meta-learning is to infer a gradie...

Learning where to learn: Gradient sparsity in meta and continual learning

Finding neural network weights that generalize well from small datasets ...

Meta-Learning Bidirectional Update Rules

In this paper, we introduce a new type of generalized neural network whe...

ML-misfit: Learning a robust misfit function for full-waveform inversion using machine learning

Most of the available advanced misfit functions for full waveform invers...

A Modern Self-Referential Weight Matrix That Learns to Modify Itself

The weight matrix (WM) of a neural network (NN) is its program. The prog...

Meta Learning in Decentralized Neural Networks: Towards More General AI

Meta-learning usually refers to a learning algorithm that learns from ot...

Enabling Reproducibility and Meta-learning Through a Lifelong Database of Experiments (LDE)

Artificial Intelligence (AI) development is inherently iterative and exp...