A History of Meta-gradient: Gradient Methods for Meta-learning

02/20/2022
by Richard S. Sutton et al.

The history of meta-learning methods based on gradient descent is reviewed, focusing primarily on methods that adapt step-size (learning rate) meta-parameters.
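One canonical example of the step-size-adaptation methods surveyed is Sutton's IDBD algorithm (1992), which performs gradient descent on per-weight log step sizes. The sketch below is a minimal, illustrative Python rendering of the IDBD update for linear regression; the synthetic-data usage and all variable names are my own, and `theta` is the single remaining hand-tuned meta step size.

```python
import numpy as np

def idbd_update(w, beta, h, x, y, theta=0.01):
    """One IDBD step for linear regression (after Sutton, 1992).

    w    : weight vector
    beta : log per-weight step sizes (alpha_i = exp(beta_i))
    h    : memory trace of recent weight updates
    x, y : input vector and scalar target
    theta: meta step size, the only hand-tuned parameter
    """
    delta = y - w @ x                      # prediction error
    beta += theta * delta * x * h          # meta-gradient step on log step sizes
    alpha = np.exp(beta)                   # per-weight step sizes
    w += alpha * delta * x                 # ordinary delta-rule update
    # decay the trace where the step size is large, then add the new update
    h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
    return w, beta, h

# Illustrative usage on synthetic data: only 5 of 20 inputs are relevant,
# so IDBD should grow step sizes for the relevant weights and shrink the rest.
rng = np.random.default_rng(0)
true_w = np.zeros(20)
true_w[:5] = 1.0
w, beta, h = np.zeros(20), np.full(20, np.log(0.05)), np.zeros(20)
for _ in range(5000):
    x = rng.standard_normal(20)
    y = true_w @ x + 0.1 * rng.standard_normal()
    w, beta, h = idbd_update(w, beta, h, x, y)
```

The key design point is that the step sizes are parameterized as exponentials, so the meta-gradient update on `beta` keeps every `alpha_i` positive and lets them range over orders of magnitude.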


Related research

03/05/2022
Meta Mirror Descent: Optimiser Learning for Fast Convergence
Optimisers are an essential component for training machine learning mode...

10/30/2021
One Step at a Time: Pros and Cons of Multi-Step Meta-Gradient Reinforcement Learning
Self-tuning algorithms that adapt the learning process online encourage ...

02/28/2022
Amortized Proximal Optimization
We propose a framework for online meta-optimization of parameters that g...

03/08/2019
Learning Feature Relevance Through Step Size Adaptation in Temporal-Difference Learning
There is a long history of using meta learning as representation learnin...

10/31/2017
Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm
Learning to learn is a powerful paradigm for enabling models to learn fr...

05/27/2022
Incorporating the Barzilai-Borwein Adaptive Step Size into Subgradient Methods for Deep Network Training
In this paper, we incorporate the Barzilai-Borwein step size into gradie... (the classical Barzilai-Borwein step sizes are sketched after this list)

06/19/2020
Meta Learning in the Continuous Time Limit
In this paper, we establish the ordinary differential equation (ODE) tha...
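For context on the Barzilai-Borwein entry above: the two classical BB step sizes are computed from differences of successive iterates and gradients, so they need no manual tuning. The formulas below use the standard notation from Barzilai and Borwein (1988), not anything taken from the listed paper:

```latex
% s_{k-1} = x_k - x_{k-1},  y_{k-1} = g_k - g_{k-1}
% (differences of successive iterates and (sub)gradients)
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}}
```

Both variants approximate a secant condition on the local curvature, which is what makes them attractive as drop-in learning rates for (sub)gradient training.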
