
Meta-learning with negative learning rates

02/01/2021
by Alberto Bernacchia, et al.

Deep learning models require a large amount of data to perform well. When data is scarce for a target task, we can transfer the knowledge gained by training on similar tasks to quickly learn the target. A successful approach is meta-learning, or learning to learn a distribution of tasks, where learning across tasks is represented by an outer loop and learning on each individual task by an inner loop of gradient descent. However, a number of recent empirical studies argue that the inner loop is unnecessary and that simpler models work equally well or even better. We study the performance of MAML as a function of the learning rate of the inner loop, where a zero learning rate implies that there is no inner loop. Using random matrix theory and exact solutions of linear models, we calculate an algebraic expression for the test loss of MAML applied to mixed linear regression and nonlinear regression with overparameterized models. Surprisingly, while the optimal learning rate for adaptation is positive, we find that the optimal learning rate for training is always negative, a setting that has never been considered before. Therefore, not only does performance increase as the training learning rate is decreased toward zero, as suggested by recent work, but it can be increased even further by decreasing it to negative values. These results help clarify under what circumstances meta-learning performs best.
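To make the setup concrete, the sketch below shows MAML on mixed linear regression with a tunable inner-loop learning rate. This is not the authors' code: the task parameters and names such as sample_task, inner_lr_train, and W_MEAN are illustrative assumptions. Because the inner gradient step is affine in the initialization, its Jacobian is available in closed form, so the outer gradient is exact; inner_lr_train can be set negative during meta-training while adaptation at test time still uses a positive rate, mirroring the paper's finding.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 20
W_MEAN = rng.standard_normal(DIM)  # component shared across tasks (assumed setup)

def sample_task(n_support=10, n_query=10, task_std=0.5, noise_std=0.1):
    # Mixed linear regression: each task draws its own weight vector around
    # W_MEAN, with separate support (inner-loop) and query (outer-loop) sets.
    w_task = W_MEAN + task_std * rng.standard_normal(DIM)
    def draw(n):
        X = rng.standard_normal((n, DIM))
        return X, X @ w_task + noise_std * rng.standard_normal(n)
    return draw(n_support), draw(n_query)

def grad(w, X, y):
    # Gradient of the mean squared loss (1/n) * ||X w - y||^2.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def meta_train(n_iters=5000, inner_lr_train=-0.05, outer_lr=0.002):
    # MAML outer loop: meta-learn the initialization w0 through one inner
    # gradient step. inner_lr_train < 0 illustrates the negative training
    # learning rate studied in the paper.
    w0 = np.zeros(DIM)
    for _ in range(n_iters):
        (Xs, ys), (Xq, yq) = sample_task()
        w_adapted = w0 - inner_lr_train * grad(w0, Xs, ys)
        # The inner step is affine in w0, so its Jacobian is exact:
        # J = I - inner_lr_train * 2 Xs^T Xs / n_support.
        J = np.eye(DIM) - inner_lr_train * 2.0 * Xs.T @ Xs / len(ys)
        w0 -= outer_lr * J.T @ grad(w_adapted, Xq, yq)
    return w0

# At test time, adapt to a new task with a *positive* learning rate.
w0 = meta_train()
(Xs, ys), (Xq, yq) = sample_task()
w_test = w0 - 0.05 * grad(w0, Xs, ys)
print("query loss after adaptation:", np.mean((Xq @ w_test - yq) ** 2))
```

One can sweep inner_lr_train over a grid spanning negative and positive values and compare the resulting query losses; under this linear, overparameterized setup the paper's analysis predicts the minimum at a negative training rate.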

Related research

03/18/2022 · Negative Inner-Loop Learning Rates Learn Universal Features
Model Agnostic Meta-Learning (MAML) consists of two optimization loops: ...

02/07/2021 · Meta-Learning with Neural Tangent Kernels
Model Agnostic Meta-Learning (MAML) has emerged as a standard framework ...

10/31/2020 · Meta-Learning with Adaptive Hyperparameters
Despite its popularity, several recent works question the effectiveness ...

09/08/2021 · Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Task Generalization
While artificial neural networks (ANNs) have been widely adopted in mach...

04/19/2022 · Metappearance: Meta-Learning for Visual Appearance Reproduction
There currently are two main approaches to reproducing visual appearance...

09/03/2020 · Equal partners do better in defensive alliances
Cyclic dominance offers not just a way to maintain biodiversity, but als...

02/28/2022 · Amortized Proximal Optimization
We propose a framework for online meta-optimization of parameters that g...