A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases

09/22/2022
by James Harrison, et al.

Learned optimizers – neural networks that are trained to act as optimizers – have the potential to dramatically accelerate training of machine learning models. However, even when meta-trained across thousands of tasks at huge computational expense, blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set. In this paper, we use tools from dynamical systems to investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox optimizers. Our investigation begins with a noisy quadratic model, where we characterize the conditions under which optimization is stable, in terms of the eigenvalues of the training dynamics. We then introduce simple modifications to a learned optimizer's architecture and meta-training procedure which lead to improved stability and a better inductive bias. We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state-of-the-art learned optimizer – at matched optimizer computational overhead – with regard to optimization performance and meta-training speed, and is capable of generalizing to tasks far different from those it was meta-trained on.
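The eigenvalue-based stability condition mentioned in the abstract can be illustrated with a standard textbook example (a hypothetical sketch, not the paper's exact analysis): for plain gradient descent on a quadratic, the iteration is stable exactly when every eigenvalue of the update map lies inside the unit circle, i.e. when the step size satisfies alpha < 2 / lambda_max.

```python
import numpy as np

# Illustrative sketch: gradient descent on f(x) = 0.5 * x^T H x gives the
# linear update x_{t+1} = (I - alpha * H) x_t. The iteration is stable iff
# every eigenvalue of (I - alpha * H) lies strictly inside the unit circle,
# which for symmetric positive-definite H means alpha < 2 / lambda_max(H).

def gd_stable(H, alpha):
    """Return True if gradient descent with step size `alpha` is stable on H."""
    eigvals = np.linalg.eigvalsh(H)                 # eigenvalues of the Hessian
    return bool(np.max(np.abs(1.0 - alpha * eigvals)) < 1.0)

H = np.diag([1.0, 10.0])        # curvatures 1 and 10, so lambda_max = 10
print(gd_stable(H, 0.15))       # True: 0.15 < 2/10
print(gd_stable(H, 0.25))       # False: 0.25 > 2/10, the steep direction diverges
```

The same spectral-radius criterion extends to momentum and preconditioned updates by analyzing the corresponding (larger) linear system, which is the style of analysis the noisy quadratic model enables.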


Related research

03/14/2017 · Learned Optimizers that Scale and Generalize
Learning to learn has emerged as an important direction for achieving ar...

06/30/2020 · Guarantees for Tuning the Step Size using a Learning-to-Learn Approach
Learning-to-learn (using optimization algorithms to learn a new optimize...

11/29/2022 · Learning to Optimize with Dynamic Mode Decomposition
Designing faster optimization algorithms is of ever-growing interest. In...

06/06/2019 · One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
The success of lottery ticket initializations (Frankle and Carbin, 2019)...

12/06/2021 · Noether Networks: Meta-Learning Useful Conserved Quantities
Progress in machine learning (ML) stems from a combination of data avail...

06/05/2019 · Risks from Learned Optimization in Advanced Machine Learning Systems
We analyze the type of learned optimization that occurs when a learned m...

06/08/2019 · Using learned optimizers to make models robust to input noise
State-of-the art vision models can achieve superhuman performance on ima...
