Online Hyperparameter Meta-Learning with Hypergradient Distillation

10/06/2021
by Hae Beom Lee, et al.

Many gradient-based meta-learning methods assume a set of parameters that do not participate in the inner optimization; these can be treated as hyperparameters. Although such hyperparameters can be optimized with existing gradient-based hyperparameter optimization (HO) methods, those methods suffer from the following issues: unrolled differentiation does not scale well to high-dimensional hyperparameters or long horizons, methods based on the Implicit Function Theorem (IFT) are restrictive for online optimization, and short-horizon approximations suffer from short-horizon bias. In this work, we propose a novel HO method that overcomes these limitations by approximating the second-order term with knowledge distillation. Specifically, we parameterize a single Jacobian-vector product (JVP) for each HO step and minimize its distance from the true second-order term. Our method supports online optimization and scales with both the hyperparameter dimension and the horizon length. We demonstrate the effectiveness of our method on two different meta-learning methods and three benchmark datasets.
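To make the idea concrete, below is a minimal, hypothetical sketch (in JAX, not the authors' code) of distilling the second-order term of a one-step hypergradient into a learned surrogate. The exact term, the vector-Jacobian product of the inner update with the outer gradient, is computed once via automatic differentiation, and a small parameterized map is trained to match it. The names `inner_loss`, `outer_loss`, `surrogate`, and `phi`, the regularizer form, the learning rates, and the plain linear surrogate are all illustrative assumptions; the parameterization and training schedule in the paper may differ.

```python
import jax
import jax.numpy as jnp

def inner_loss(theta, lam, batch):
    x, y = batch
    pred = x @ theta
    # Illustrative choice: lam parameterizes an L2 regularizer on theta.
    return jnp.mean((pred - y) ** 2) + jnp.sum(jax.nn.softplus(lam) * theta ** 2)

def outer_loss(theta, val_batch):
    x, y = val_batch
    return jnp.mean((x @ theta - y) ** 2)

def second_order_term(theta, lam, batch, v, lr=0.1):
    # Exact term (d theta' / d lam)^T v for one unrolled SGD step,
    # where theta' = theta - lr * d inner_loss / d theta, via autodiff.
    def updated_theta(lam_):
        return theta - lr * jax.grad(inner_loss)(theta, lam_, batch)
    _, vjp_fn = jax.vjp(updated_theta, lam)
    return vjp_fn(v)[0]

def surrogate(phi, v):
    # Learned stand-in for the expensive second-order term:
    # here simply a linear map of the outer gradient v (an assumption).
    return phi @ v

def distill_loss(phi, theta, lam, batch, v):
    # Knowledge-distillation objective: match the surrogate to the exact term.
    return jnp.sum((surrogate(phi, v) - second_order_term(theta, lam, batch, v)) ** 2)

def ho_step(theta, lam, phi, batch, val_batch, lr=0.1, ho_lr=0.01):
    # One online HO step (sketch): take an inner step, compute the outer
    # gradient v, fit phi to the exact term, then update lam with the surrogate.
    theta_new = theta - lr * jax.grad(inner_loss)(theta, lam, batch)
    v = jax.grad(outer_loss)(theta_new, val_batch)
    phi = phi - ho_lr * jax.grad(distill_loss)(phi, theta, lam, batch, v)
    # outer_loss has no direct dependence on lam here, so only the
    # distilled indirect term enters the hypergradient.
    lam = lam - ho_lr * surrogate(phi, v)
    return theta_new, lam, phi
```

In this toy version the surrogate is refit against the exact one-step term at every HO step; the appeal of the distillation view is that the learned map can then stand in for the second-order computation, keeping the cost per step low as the hyperparameter dimension and horizon grow.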


Related research

- 06/19/2021 · EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization
- 03/06/2018 · Understanding Short-Horizon Bias in Stochastic Meta-Optimization
- 07/15/2020 · Non-greedy Gradient-based Hyperparameter Optimization Over Long Horizons
- 11/02/2021 · Meta-Learning to Improve Pre-Training
- 02/20/2023 · Nystrom Method for Accurate and Scalable Implicit Differentiation
- 06/13/2018 · Far-HO: A Bilevel Programming Package for Hyperparameter Optimization and Meta-Learning
- 10/13/2021 · ES-Based Jacobian Enables Faster Bilevel Optimization
