Training Stronger Baselines for Learning to Optimize

10/18/2020
by Tianlong Chen, et al.

Learning to optimize (L2O) has gained increasing attention because classical optimizers require laborious, problem-specific design and hyperparameter tuning. However, a gap remains between the practical demands and the achievable performance of existing L2O models: the learned optimizers apply to only a limited class of problems and often exhibit instability. While much effort has been devoted to designing more sophisticated L2O models, we argue for an orthogonal, under-explored theme: the training techniques for those L2O models. We show that even the simplest L2O model could have been trained much better. We first present a progressive training scheme that gradually increases the optimizer unroll length, mitigating the well-known L2O dilemma of truncation bias (shorter unrolling) versus gradient explosion (longer unrolling). We further leverage off-policy imitation learning to guide L2O training, using the behavior of analytical optimizers as a reference. Our improved training techniques plug into a variety of state-of-the-art L2O models and immediately boost their performance, without any change to their model structures. In particular, with our proposed techniques, one of the earliest and simplest L2O models can be trained to outperform the latest, more complicated L2O models on a number of tasks. Our results demonstrate a greater potential of L2O yet to be unleashed, and urge a rethinking of recent progress. Our code is publicly available at: https://github.com/VITA-Group/L2O-Training-Techniques.
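The two techniques can be made concrete with a short sketch. The PyTorch code below is a minimal illustration, not the authors' released implementation: the names and hyperparameters (LSTMOptimizer, sample_quadratic, adam_direction, meta_step, the unroll schedule, the imitation weight) are all assumptions chosen for the sketch. It meta-trains a coordinate-wise LSTM optimizer on toy quadratics with (1) an unroll length that grows over curriculum stages and (2) an imitation term that pulls each learned update toward the step an analytical Adam would take at the same point.

```python
# Hedged sketch of the two training techniques described in the abstract.
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Coordinate-wise LSTM optimizer in the style of the earliest L2O models."""
    def __init__(self, hidden=20):
        super().__init__()
        self.lstm = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state):
        # grad: (n_params, 1); each coordinate is treated as a batch element.
        h, c = self.lstm(grad, state)
        return 0.1 * self.head(h), (h, c)  # small output scale stabilizes early training

def sample_quadratic(dim=10):
    """Random convex quadratic f(x) = ||Wx - y||^2 as a toy optimizee."""
    W, y = torch.randn(dim, dim), torch.randn(dim)
    return lambda x: ((W @ x - y) ** 2).sum()

def adam_direction(grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One analytical Adam step, used here as the off-policy imitation target."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return -lr * m_hat / (v_hat.sqrt() + eps), m, v

def meta_step(l2o, meta_opt, unroll, imit_weight=1.0, dim=10):
    """One meta-training step: unroll the learned optimizer and backprop through time."""
    f = sample_quadratic(dim)
    x = torch.zeros(dim, requires_grad=True)
    h = l2o.lstm.hidden_size
    state = (torch.zeros(dim, h), torch.zeros(dim, h))
    m = v = torch.zeros(dim)
    meta_loss = 0.0
    for t in range(1, unroll + 1):
        loss = f(x)
        # create_graph=True keeps the trajectory differentiable for BPTT.
        grad, = torch.autograd.grad(loss, x, create_graph=True)
        update, state = l2o(grad.unsqueeze(-1), state)
        update = update.squeeze(-1)
        # Imitation term: pull the learned step toward the teacher's step,
        # computed along the learned optimizer's own trajectory (off-policy).
        teacher, m, v = adam_direction(grad.detach(), m, v, t)
        x = x + update
        meta_loss = meta_loss + f(x) + imit_weight * ((update - teacher) ** 2).sum()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()

l2o = LSTMOptimizer()
meta_opt = torch.optim.Adam(l2o.parameters(), lr=1e-3)
# Progressive curriculum: short unrolls first (little risk of exploding
# meta-gradients), then progressively longer ones (less truncation bias).
for unroll in (5, 10, 20, 40):
    for _ in range(100):
        meta_step(l2o, meta_opt, unroll)
```

Because the Adam targets are computed along the trajectory produced by the learned optimizer rather than by Adam itself, the imitation signal is off-policy; the curriculum starts where meta-gradients are stable and lengthens the unroll to shrink truncation bias, which is exactly the trade-off the abstract describes.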


