M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation

02/28/2023
by Junjie Yang, et al.

Learning to Optimize (L2O) has drawn increasing attention, as it often remarkably accelerates the optimization of complex tasks by "overfitting" to a specific task type, leading to enhanced performance compared to analytical optimizers. Generally, L2O develops a parameterized optimization method (i.e., an "optimizer") by learning from solving sample problems. This data-driven procedure yields an L2O model that can efficiently solve problems similar to those seen in training, that is, drawn from the same "task distribution". However, such learned optimizers often struggle when new test problems deviate substantially from the training task distribution. This paper investigates a potential solution to this open challenge: meta-training an L2O optimizer that can perform fast test-time self-adaptation to an out-of-distribution task in only a few steps. We theoretically characterize the generalization of L2O, and further show that our proposed framework (termed M-L2O) provably facilitates rapid task adaptation by locating well-adapted initial points for the optimizer weights. Empirical observations on several classic tasks, such as LASSO and quadratic minimization, demonstrate that M-L2O converges significantly faster than vanilla L2O with only 5 steps of adaptation, echoing our theoretical results. Code is available at https://github.com/VITA-Group/M-L2O.
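To make the test-time self-adaptation idea concrete, below is a minimal PyTorch sketch of adapting a learned optimizer to a new task in a handful of steps. It is illustrative only: the coordinate-wise optimizer network, the random quadratic task sampler, the unroll length, and all hyperparameters are assumptions, not the authors' implementation (see the repository above for the actual M-L2O code).

```python
# Minimal sketch: few-step test-time adaptation of a learned optimizer.
# Assumes the optimizer weights were already meta-trained (as in M-L2O).
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Tiny coordinate-wise optimizer: maps each gradient entry to an update."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, grad):
        # grad: (d,) -> update: (d,)
        return self.net(grad.unsqueeze(-1)).squeeze(-1)

def sample_quadratic(d=10):
    """Random quadratic task f(x) = mean((Ax - b)^2), a stand-in task distribution."""
    A, b = torch.randn(d, d), torch.randn(d)
    return lambda x: ((A @ x - b) ** 2).mean()

def unroll(opt_net, task, d=10, steps=20):
    """Run the learned optimizer on a task; return the summed trajectory loss."""
    x = torch.zeros(d, requires_grad=True)
    total = 0.0
    for _ in range(steps):
        loss = task(x)
        g, = torch.autograd.grad(loss, x, create_graph=True)
        x = x - 0.01 * opt_net(g)          # apply the learned update rule
        total = total + loss
    return total

opt_net = LearnedOptimizer()               # pretend these weights are meta-trained
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

ood_task = sample_quadratic()              # stand-in for an out-of-distribution task
for _ in range(5):                         # fast self-adaptation: only 5 steps
    meta_opt.zero_grad()
    unroll(opt_net, ood_task).backward()   # backprop through the unrolled trajectory
    meta_opt.step()
```

The key design point the sketch mirrors is that adaptation updates the optimizer's own weights, not the optimizee, and that only a few such updates are taken from a well-chosen meta-trained initialization.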


