Adaptation-Agnostic Meta-Training

08/24/2021
by   Jiaxin Chen, et al.

Many meta-learning algorithms can be formulated as an interleaved process: task-specific predictors are learned during inner-task adaptation, and meta-parameters are updated during the meta-update. The standard meta-training strategy must differentiate through the inner-task adaptation procedure to optimize the meta-parameters, which constrains the inner-task algorithm to admit an analytical solution. Under this constraint, only simple algorithms with closed-form solutions can serve as inner-task algorithms, limiting model expressiveness. To lift this limitation, we propose an adaptation-agnostic meta-training strategy. With it, stronger algorithms (e.g., an ensemble of different types of algorithms) can be applied as the inner-task algorithm to achieve superior performance compared with popular baselines. The source code is available at https://github.com/jiaxinchen666/AdaptationAgnosticMetaLearning.
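The interleaved process described in the abstract can be sketched in a toy setting. The sketch below is our own illustration, not the paper's implementation: the task distribution, the linear embedding as meta-parameters, and the least-squares inner solver are all assumptions. The key idea it demonstrates is adaptation-agnosticism: the inner-task solver is called as a black box, and the meta-update differentiates the query loss only with respect to the meta-parameters, holding the adapted task-specific predictor fixed rather than backpropagating through the adaptation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden low-dimensional structure shared across tasks (toy task
# distribution for illustration only).
P = rng.standard_normal((2, 3))

def make_task(n_support=10, n_query=10):
    a = rng.standard_normal(2)  # task-specific coefficients
    def sample(n):
        X = rng.standard_normal((n, 3))
        y = X @ (P.T @ a)
        return X, y
    return sample(n_support), sample(n_query)

def inner_adapt(features, targets):
    # Black-box inner-task algorithm (here: ordinary least squares).
    # Because the meta-update never differentiates through this call,
    # it could be swapped for any solver, even one with no closed form.
    w, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return w

def query_loss(W, task):
    (Xs, ys), (Xq, yq) = task
    w = inner_adapt(Xs @ W.T, ys)       # inner-task adaptation
    err = (Xq @ W.T) @ w - yq           # query-set residuals
    return np.mean(err ** 2), w, err, Xq

def meta_step(W, task, lr=0.02):
    loss, w, err, Xq = query_loss(W, task)
    # Gradient of the query loss w.r.t. the meta-parameters W, with the
    # adapted predictor w held fixed (no gradient through inner_adapt).
    grad = (2.0 / len(err)) * (err[:, None] * w[None, :]).T @ Xq
    return W - lr * grad, loss

W = 0.5 * rng.standard_normal((2, 3))   # meta-parameters: feature embedding
eval_tasks = [make_task() for _ in range(20)]
before = np.mean([query_loss(W, t)[0] for t in eval_tasks])
for _ in range(500):
    W, _ = meta_step(W, make_task())
after = np.mean([query_loss(W, t)[0] for t in eval_tasks])
print(f"held-out query MSE: {before:.3f} -> {after:.3f}")
```

Because `inner_adapt` is never differentiated through, replacing it with a stronger learner (e.g., an ensemble of heterogeneous solvers, as the abstract suggests) requires no change to the meta-update.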


