Learned Fine-Tuner for Incongruous Few-Shot Learning

09/29/2020
by Pu Zhao, et al.

Model-agnostic meta-learning (MAML) effectively meta-learns an initialization of model parameters for few-shot learning where all learning problems share the same format of model parameters – congruous meta-learning. We extend MAML to incongruous meta-learning where different yet related few-shot learning problems may not share any model parameters. In this setup, we propose the use of a Learned Fine-Tuner (LFT) to replace hand-designed optimizers (such as SGD) for the task-specific fine-tuning. The meta-learned initialization in MAML is replaced by learned optimizers based on the learning-to-optimize (L2O) framework to meta-learn across incongruous tasks such that models fine-tuned with LFT (even from random initializations) adapt quickly to new tasks. The introduction of LFT within MAML (i) offers the capability to tackle few-shot learning tasks by meta-learning across incongruous yet related problems (e.g., classification over images of different sizes and model architectures), and (ii) can efficiently work with first-order and derivative-free few-shot learning problems. Theoretically, we quantify the difference between LFT (for MAML) and L2O. Empirically, we demonstrate the effectiveness of LFT through both synthetic and real problems and a novel application of generating universal adversarial attacks across different image sources in the few-shot learning regime.

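The core idea is easiest to see in pseudocode. Below is a minimal, illustrative sketch (an assumption on my part, not the authors' released code) of replacing MAML's hand-designed inner-loop update p <- p - lr * grad with a learned fine-tuner, here taken to be a coordinate-wise LSTM in the L2O style, meta-trained across tasks so that fine-tuning even from random initializations adapts quickly. The names LearnedFineTuner, fine_tune, and task_loss, and the hidden size, are hypothetical choices for illustration.

# Minimal sketch (assumed implementation, not the paper's code): MAML's
# hand-designed inner-loop SGD step is replaced by a learned fine-tuner,
# modeled here as a coordinate-wise LSTM optimizer (L2O style).
import torch
import torch.nn as nn


class LearnedFineTuner(nn.Module):
    """Coordinate-wise LSTM optimizer: per-parameter gradient in, update out."""

    def __init__(self, hidden_size: int = 20):
        super().__init__()
        self.lstm = nn.LSTMCell(1, hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, grad, state):
        g = grad.reshape(-1, 1)               # one LSTM "example" per coordinate
        h, c = self.lstm(g, state)
        update = self.out(h).reshape(grad.shape)
        return update, (h, c)


def fine_tune(init_params, lft, task_loss, steps=5):
    """Inner loop: adapt task parameters with the learned fine-tuner instead of
    the usual MAML update p <- p - lr * grad."""
    # Start from (possibly random) initializations; meta-gradients flow only
    # through the LFT's updates, not through a shared meta-learned init.
    params = [p.detach().clone().requires_grad_(True) for p in init_params]
    states = [None] * len(params)
    for _ in range(steps):
        loss = task_loss(params)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        new_params = []
        for i, (p, g) in enumerate(zip(params, grads)):
            if states[i] is None:
                h = torch.zeros(g.numel(), lft.lstm.hidden_size)
                states[i] = (h, h.clone())
            update, states[i] = lft(g, states[i])
            new_params.append(p + update)
        params = new_params
    return params  # differentiable w.r.t. lft's weights -> meta-train the LFT

In this sketch the returned parameters remain differentiable with respect to the fine-tuner's weights, so a meta-objective evaluated on held-out task data can be backpropagated into the LFT across incongruous tasks; the first-order and derivative-free settings mentioned in the abstract would replace the exact inner-loop gradients with approximations.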
Related research

02/20/2021
On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning
Model-agnostic meta-learning (MAML) has emerged as one of the most succe...

05/17/2019
Alpha MAML: Adaptive Model-Agnostic Meta-Learning
Model-agnostic meta-learning (MAML) is a meta-learning technique to trai...

06/09/2021
Attentional meta-learners are polythetic classifiers
Polythetic classifications, based on shared patterns of features that ne...

09/06/2019
Efficient Automatic Meta Optimization Search for Few-Shot Learning
Previous works on meta-learning either relied on elaborately hand-design...

10/30/2019
Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation
Model-agnostic meta-learners aim to acquire meta-learned parameters from...

04/26/2022
Meta-free few-shot learning via representation learning with weight averaging
Recent studies on few-shot classification using transfer learning pose c...

11/20/2020
One Shot Learning for Speech Separation
Despite the recent success of speech separation models, they fail to sep...
