Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning

10/08/2021
by Sungyong Baik, et al.

In few-shot learning, the challenge is to generalize to new, unseen examples when only a handful of labeled examples are available for each task. Model-agnostic meta-learning (MAML) has gained popularity as a representative few-shot learning method for its flexibility and applicability to diverse problems. However, MAML and its variants often resort to a simple loss function without any auxiliary loss terms or regularizers that could help achieve better generalization. The problem is that each application and task may require different auxiliary loss functions, especially when tasks are diverse and distinct. Instead of hand-designing an auxiliary loss function for each application and task, we introduce a new meta-learning framework with a loss function that adapts to each task. Our proposed framework, named Meta-Learning with Task-Adaptive Loss Function (MeTAL), demonstrates effectiveness and flexibility across various domains, such as few-shot classification and few-shot regression.
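The core idea (a task-adaptive loss driving MAML-style inner-loop adaptation) can be sketched in a few lines. The code below is a rough, dependency-light illustration, not the authors' implementation: the two-term weighted loss, the parameter names (`phi`, `inner_adapt`), and the finite-difference gradients are all assumptions made to keep the sketch self-contained.

```python
import numpy as np

def fixed_mse(pred, target):
    """A conventional fixed loss, for comparison."""
    return np.mean((pred - target) ** 2)

def adaptive_loss(pred, target, phi):
    """Toy 'learned' loss: a weighted mix of MSE and MAE terms.

    In MeTAL the loss is a learned network whose parameters adapt per
    task; here phi is just a 2-vector of term weights (illustrative
    assumption), passed through softplus to keep the weights positive.
    """
    w = np.log1p(np.exp(phi))  # softplus
    mse = np.mean((pred - target) ** 2)
    mae = np.mean(np.abs(pred - target))
    return w[0] * mse + w[1] * mae

def inner_adapt(theta, phi, x, y, inner_lr=0.1, steps=5, eps=1e-4):
    """MAML-style inner loop: adapt linear-model weights theta on a
    task's support set (x, y) by gradient descent on the adaptive loss.

    Gradients are estimated by forward finite differences so the sketch
    needs no autodiff library.
    """
    theta = theta.copy()
    for _ in range(steps):
        base = adaptive_loss(x @ theta, y, phi)
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            t = theta.copy()
            t[i] += eps
            grad[i] = (adaptive_loss(x @ t, y, phi) - base) / eps
        theta -= inner_lr * grad
    return theta
```

In a full meta-learning setup, an outer loop would also update the loss parameters (here `phi`) across tasks so that inner-loop adaptation with the learned loss yields good query-set performance; that outer loop is omitted for brevity.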

Related research

- VIABLE: Fast Adaptation via Backpropagating Learned Loss (11/29/2019)
  In few-shot learning, typically, the loss function which is applied at t...
- Few-Shot Classification on Unseen Domains by Learning Disparate Modulators (09/11/2019)
  Although few-shot learning studies have advanced rapidly with the help o...
- Multi-Task Meta-Learning Modification with Stochastic Approximation (10/25/2021)
  Meta-learning methods aim to build learning algorithms capable of quickl...
- Meta-Learning via Learned Loss (06/12/2019)
  We present a meta-learning approach based on learning an adaptive, high-...
- Interval Bound Propagation–aided Few-shot Learning (04/07/2022)
  Few-shot learning aims to transfer the knowledge acquired from training ...
- Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time (09/22/2020)
  From CNNs to attention mechanisms, encoding inductive biases into neural...
