Robust Meta-Representation Learning via Global Label Inference and Classification

12/22/2022
by Ruohan Wang, et al.

Few-shot learning (FSL) is a central problem in meta-learning, where learners must efficiently learn from a few labeled examples. Within FSL, feature pre-training has recently become an increasingly popular strategy to significantly improve generalization performance. However, the contribution of pre-training is often overlooked and understudied, with limited theoretical understanding of its impact on meta-learning performance. Further, pre-training requires a consistent set of global labels shared across training tasks, which may be unavailable in practice. In this work, we address the above issues by first showing the connection between pre-training and meta-learning. We discuss why pre-training yields more robust meta-representations and connect the theoretical analysis to existing work and empirical results. Second, we introduce Meta Label Learning (MeLa), a novel meta-learning algorithm that learns task relations by inferring global labels across tasks. This allows us to exploit pre-training for FSL even when global labels are unavailable or ill-defined. Finally, we introduce an augmented pre-training procedure that further improves the learned meta-representation. Empirically, MeLa outperforms existing methods across a diverse range of benchmarks, in particular in a more challenging setting where the number of training tasks is limited and labels are task-specific. We also provide extensive ablation studies to highlight its key properties.


