Similarity of Classification Tasks

01/27/2021
by Cuong Nguyen, et al.

Recent advances in meta-learning have led to remarkable performance on several few-shot learning benchmarks. However, such success often ignores the similarity between training and testing tasks, resulting in a potentially biased evaluation. We therefore propose a generative approach, based on a variant of Latent Dirichlet Allocation, to analyse task similarity in order to optimise and better understand the performance of meta-learning. We demonstrate that the proposed method provides an insightful evaluation of meta-learning algorithms on two few-shot classification benchmarks, matching the common intuition that the more similar the training and testing tasks are, the higher the performance. Based on this similarity measure, we propose a task-selection strategy for meta-learning and show that it produces more accurate classification results than methods that select training tasks at random.
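The abstract describes two ideas: embedding classification tasks with an LDA-style generative model to measure task similarity, and then selecting training tasks most similar to the target. The sketch below illustrates that pipeline only in spirit; the paper uses a variant of LDA, whereas this uses scikit-learn's standard LatentDirichletAllocation, and the synthetic count-vector task representation and cosine similarity are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (assumptions, not the paper's exact method): represent each
# task as a count vector over a shared discrete "vocabulary", embed tasks as
# LDA topic mixtures, and rank candidate training tasks by similarity to a target.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Synthetic tasks: 10 tasks over 20 discrete features. Tasks 0-4 concentrate on
# the first 10 features, tasks 5-9 on the last 10, so two clear "task topics".
tasks = np.zeros((10, 20), dtype=int)
tasks[:5, :10] = rng.poisson(5.0, size=(5, 10))
tasks[5:, 10:] = rng.poisson(5.0, size=(5, 10))

# Embed every task as a mixture over latent topics (rows of theta sum to 1).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(tasks)

def cosine_sim(u, v):
    # Cosine similarity between two topic-mixture vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Task-selection step: rank candidate training tasks 0-8 by similarity to the
# held-out target task 9, and pick the most similar ones for meta-training.
target = theta[9]
ranking = sorted(range(9), key=lambda i: cosine_sim(theta[i], target), reverse=True)
print("most similar tasks:", ranking[:3])
```

Because tasks 5-8 share their feature support with the target task 9, they should dominate the top of the ranking, mirroring the paper's intuition that more similar training tasks yield better performance.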


Related research:

10/05/2020 · Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms
Most of existing deep learning models rely on excessive amounts of label...

01/04/2023 · Task Weighting in Meta-learning with Trajectory Optimisation
Developing meta-learning algorithms that are un-biased toward a subset o...

10/12/2022 · The Devil is in the Details: On Models and Training Regimes for Few-Shot Intent Classification
Few-shot Intent Classification (FSIC) is one of the key challenges in mo...

02/23/2021 · Lessons from Chasing Few-Shot Learning Benchmarks: Rethinking the Evaluation of Meta-Learning Methods
In this work we introduce a simple baseline for meta-learning. Our uncon...

07/17/2020 · Adaptive Task Sampling for Meta-Learning
Meta-learning methods have been extensively studied and applied in compu...

07/11/2020 · Coarse-to-Fine Pseudo-Labeling Guided Meta-Learning for Few-Shot Classification
To endow neural networks with the potential to learn rapidly from a hand...

02/16/2023 · Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning?
Prompt tuning (PT) which only tunes the embeddings of an additional sequ...
