TADAM: Task dependent adaptive metric for improved few-shot learning

05/23/2018
by Boris N. Oreshkin et al.

Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.
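The metric-scaling idea can be made concrete with a small sketch. Below is a minimal NumPy illustration of a prototypical-network-style classifier (the metric-learning setup TADAM builds on), where class logits are negative squared Euclidean distances from a query embedding to the class prototypes, multiplied by a scalar scale alpha before the softmax. All names (scaled_metric_logits, alpha, the toy data) are illustrative and not the authors' code; task conditioning, the paper's second ingredient, would additionally modulate the feature extractor with task-dependent parameters and is omitted here.

```python
import numpy as np

def scaled_metric_logits(queries, prototypes, alpha):
    """Class logits: alpha-scaled negative squared Euclidean distance
    between each query embedding and each class prototype."""
    # queries: (n_query, d), prototypes: (n_way, d) -> logits: (n_query, n_way)
    d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return -alpha * d2

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy 5-way task with 8-dimensional embeddings (values are illustrative).
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 8))  # per-class means of support embeddings
queries = rng.normal(size=(3, 8))

for alpha in (1.0, 10.0):  # alpha plays the role of the learned scale
    probs = softmax(scaled_metric_logits(queries, prototypes, alpha))
    print(f"alpha={alpha}: max class prob per query = {probs.max(axis=-1)}")
```

Running the toy example shows the effect the abstract refers to: increasing alpha sharpens the softmax posteriors, and in training this changes which terms dominate the cross-entropy gradient, i.e., the nature of the parameter updates.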

Related research

Variational Metric Scaling for Metric-Based Meta-Learning (12/26/2019)
Metric-based meta-learning has attracted a lot of attention due to its e...

ReMP: Rectified Metric Propagation for Few-Shot Learning (12/02/2020)
Few-shot learning features the capability of generalizing from a few exa...

Adaptive Poincaré Point to Set Distance for Few-Shot Classification (12/03/2021)
Learning and generalizing from limited examples, i.e., few-shot learning,...

Learning Instance and Task-Aware Dynamic Kernels for Few Shot Learning (12/07/2021)
Learning and generalizing to novel concepts with few samples (Few-Shot L...

Analogy-Forming Transformers for Few-Shot 3D Parsing (04/27/2023)
We present Analogical Networks, a model that encodes domain knowledge ex...

Metric Based Few-Shot Graph Classification (06/08/2022)
Many modern deep-learning techniques do not work without enormous datase...

Attentive Recurrent Comparators (03/02/2017)
Rapid learning requires flexible representations to quickly adapt to new...
