Unsupervised Meta-Learning For Few-Shot Image and Video Classification

11/28/2018
by Siavash Khodadadeh, et al.

Few-shot or one-shot learning of classifiers for images or videos is an important next frontier in computer vision. The extreme paucity of training data means that learning must start with a significant inductive bias towards the type of task to be learned. One way to acquire this bias is by meta-learning on tasks similar to the target task. However, if the meta-learning phase requires labeled data for a large number of tasks closely related to the target task, it not only increases the difficulty and cost, but also conceptually limits the approach to variations of well-understood domains. In this paper, we propose UMTRA, an algorithm that performs meta-learning on an unlabeled dataset in an unsupervised fashion, without putting any constraint on the classifier network architecture. The only requirements on the dataset are sufficient size, diversity, and number of classes, and relevance of its domain to that of the target task. Exploiting this information, UMTRA generates synthetic training tasks for the meta-learning phase. We evaluate UMTRA on few-shot and one-shot learning in both the image and video domains. To the best of our knowledge, we are the first to evaluate meta-learning approaches on UCF-101. On the Omniglot and Mini-Imagenet few-shot learning benchmarks, UMTRA outperforms every tested approach based on unsupervised learning of representations, while alternating for the best performance with the recent CACTUs algorithm. Compared to supervised model-agnostic meta-learning approaches, UMTRA trades off some classification accuracy for a vast decrease in the number of labeled data needed. For instance, on five-way one-shot classification on Omniglot, we retain 85% of the accuracy of the MAML algorithm while reducing the number of required labels from 24005 to 5.
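
The core of the approach is the synthetic task generation mentioned above. The snippet below is a minimal sketch of that idea, not the paper's implementation: it assumes the unlabeled data is a NumPy array of images, treats each randomly drawn image as its own pseudo-class, and uses a placeholder flip-plus-noise augmentation to build the query set; the function name generate_task and its parameters are illustrative.

```python
import numpy as np

def generate_task(unlabeled_images, n_way=5, k_query=1, rng=None):
    """Build one synthetic N-way, one-shot task from unlabeled images.

    Each of the N sampled images is treated as its own class
    (pseudo-labels 0..N-1); augmented copies form the query set.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample N images at random; with a large, diverse dataset they are
    # likely to come from N distinct (unknown) real classes.
    idx = rng.choice(len(unlabeled_images), size=n_way, replace=False)
    support_x = unlabeled_images[idx].astype(np.float64)
    support_y = np.arange(n_way)

    # Create query examples by augmenting each support image and reusing
    # its pseudo-label (horizontal flip + noise as a stand-in augmentation).
    query_x, query_y = [], []
    for label, img in enumerate(support_x):
        for _ in range(k_query):
            aug = img[:, ::-1] + rng.normal(0.0, 0.05, size=img.shape)
            query_x.append(aug)
            query_y.append(label)
    return support_x, support_y, np.stack(query_x), np.asarray(query_y)

# Example with a random stand-in for an unlabeled image collection.
images = np.random.rand(1000, 28, 28)
sx, sy, qx, qy = generate_task(images, n_way=5, k_query=1)
print(sx.shape, sy, qx.shape, qy)
```

Tasks constructed this way can then stand in for labeled meta-training tasks in a model-agnostic meta-learner such as MAML.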


Related research

04/29/2020 - Learning to Learn to Disambiguate: Meta-Learning for Few-Shot Word Sense Disambiguation
11/30/2020 - Revisiting Unsupervised Meta-Learning: Amplifying or Compensating for the Characteristics of Few-Shot Tasks
02/26/2019 - Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation
10/04/2018 - Unsupervised Learning via Meta-Learning
06/30/2021 - How to Train Your MAML to Excel in Few-Shot Classification
05/25/2021 - Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images
06/18/2020 - Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models
