Low-shot learning with large-scale diffusion

06/07/2017
by   Matthijs Douze, et al.

This paper considers the problem of inferring image labels for which only a few labelled examples are available at training time. This setup is often referred to as low-shot learning in the literature, where a standard approach is to re-train the last few layers of a convolutional neural network learned on separate classes. We consider a semi-supervised setting in which we exploit a large collection of unlabelled images to support label propagation. This is made possible by leveraging recent advances in large-scale similarity graph construction. We show that despite its conceptual simplicity, scaling up label propagation to hundreds of millions of images leads to state-of-the-art accuracy in the low-shot learning regime.
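The approach described above, propagating labels from a few seeds over a similarity graph, can be sketched with a standard diffusion iteration. This is a minimal illustrative sketch of graph-based label propagation, not the authors' implementation; the function name, the symmetric normalization, and the `alpha` mixing parameter are assumptions chosen for clarity.

```python
import numpy as np

def diffuse_labels(W, Y0, alpha=0.75, n_iter=20):
    """Propagate seed labels over a similarity graph by diffusion.

    W     : (n, n) symmetric non-negative affinity matrix (e.g. a kNN graph)
    Y0    : (n, c) one-hot labels for the few labelled seeds, zeros elsewhere
    alpha : weight balancing diffused neighbour labels against the seeds
    """
    # Symmetrically normalize the graph: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d[d == 0] = 1.0  # guard against isolated nodes
    d_inv_sqrt = 1.0 / np.sqrt(d)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    Y = Y0.copy()
    for _ in range(n_iter):
        # Mix labels diffused from neighbours with the original seed labels
        Y = alpha * (S @ Y) + (1 - alpha) * Y0
    return Y.argmax(axis=1)  # predicted class per image
```

At the scale the paper targets, `W` would be a sparse kNN graph built with an approximate nearest-neighbour index rather than a dense matrix, but the iteration is the same.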


Related research

03/31/2020 · DPGN: Distribution Propagation Graph Network for Few-shot Learning
We extend this idea further to explicitly model the distribution-level r...

01/27/2020 · Exploiting Unsupervised Inputs for Accurate Few-Shot Classification
In few-shot classification, the aim is to learn models able to discrimin...

06/01/2017 · Discriminative k-shot learning using probabilistic models
This paper introduces a probabilistic framework for k-shot image classif...

11/10/2017 · Few-Shot Learning with Graph Neural Networks
We propose to study the problem of few-shot learning with the prism of i...

06/13/2016 · Matching Networks for One Shot Learning
Learning from a few examples remains a key challenge in machine learning...

07/09/2020 · Wandering Within a World: Online Contextualized Few-Shot Learning
We aim to bridge the gap between typical human and machine-learning envi...

01/05/2021 · Local Propagation for Few-Shot Learning
The challenge in few-shot learning is that available data is not enough ...