Low-shot learning with large-scale diffusion

by Matthijs Douze, et al.

This paper considers the problem of inferring image labels for which only a few labelled examples are available at training time. This setup is often referred to as low-shot learning in the literature, where a standard approach is to re-train the last few layers of a convolutional neural network learned on separate classes. We consider a semi-supervised setting in which we exploit a large collection of unlabelled images to support label propagation. This is made possible by leveraging recent advances in large-scale similarity graph construction. We show that despite its conceptual simplicity, scaling up label propagation to hundreds of millions of images leads to state-of-the-art accuracy in the low-shot learning regime.
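The core mechanism the abstract refers to, diffusing labels from a few seeds over a similarity graph, can be illustrated with a small self-contained sketch. The snippet below builds a brute-force kNN graph and runs the classic normalized label-propagation iteration (F ← αSF + (1 − α)Y); the function name, parameters, and the brute-force graph construction are illustrative choices, not the paper's implementation, which relies on approximate large-scale graph construction.

```python
import numpy as np

def label_propagation(features, labels, n_neighbors=5, alpha=0.9, n_iter=30):
    """Diffuse labels over a kNN similarity graph (illustrative sketch).

    features: (n, d) array of image descriptors.
    labels:   (n,) int array, -1 for unlabelled points.
    Returns a predicted class for every point.
    """
    n = len(features)

    # Cosine similarity between all points. This O(n^2) step is where the
    # paper substitutes approximate large-scale graph construction.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # no self-edges

    # Sparse symmetric kNN affinity matrix W (negative similarities clipped).
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sims[i])[-n_neighbors:]
        W[i, nbrs] = np.maximum(sims[i, nbrs], 0)
    W = (W + W.T) / 2

    # Symmetrically normalized graph S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    dinv = 1.0 / np.sqrt(d)
    S = W * dinv[:, None] * dinv[None, :]

    # One-hot seed matrix Y for the labelled points.
    classes = np.unique(labels[labels >= 0])
    Y = np.zeros((n, len(classes)))
    for k, c in enumerate(classes):
        Y[labels == c, k] = 1.0

    # Iterate F <- alpha * S F + (1 - alpha) * Y toward the fixed point.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return classes[F.argmax(axis=1)]
```

With two well-separated clusters and a single labelled example in each, the iteration spreads each seed label across its cluster through the graph edges.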



DPGN: Distribution Propagation Graph Network for Few-shot Learning

We extend this idea further to explicitly model the distribution-level r...

Exploiting Unsupervised Inputs for Accurate Few-Shot Classification

In few-shot classification, the aim is to learn models able to discrimin...

Discriminative k-shot learning using probabilistic models

This paper introduces a probabilistic framework for k-shot image classif...

Few-Shot Learning with Graph Neural Networks

We propose to study the problem of few-shot learning with the prism of i...

Matching Networks for One Shot Learning

Learning from a few examples remains a key challenge in machine learning...

Wandering Within a World: Online Contextualized Few-Shot Learning

We aim to bridge the gap between typical human and machine-learning envi...

Local Propagation for Few-Shot Learning

The challenge in few-shot learning is that available data is not enough ...