
On the Importance of Distractors for Few-Shot Classification

09/20/2021
by Rajshekhar Das, et al.
University of Illinois at Urbana-Champaign
Carnegie Mellon University

Few-shot classification aims to recognize the categories of a novel task from just a few (typically 1 to 5) labelled examples. An effective approach is to train a prior model on a large-sample base domain and then finetune it on the novel few-shot task to obtain generalizable representations. Task-specific finetuning, however, is prone to overfitting because so few training examples are available. To alleviate this, we propose a new finetuning approach based on contrastive learning that reuses unlabelled examples from the base domain as distractors. Unlike the unlabelled data used in prior work, distractors belong to classes that do not overlap with the novel categories. We demonstrate for the first time that including such distractors can significantly boost few-shot generalization. Our technical novelty lies in a stochastic pairing of examples sharing the same category in the few-shot task and a weighting term that controls the relative influence of task-specific negatives and distractors. Importantly, our finetuning objective is agnostic to distractor labels and hence applicable to various base-domain settings. Compared to state-of-the-art approaches, our method shows accuracy gains of up to 12% in cross-domain and up to 5% in unsupervised prior-learning settings.
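The abstract describes an InfoNCE-style contrastive objective whose denominator mixes task-specific negatives with base-domain distractors, balanced by a weighting term. The paper's exact formulation is not given here, so the sketch below is only a plausible reading of that idea: `lam` stands in for the weighting term, and the function and variable names are our own, not the authors'.

```python
import numpy as np

def normalize(x):
    """L2-normalize embeddings along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def distractor_contrastive_loss(anchor, positive, task_negs, distractors,
                                tau=0.1, lam=0.5):
    """Hypothetical InfoNCE-style loss for one anchor embedding.

    anchor, positive : 1-D normalized embeddings of a stochastically
                       paired same-class example from the few-shot task.
    task_negs        : (N, d) embeddings of other-class task examples.
    distractors      : (M, d) embeddings of unlabelled base-domain
                       examples whose classes do not overlap the task.
    lam              : weight controlling the relative influence of
                       distractors vs. task-specific negatives.
    """
    pos = np.exp(anchor @ positive / tau)
    neg_task = np.exp(anchor @ task_negs.T / tau).sum()
    neg_dist = np.exp(anchor @ distractors.T / tau).sum()
    # Distractors enlarge the pool of negatives without needing labels.
    return -np.log(pos / (pos + neg_task + lam * neg_dist))
```

Setting `lam = 0` recovers a plain task-only contrastive loss, which makes the role of the weighting term easy to probe in isolation.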


Related Research

Dense Classification and Implanting for Few-Shot Learning (03/12/2019)
Training deep neural networks from few examples is a highly challenging ...

Few-Shot Learning with Geometric Constraints (03/20/2020)
In this article, we consider the problem of few-shot learning for classi...

Few-Shot Object Detection in Unseen Domains (04/11/2022)
Few-shot object detection (FSOD) has thrived in recent years to learn no...

FLAT: Few-Shot Learning via Autoencoding Transformation Regularizers (12/29/2019)
One of the most significant challenges facing a few-shot learning task i...

CoNAL: Anticipating Outliers with Large Language Models (11/28/2022)
In many task settings, text classification models are likely to encounte...

Exploiting Unsupervised Inputs for Accurate Few-Shot Classification (01/27/2020)
In few-shot classification, the aim is to learn models able to discrimin...

From Labels to Priors in Capsule Endoscopy: A Prior Guided Approach for Improving Generalization with Few Labels (06/10/2022)
The lack of generalizability of deep learning approaches for the automat...
