
Self-training for Few-shot Transfer Across Extreme Task Differences

by Cheng Perng Phoo, et al.

Most few-shot learning techniques are pre-trained on a large, labeled "base dataset". In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray images), one must resort to pre-training in a different "source" problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In this paper, we present a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain. We show that this improves one-shot performance on the target domain by 2.9 points on average on a challenging benchmark with multiple domains.
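The core idea the abstract describes — use a model trained on the labeled source data to pseudo-label unlabeled target data, then retrain on both — can be sketched in miniature. The paper's actual method works with deep network representations; the toy below substitutes a nearest-centroid classifier on synthetic 2-D blobs purely to illustrate the self-training loop, and all data and function names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled "source" data: two well-separated Gaussian blobs
# (stand-in for the large labeled base dataset).
src_x = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
src_y = np.array([0] * 50 + [1] * 50)

# Unlabeled "target" data from shifted blobs (stand-in for the novel domain).
tgt_x = np.vstack([rng.normal(-1.2, 0.5, (50, 2)), rng.normal(1.2, 0.5, (50, 2))])

def fit_centroids(x, y):
    # Minimal "model": one mean vector per class.
    return np.stack([x[y == c].mean(axis=0) for c in np.unique(y)])

def predict(centroids, x):
    # Assign each point to its nearest class centroid.
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# 1) Teacher fit on the labeled source data only.
teacher = fit_centroids(src_x, src_y)

# 2) Teacher pseudo-labels the unlabeled target data.
pseudo_y = predict(teacher, tgt_x)

# 3) Student fit on source labels plus target pseudo-labels,
#    pulling the representation (here, the centroids) toward the target domain.
student = fit_centroids(np.vstack([src_x, tgt_x]),
                        np.concatenate([src_y, pseudo_y]))
```

In this toy setting the student's centroids land between the source and target class means, which is the intended effect: the representation adapts toward the target domain without any target labels.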




Understanding Cross-Domain Few-Shot Learning: An Experimental Study

Cross-domain few-shot learning has drawn increasing attention for handling...

ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning

Cross-domain few-shot learning (CD-FSL), where there are few target samples...

Effect of large-scale pre-training on full and few-shot transfer learning for natural and medical images

Transfer learning aims to exploit pre-trained models for more efficient ...

Adapted Deep Embeddings: A Synthesis of Methods for k-Shot Inductive Transfer Learning

The focus in machine learning has branched beyond training classifiers o...

Autoencoder Based Sample Selection for Self-Taught Learning

Self-taught learning is a technique that uses a large number of unlabeled...

Ranking Distance Calibration for Cross-Domain Few-Shot Learning

Recent progress in few-shot learning promotes a more realistic cross-domain...

CUPID: Adaptive Curation of Pre-training Data for Video-and-Language Representation Learning

This work concerns video-language pre-training and representation learning...