Self-Supervised Learning For Few-Shot Image Classification

11/14/2019
by Da Chen, et al.

Few-shot image classification aims to classify unseen classes from only a limited number of labeled samples. Recent works benefit from meta-learning over episodic tasks and can quickly adapt from training classes to testing classes. Because each task contains so few samples, the initial embedding network for meta-learning becomes an essential component and can largely affect performance in practice. To this end, many pre-training methods have been proposed, but most of them are trained in a supervised way and offer limited transferability to unseen classes. In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which learns from the data itself and provides slow and robust representations for downstream tasks. We evaluate our work through extensive comparisons with previous baseline methods on two few-shot classification datasets (i.e., MiniImageNet and CUB). The proposed method achieves significantly better performance, improving 1-shot and 5-shot accuracy by nearly 3% and 4% on MiniImageNet, and by nearly 9% and 3% on CUB. Moreover, the proposed method gains further improvements of (15%, 13%) on MiniImageNet and (15%, 8%) on CUB when pre-trained with more unlabeled data. Our code will be available at https://github.com/phecy/SSL-FEW-SHOT.
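
The abstract describes a two-stage pipeline: pre-train the embedding network with self-supervised learning, then use it as the feature extractor for episodic few-shot tasks. As a minimal, hypothetical PyTorch sketch (not the authors' exact implementation), a frozen SSL-pretrained encoder could be plugged into a prototypical-network-style episodic classifier as follows; the function and variable names are illustrative assumptions:

    import torch

    def episodic_few_shot_predict(embed_net, support_x, support_y, query_x, n_way):
        """Nearest-prototype classification in the embedding space of a frozen,
        SSL-pretrained encoder (a common few-shot head; the paper's actual
        meta-learner may differ)."""
        with torch.no_grad():
            support_z = embed_net(support_x)   # [n_way * k_shot, D] support embeddings
            query_z = embed_net(query_x)       # [n_query, D] query embeddings
        # Class prototype = mean embedding of the labeled support samples per class.
        prototypes = torch.stack(
            [support_z[support_y == c].mean(dim=0) for c in range(n_way)]
        )                                      # [n_way, D]
        # Score each query by negative squared Euclidean distance to each prototype.
        logits = -torch.cdist(query_z, prototypes) ** 2   # [n_query, n_way]
        return logits.argmax(dim=1)            # predicted class index per query

In a 5-way 1-shot MiniImageNet episode, for example, support_x would hold five labeled images (one per class) and query_x the unlabeled images to classify.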
