Few-Shot Classification with Contrastive Learning

09/17/2022
by Zhanyuan Yang, et al.

A two-stage training paradigm, consisting of sequential pre-training and meta-training stages, is widely used in current few-shot learning (FSL) research. Many of these methods use self-supervised learning and contrastive learning to achieve new state-of-the-art results. However, the potential of contrastive learning in both stages of the FSL training paradigm is still not fully exploited. In this paper, we propose a novel contrastive learning-based framework that seamlessly integrates contrastive learning into both stages to improve few-shot classification performance. In the pre-training stage, we propose a self-supervised contrastive loss in two forms, feature vector vs. feature map and feature map vs. feature map, which uses both global and local information to learn good initial representations. In the meta-training stage, we propose a cross-view episodic training mechanism that performs nearest centroid classification on two different views of the same episode, together with a distance-scaled contrastive loss defined over them. These two strategies force the model to overcome the bias between views and promote the transferability of the learned representations. Extensive experiments on three benchmark datasets demonstrate that our method achieves competitive results.
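The paper does not include implementation details in the abstract, but the cross-view episodic idea can be illustrated with a minimal sketch: nearest centroid (prototype) classification where queries from one augmented view are classified against class centroids computed from the other view, and the two directions are averaged. The function names, the temperature parameter tau, and the plain softmax cross-entropy objective here are illustrative assumptions, not the authors' exact distance-scaled contrastive loss.

```python
import numpy as np

def nearest_centroid_logits(support, support_labels, query, n_way, tau=1.0):
    # Class centroids (prototypes) from the support set; logits are
    # negative squared Euclidean distances, scaled by temperature tau.
    prototypes = np.stack([
        support[support_labels == c].mean(axis=0) for c in range(n_way)
    ])
    d2 = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return -d2 / tau

def cross_view_loss(feats_a, feats_b, support_idx, query_idx, labels, n_way):
    # Illustrative cross-view episodic objective: classify view-A queries
    # against view-B centroids and vice versa, averaging both directions.
    def ce(logits, y):
        logits = logits - logits.max(axis=1, keepdims=True)
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(y)), y].mean()

    logits_a = nearest_centroid_logits(feats_b[support_idx], labels[support_idx],
                                       feats_a[query_idx], n_way)
    logits_b = nearest_centroid_logits(feats_a[support_idx], labels[support_idx],
                                       feats_b[query_idx], n_way)
    y = labels[query_idx]
    return 0.5 * (ce(logits_a, y) + ce(logits_b, y))
```

Because centroids and queries come from different views, the model is penalized for view-specific features, which is the intuition behind the claim that the mechanism overcomes bias between views.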

Related research:

- Few-Shot Image Classification via Contrastive Self-Supervised Learning (08/23/2020)
- Trainable Class Prototypes for Few-Shot Learning (06/21/2021)
- Self-Supervised Video Representation Learning with Meta-Contrastive Network (08/19/2021)
- Unbiased and Efficient Self-Supervised Incremental Contrastive Learning (01/28/2023)
- PCLNet: A Practical Way for Unsupervised Deep PolSAR Representations and Few-Shot Classification (06/27/2020)
- HCL-TAT: A Hybrid Contrastive Learning Method for Few-shot Event Detection with Task-Adaptive Threshold (10/17/2022)
- SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding (09/14/2022)
