Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning

02/12/2021
by Yifan Zhang, et al.

Contrastive self-supervised learning (CSL) leverages unlabeled data to train models that produce instance-discriminative visual representations uniformly scattered in the feature space. In deployment, the common practice is to directly fine-tune these models with the cross-entropy loss, which may not be an optimal strategy. Although cross-entropy tends to separate inter-class features, the resulting models still have limited ability to reduce the intra-class feature scattering inherited from pre-training, and thus may suffer from unsatisfactory performance on downstream tasks. In this paper, we investigate whether applying contrastive learning during fine-tuning brings further benefits, and analytically find that optimizing the supervised contrastive loss aids both class-discriminative representation learning and model optimization during fine-tuning. Inspired by these findings, we propose Contrast-regularized tuning (Core-tuning), a novel approach for fine-tuning contrastive self-supervised visual models. Beyond simply adding the contrastive loss to the fine-tuning objective, Core-tuning generates hard sample pairs for more effective contrastive learning through a novel feature mixup strategy, and improves model generalizability by smoothing the decision boundary with the mixed samples. Extensive experiments on image classification and semantic segmentation verify the effectiveness of Core-tuning.
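To make the two ingredients named in the abstract concrete, the snippet below gives a minimal PyTorch sketch of (i) a supervised contrastive loss over L2-normalized features and (ii) cross-class feature mixup that synthesizes harder samples near the decision boundary. This is not the authors' Core-tuning implementation: the function names, the temperature, and the Beta-distributed mixing coefficient are illustrative assumptions, and the paper's actual hard-pair generation and boundary-smoothing terms are more involved.

```python
# Minimal sketch, assuming a batch of backbone features and integer class labels.
# Not the official Core-tuning code; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def sup_con_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: pull same-class features together and push
    different-class features apart on the unit hypersphere."""
    z = F.normalize(features, dim=1)                      # (N, d) unit-norm features
    sim = z @ z.t() / temperature                         # (N, N) scaled similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))       # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # mean log-probability of positives per anchor; skip anchors with no positive
    # (assumes at least one class appears twice in the batch)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    valid = pos_mask.any(dim=1)
    return per_anchor[valid].mean()


def mix_hard_pairs(features, labels, alpha=1.0):
    """Toy cross-class feature mixup: interpolate each feature with one from a
    different class, yielding harder samples near the class boundary."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(features.size(0), device=features.device)
    cross = labels != labels[perm]                        # keep only cross-class pairs
    mixed = lam * features[cross] + (1.0 - lam) * features[perm][cross]
    return mixed, labels[cross], labels[perm][cross], lam
```

In fine-tuning, such a contrastive term would typically be weighted and added to the standard cross-entropy objective, e.g. F.cross_entropy(logits, labels) + eta * sup_con_loss(feats, labels) with eta a tuning hyperparameter; this weighting scheme is an assumption here rather than the paper's exact formulation.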


Related research

07/30/2022 · Improving Fine-tuning of Self-supervised Models with Contrastive Initialization
Self-supervised learning (SSL) has achieved remarkable performance in pr...

05/29/2021 · Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning
Detecting Out-of-Domain (OOD) or unknown intents from user queries is es...

05/27/2022 · Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation
Masked image modeling (MIM) learns representations with remarkably good ...

04/29/2021 · Hyperspherically Regularized Networks for BYOL Improves Feature Uniformity and Separability
Bootstrap Your Own Latent (BYOL) introduced an approach to self-supervis...

10/10/2021 · Feature Imitating Networks
In this paper, we introduce a novel approach to neural learning: the Fea...

09/23/2022 · Whodunit? Learning to Contrast for Authorship Attribution
Authorship attribution is the task of identifying the author of a given ...

12/14/2020 · COAD: Contrastive Pre-training with Adversarial Fine-tuning for Zero-shot Expert Linking
Expert finding, a popular service provided by many online websites such ...
