Improving Fine-tuning of Self-supervised Models with Contrastive Initialization

07/30/2022
by Haolin Pan, et al.

Self-supervised learning (SSL) has achieved remarkable performance in pretraining models that can be further fine-tuned on downstream tasks. However, these self-supervised models may fail to capture meaningful semantic information, since images belonging to the same class are always treated as negative pairs in the contrastive loss. Consequently, images of the same class often lie far apart in the learned feature space, which inevitably hampers the fine-tuning process. To address this issue, we seek to provide a better initialization for self-supervised models by enriching their semantic information. To this end, we propose a Contrastive Initialization (COIN) method that breaks the standard fine-tuning pipeline by introducing an extra initialization stage before fine-tuning. Extensive experiments show that, with the enriched semantics, COIN significantly outperforms existing methods without introducing extra training cost and sets a new state of the art on multiple downstream tasks.
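The abstract does not spell out COIN's initialization objective, but the core idea it motivates, treating same-class images as positives rather than negatives so they cluster in feature space, can be illustrated with a supervised-contrastive-style loss. Below is a minimal PyTorch sketch; the function name `sup_con_loss`, the `temperature` value, the epoch split, and the two-stage training outline are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def sup_con_loss(features, labels, temperature=0.1):
    """Class-aware contrastive loss (illustrative, not the paper's code):
    embeddings that share a label are pulled together instead of being
    pushed apart as negatives.
    features: (N, D) encoder outputs; labels: (N,) class ids."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature        # (N, N) cosine similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    # Positives: other samples carrying the same label.
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    # Average log-likelihood of each anchor's positives.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    return loss.mean()

def coin_style_training(encoder, head, init_loader, ft_loader,
                        init_epochs=5, ft_epochs=50, lr=1e-3):
    """Two-stage pipeline sketched from the abstract: an initialization
    stage that enriches semantics, followed by standard fine-tuning."""
    opt = torch.optim.SGD(encoder.parameters(), lr=lr, momentum=0.9)
    for _ in range(init_epochs):                     # Stage 1: initialization
        for images, labels in init_loader:
            loss = sup_con_loss(encoder(images), labels)
            opt.zero_grad(); loss.backward(); opt.step()
    params = list(encoder.parameters()) + list(head.parameters())
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    for _ in range(ft_epochs):                       # Stage 2: fine-tuning
        for images, labels in ft_loader:
            loss = F.cross_entropy(head(encoder(images)), labels)
            opt.zero_grad(); loss.backward(); opt.step()
```

Since the abstract claims no extra training cost, the initialization epochs presumably come out of the same overall training budget rather than being added on top; the 5/50 split above is an arbitrary placeholder.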

Related Research

02/12/2021 · Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
Contrastive self-supervised learning (CSL) leverages unlabeled data to t...

12/14/2020 · COAD: Contrastive Pre-training with Adversarial Fine-tuning for Zero-shot Expert Linking
Expert finding, a popular service provided by many online websites such ...

04/22/2023 · Self-supervised Learning by View Synthesis
We present view-synthesis autoencoders (VSA) in this paper, which is a s...

07/20/2023 · Revisiting Fine-Tuning Strategies for Self-supervised Medical Imaging Analysis
Despite the rapid progress in self-supervised learning (SSL), end-to-end...

11/02/2022 · RegCLR: A Self-Supervised Framework for Tabular Representation Learning in the Wild
Recent advances in self-supervised learning (SSL) using large models to ...

11/29/2022 · On the power of foundation models
With infinitely many high-quality data points, infinite computational po...

04/10/2022 · DILEMMA: Self-Supervised Shape and Texture Learning with Transformers
There is a growing belief that deep neural networks with a shape bias ma...
