Self-supervised Pre-training with Hard Examples Improves Visual Representations

12/25/2020
by   Chunyuan Li, et al.

Self-supervised pre-training (SSP) employs random image transformations to generate training data for visual representation learning. In this paper, we first present a modeling framework that unifies existing SSP methods as learning to predict pseudo-labels. We then propose new data augmentation methods that generate training examples whose pseudo-labels are harder to predict than those produced by random image transformations. Specifically, we use adversarial training and CutMix to create hard examples (HEXA) that serve as augmented views for MoCo-v2 and DeepCluster-v2, yielding two variants, HEXA_MoCo and HEXA_DCluster. In our experiments, we pre-train models on ImageNet and evaluate them on multiple public benchmarks. Both variants outperform their original counterparts and achieve a new state of the art on a wide range of tasks where only limited supervision is available for fine-tuning. These results confirm that hard examples are instrumental in improving the generalization of pre-trained models.
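The exact HEXA procedure is specified in the full paper; as a rough illustration of the CutMix half of the idea, below is a minimal sketch, assuming PyTorch, of how a patch from a partner image can be pasted into a randomly augmented view to make its pseudo-label harder to predict. The function names (`rand_bbox`, `cutmix_hard_view`) and all parameters here are illustrative assumptions, not the authors' code.

```python
# A minimal sketch (not the authors' HEXA implementation) of using CutMix
# to turn a batch of randomly augmented views into "harder" views for a
# MoCo-style contrastive pipeline. Names and defaults are assumptions.
import torch

def rand_bbox(h, w, lam):
    """Sample a CutMix box whose area fraction is roughly (1 - lam)."""
    cut_ratio = (1.0 - lam) ** 0.5
    cut_h, cut_w = int(h * cut_ratio), int(w * cut_ratio)
    cy = torch.randint(h, (1,)).item()
    cx = torch.randint(w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    return y1, y2, x1, x2

def cutmix_hard_view(views, alpha=1.0):
    """Paste a patch from a shuffled partner image into each view,
    producing examples whose pseudo-labels are harder to predict.

    views: (B, C, H, W) batch of already-augmented views.
    Returns the mixed batch, the partner permutation, and the
    effective mixing coefficient lam.
    """
    b, _, h, w = views.shape
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(b)
    y1, y2, x1, x2 = rand_bbox(h, w, lam)
    mixed = views.clone()
    mixed[:, :, y1:y2, x1:x2] = views[perm, :, y1:y2, x1:x2]
    # Recompute lam after the box is clipped to the image borders.
    lam = 1.0 - ((y2 - y1) * (x2 - x1)) / (h * w)
    return mixed, perm, lam
```

In a MoCo-v2-style training loop, the mixed batch would stand in for (or supplement) one of the two random views fed to the query encoder, with the contrastive target softened between each image's own pseudo-label and its CutMix partner's in proportion to `lam`, analogous to supervised CutMix. The paper's actual losses for HEXA_MoCo and HEXA_DCluster, and its adversarial-training variant, may differ from this sketch.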

Related research

Rethinking Self-Supervised Visual Representation Learning in Pre-training for 3D Human Pose and Shape Estimation (03/09/2023)
Recently, a few self-supervised representation learning (SSL) methods ha...

An Evaluation of Self-Supervised Pre-Training for Skin-Lesion Analysis (06/17/2021)
Self-supervised pre-training appears as an advantageous alternative to s...

LEMaRT: Label-Efficient Masked Region Transform for Image Harmonization (04/25/2023)
We present a simple yet effective self-supervised pre-training method fo...

UNFUSED: UNsupervised Finetuning Using SElf supervised Distillation (03/10/2023)
In this paper, we introduce UnFuSeD, a novel approach to leverage self-s...

Masked Visual Pre-training for Motor Control (03/11/2022)
This paper shows that self-supervised visual pre-training from real-worl...

Advancing Radiograph Representation Learning with Masked Record Modeling (01/30/2023)
Modern studies in radiograph representation learning rely on either self...

Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods (09/30/2022)
Self-supervised methods have achieved remarkable success in transfer lea...
