Hallucination Improves the Performance of Unsupervised Visual Representation Learning

07/22/2023
by Jing Wu, et al.

Contrastive learning models built on Siamese structures have demonstrated remarkable performance in self-supervised learning. Their success relies on two conditions: a sufficient number of positive pairs and adequate variation between them. When these conditions are not met, such frameworks lack semantic contrast and become prone to overfitting. To address both issues, we propose the Hallucinator, which efficiently generates additional positive samples for further contrast. The Hallucinator is differentiable and creates new data in the feature space; it is therefore optimized directly with the pre-training task and introduces nearly negligible computation. Moreover, we reduce the mutual information of hallucinated pairs and smooth them through non-linear operations. This process prevents the contrastive learning model from becoming over-confident during training and yields more transformation-invariant feature embeddings. Remarkably, we empirically show that the proposed Hallucinator generalizes well to various contrastive learning models, including MoCo v1 & v2, SimCLR, and SimSiam. Under the linear classification protocol, it achieves a stable accuracy gain ranging from 0.3% to 3.0%, and consistent improvements are also observed when transferring the pre-trained encoders to downstream tasks, including object detection and segmentation.
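For intuition, the sketch below shows one plausible way a feature-space hallucination module could slot into a SimCLR-style training step. It is not the paper's implementation: the extrapolation rule, the Tanh smoothing head, and names such as Hallucinator, alpha, encoder, and criterion are illustrative assumptions only.

```python
# Hypothetical sketch of feature-space hallucination for contrastive learning.
# All design choices below (mixing rule, smoothing head, hyper-parameters) are
# assumptions made for illustration, not the authors' published method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Hallucinator(nn.Module):
    """Generates an extra positive embedding from two view embeddings.

    The module is differentiable, so its few parameters are trained jointly
    with the usual contrastive objective and add almost no extra compute.
    """

    def __init__(self, dim: int, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha                    # assumed extrapolation strength
        self.smooth = nn.Sequential(          # assumed non-linear smoothing head
            nn.Linear(dim, dim),
            nn.Tanh(),                        # bounded output discourages over-confident logits
        )

    def forward(self, z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
        # Push z1 away from z2 in feature space so the hallucinated positive
        # shares less mutual information with the view it is contrasted against.
        lam = torch.rand(z1.size(0), 1, device=z1.device) * self.alpha
        z_hal = z1 + lam * (z1 - z2)
        # Non-linear smoothing of the hallucinated embedding.
        z_hal = self.smooth(z_hal)
        return F.normalize(z_hal, dim=1)


# Usage sketch inside a SimCLR-style step (encoder, aug1/aug2, and criterion
# are placeholders for the usual backbone, augmentations, and InfoNCE loss):
# z1, z2 = encoder(aug1(x)), encoder(aug2(x))
# z_hal = hallucinator(z1, z2)        # extra positive paired with z2
# loss = criterion(z1, z2) + criterion(z_hal, z2)
```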

Related research

02/07/2022 · Crafting Better Contrastive Views for Siamese Representation Learning
Recent self-supervised contrastive learning methods greatly benefit from...

03/14/2022 · Rethinking Minimal Sufficient Representation in Contrastive Learning
Contrastive learning between different views of the data achieves outsta...

09/22/2022 · An Information Minimization Based Contrastive Learning Model for Unsupervised Sentence Embeddings Learning
Unsupervised sentence embeddings learning has been recently dominated by...

03/13/2023 · Molecular Property Prediction by Semantic-invariant Contrastive Learning
Contrastive learning have been widely used as pretext tasks for self-sup...

08/07/2023 · Feature-Suppressed Contrast for Self-Supervised Food Pre-training
Most previous approaches for analyzing food images have relied on extens...

11/15/2022 · Masked Reconstruction Contrastive Learning with Information Bottleneck Principle
Contrastive learning (CL) has shown great power in self-supervised learn...

08/06/2021 · Improving Contrastive Learning by Visualizing Feature Transformation
Contrastive learning, which aims at minimizing the distance between posi...
