MixSiam: A Mixture-based Approach to Self-supervised Representation Learning

11/04/2021
by Xiaoyang Guo, et al.

Recently, contrastive learning has shown significant progress in learning visual representations from unlabeled data. The core idea is to train the backbone to be invariant to different augmentations of an instance. While most methods only maximize the feature similarity between two augmented views, we further generate more challenging training samples and force the model to keep predicting a discriminative representation on these hard samples. In this paper, we propose MixSiam, a mixture-based approach built upon the traditional Siamese network. On the one hand, we feed two augmented views of an instance to the backbone and obtain a discriminative representation by taking the element-wise maximum of the two features. On the other hand, we take the mixture of these augmented views as input and expect the model's prediction to be close to the discriminative representation. In this way, the model has access to more variants of each instance while being trained to predict an invariant discriminative representation for all of them, so the learned model is more robust than those of previous contrastive learning methods. Extensive experiments on large-scale datasets show that MixSiam steadily improves over the baseline and achieves results competitive with state-of-the-art methods. Our code will be released soon.
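The training signal described in the abstract can be sketched in a few lines: encode two augmented views, take their element-wise maximum as the discriminative target, encode the mixed input, and pull the prediction toward the target. The following is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the linear `encode` stands in for the real backbone, the fixed mixing coefficient `lam` and the negative-cosine loss are illustrative choices, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))  # toy linear "backbone" (stand-in for a real encoder)

def encode(x):
    # Hypothetical feature extractor: linear map + tanh nonlinearity
    return np.tanh(x @ W)

def mixsiam_loss(x1, x2, lam=0.5):
    # Features of the two augmented views of one instance
    f1, f2 = encode(x1), encode(x2)
    # Discriminative target: element-wise maximum of the two features
    target = np.maximum(f1, f2)
    # Harder training sample: mixture of the two augmented views
    x_mix = lam * x1 + (1.0 - lam) * x2
    pred = encode(x_mix)
    # Negative cosine similarity between the prediction on the mixed
    # input and the discriminative target (range [0, 2]; 0 = aligned)
    cos = pred @ target / (np.linalg.norm(pred) * np.linalg.norm(target))
    return 1.0 - cos

x1 = rng.normal(size=8)
x2 = x1 + 0.1 * rng.normal(size=8)  # mildly perturbed second "view"
loss = mixsiam_loss(x1, x2)
```

In the actual method the target branch would be treated as a stop-gradient signal, as in standard Siamese self-supervised setups; the sketch omits that since it uses no autograd.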

