CaCo: Both Positive and Negative Samples are Directly Learnable via Cooperative-adversarial Contrastive Learning

03/27/2022
by Xiao Wang, et al.

As a representative self-supervised method, contrastive learning has achieved great success in the unsupervised training of representations. It trains an encoder to distinguish positive samples from negative ones given query anchors. These positive and negative samples play critical roles in defining the objective that learns a discriminative encoder, preventing it from learning trivial features. While existing methods choose these samples heuristically, we present a principled method in which both positive and negative samples are directly learnable end-to-end together with the encoder. We show that the positive and negative samples can be learned cooperatively and adversarially by minimizing and maximizing the contrastive loss, respectively. This yields cooperative positives and adversarial negatives with respect to the encoder, which are updated to continuously track the learned representation of the query anchors over mini-batches. The proposed method achieves 71.3% and 75.3% top-1 accuracy over 200 and 800 epochs, respectively, of pre-training a ResNet-50 backbone on ImageNet1K, without tricks such as multi-crop or stronger augmentations. With multi-crop, the result can be further boosted to 75.7%. The source code and pre-trained models are released at https://github.com/maple-research-lab/caco.
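The update rule described in the abstract, where positives are learned by gradient descent on the contrastive loss while negatives are learned by gradient ascent on the same loss, can be sketched in a few lines of PyTorch. The following is a minimal illustration under stated assumptions, not the authors' released implementation: the dictionary size K, the temperature tau, the toy encoder, and the random positive assignment idx are all hypothetical placeholders (in the paper, each anchor's positive tracks its own learned representation over mini-batches).

```python
import torch
import torch.nn.functional as F

# Hypothetical hyperparameters for illustration only.
dim, K, tau, B = 128, 4096, 0.1, 8
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, dim))

# Learnable dictionaries of positive and negative embeddings.
positives = torch.nn.Parameter(F.normalize(torch.randn(K, dim), dim=1))
negatives = torch.nn.Parameter(F.normalize(torch.randn(K, dim), dim=1))

opt_enc = torch.optim.SGD(encoder.parameters(), lr=0.03)
opt_pos = torch.optim.SGD([positives], lr=0.03)  # descent: cooperative positives
opt_neg = torch.optim.SGD([negatives], lr=0.03)  # sign flipped below: adversarial negatives

def contrastive_loss(q, pos, neg):
    # InfoNCE: one learnable positive per query vs. all learnable negatives.
    l_pos = (q * pos).sum(dim=1, keepdim=True) / tau  # (B, 1)
    l_neg = q @ neg.t() / tau                         # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1)
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)

x = torch.randn(B, 3, 32, 32)            # toy mini-batch of images
q = F.normalize(encoder(x), dim=1)       # query-anchor embeddings
idx = torch.randint(0, K, (B,))          # assumed anchor-to-positive assignment

loss = contrastive_loss(q, positives[idx], negatives)
opt_enc.zero_grad(); opt_pos.zero_grad(); opt_neg.zero_grad()
loss.backward()
negatives.grad.neg_()                    # maximize the loss w.r.t. negatives
opt_enc.step(); opt_pos.step(); opt_neg.step()

with torch.no_grad():                    # keep both dictionaries on the unit sphere
    positives.copy_(F.normalize(positives, dim=1))
    negatives.copy_(F.normalize(negatives, dim=1))
```

Note the design choice this sketch highlights: a single backward pass suffices for the minimax game, since flipping the sign of the negatives' gradient turns the shared descent step into ascent for them while the encoder and positives keep minimizing the same loss.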


Related research

- 10/27/2022 · Robust Data2vec: Noise-robust Speech Representation Learning for ASR by Combining Regression and Improved Contrastive Learning
- 10/23/2022 · Rethinking Rotation in Self-Supervised Contrastive Learning: Adaptive Positive or Negative Data Augmentation
- 05/04/2022 · UCL-Dehaze: Towards Real-world Image Dehazing via Unsupervised Contrastive Learning
- 07/08/2023 · End-to-End Supervised Multilabel Contrastive Learning
- 08/06/2021 · Improving Contrastive Learning by Visualizing Feature Transformation
- 07/26/2023 · Exploring the Interactions between Target Positive and Negative Information for Acoustic Echo Cancellation
- 04/21/2023 · Learn What NOT to Learn: Towards Generative Safety in Chatbots
