Rethinking Rotation in Self-Supervised Contrastive Learning: Adaptive Positive or Negative Data Augmentation

10/23/2022
by Atsuyuki Miyai, et al.

Rotation is frequently listed as a candidate for data augmentation in contrastive learning but seldom provides satisfactory improvements. We argue that this is because the rotated image is always treated as either positive or negative. The semantics of an image can be rotation-invariant or rotation-variant, so whether a rotated image is treated as positive or negative should be determined by the content of the image. We therefore propose a novel augmentation strategy, adaptive Positive or Negative Data Augmentation (PNDA), in which an original image and its rotated counterpart form a positive pair if they are semantically close and a negative pair if they are semantically different. To achieve PNDA, we first determine whether rotation is positive or negative on an image-by-image basis in an unsupervised way. We then apply PNDA to contrastive learning frameworks. Our experiments show that PNDA improves the performance of contrastive learning. The code is available at <https://github.com/AtsuMiyai/rethinking_rotation>.
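For intuition, here is a minimal PyTorch sketch of how a contrastive objective could treat a rotated view as a per-image positive or negative, in the spirit of the abstract. This is an illustrative sketch, not the authors' implementation (see the linked repository for that): the function name pnda_info_nce, the rot_is_positive flag, and the multi-positive InfoNCE form are all assumptions.

```python
# Minimal sketch of a PNDA-style contrastive loss (assumed names, not the
# authors' code). Each image contributes an original view, a standard
# augmented view, and a rotated view; the rotated view counts as an extra
# positive or stays a negative, per image.
import torch
import torch.nn.functional as F

def pnda_info_nce(z, z_aug, z_rot, rot_is_positive, temperature=0.5):
    """InfoNCE-style loss with a per-image positive/negative rotated view.

    z, z_aug, z_rot: (N, D) embeddings of the original, augmented, and
        rotated views of the same N images.
    rot_is_positive: (N,) bool tensor from the unsupervised per-image
        decision (True -> rotation preserves the image's semantics).
    """
    z, z_aug, z_rot = (F.normalize(t, dim=1) for t in (z, z_aug, z_rot))
    n = z.size(0)

    sim_aug = z @ z_aug.t() / temperature   # (N, N) anchor vs. augmented
    sim_rot = z @ z_rot.t() / temperature   # (N, N) anchor vs. rotated

    # Denominator contains all augmented and all rotated views.
    logits = torch.cat([sim_aug, sim_rot], dim=1)                # (N, 2N)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    idx = torch.arange(n, device=z.device)
    loss_aug = -log_prob[idx, idx]        # standard positive pair
    loss_rot = -log_prob[idx, n + idx]    # own rotated view as positive

    # If rotation is positive for image i, average both positive terms;
    # otherwise its rotated view simply remains a negative in the
    # denominator and is pushed away from the anchor.
    loss = torch.where(rot_is_positive,
                       0.5 * (loss_aug + loss_rot),
                       loss_aug)
    return loss.mean()
```

Note that rotation-variant images need no extra term: leaving sim_rot in the shared denominator is exactly what treats those rotated views as negatives.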



Related research

03/25/2022 · Improving Contrastive Learning with Model Augmentation
The sequential recommendation aims at predicting the next items in user ...

02/09/2023 · Detecting Contextomized Quotes in News Headlines by Contrastive Learning
Quotes are critical for establishing credibility in news articles. A dir...

05/04/2022 · UCL-Dehaze: Towards Real-world Image Dehazing via Unsupervised Contrastive Learning
While the wisdom of training an image dehazing model on synthetic hazy d...

04/04/2023 · PartMix: Regularization Strategy to Learn Part Discovery for Visible-Infrared Person Re-identification
Modern data augmentation using a mixture-based technique can regularize ...

08/06/2021 · Improving Contrastive Learning by Visualizing Feature Transformation
Contrastive learning, which aims at minimizing the distance between posi...

09/01/2021 · Multi-Sample based Contrastive Loss for Top-k Recommendation
The top-k recommendation is a fundamental task in recommendation systems...

03/27/2022 · CaCo: Both Positive and Negative Samples are Directly Learnable via Cooperative-adversarial Contrastive Learning
As a representative self-supervised method, contrastive learning has ach...
