Robust Contrastive Learning Using Negative Samples with Diminished Semantics

10/27/2021
by   Songwei Ge, et al.

Unsupervised learning has recently made exceptional progress thanks to the development of more effective contrastive learning methods. However, CNNs are prone to depend on low-level features that humans deem non-semantic, and this dependency has been conjectured to induce a lack of robustness to image perturbations and domain shift. In this paper, we show that contrastive learning can learn more robust representations, with less dependence on such features, by generating carefully designed negative samples. Contrastive learning constructs positive pairs that preserve semantic information while perturbing superficial features of the training images. Analogously, we propose to generate negative samples in the reverse way, preserving only the superficial features and discarding the semantic ones. We develop two methods for generating such negatives, texture-based and patch-based augmentations. Models trained with these samples generalize better, especially under out-of-domain settings. We also analyze our method and the generated texture-based samples, showing that texture features are indispensable for classifying particular ImageNet classes, especially finer-grained ones, and that models favor texture and shape features to different degrees under different test settings. Our code, trained models, and ImageNet-Texture dataset can be found at https://github.com/SongweiGe/Contrastive-Learning-with-Non-Semantic-Negatives.
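The paper's actual augmentations differ in detail (see the repository above), but the patch-based idea can be illustrated with a minimal sketch: splitting an image into non-overlapping patches and shuffling them destroys the global object layout (the semantics) while keeping local texture statistics intact, yielding a "non-semantic negative". The `patch_shuffle` function below is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

def patch_shuffle(image: np.ndarray, patch_size: int, seed=None) -> np.ndarray:
    """Split an H x W x C image into non-overlapping patches, shuffle them,
    and reassemble. Local texture inside each patch survives, but the global
    object layout (the semantic content) is destroyed."""
    rng = np.random.default_rng(seed)
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0, "patch size must divide image size"
    gh, gw = h // patch_size, w // patch_size
    # Rearrange into a (gh*gw, p, p, c) stack of patches.
    patches = (image.reshape(gh, patch_size, gw, patch_size, c)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(gh * gw, patch_size, patch_size, c))
    shuffled = patches[rng.permutation(gh * gw)]
    # Reassemble the shuffled patches into an image of the original shape.
    return (shuffled.reshape(gh, gw, patch_size, patch_size, c)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(h, w, c))

img = np.arange(8 * 8 * 3, dtype=np.float32).reshape(8, 8, 3)
neg = patch_shuffle(img, patch_size=4, seed=0)
# The pixel multiset (texture statistics) is preserved; the layout is not.
assert np.array_equal(np.sort(img.ravel()), np.sort(neg.ravel()))
```

In training, such a shuffled image would be appended to the negative set of the contrastive (e.g. InfoNCE) loss for its source image, penalizing representations that match the original purely on shared texture.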


