Distortion-Disentangled Contrastive Learning

03/09/2023
by Jinfeng Wang et al.

Self-supervised learning is well known for its remarkable performance in representation learning and various downstream computer vision tasks. Recently, Positive-pair-Only Contrastive Learning (POCL) has achieved reliable performance without the need to construct positive-negative training sets, which reduces memory requirements by lessening the dependency on batch size. POCL methods typically use a single loss function to extract the distortion-invariant representation (DIR), which describes the proximity of positive-pair representations affected by different distortions. This loss function implicitly enables the model to filter out or ignore the distortion-variant representation (DVR). However, existing POCL methods do not explicitly enforce the disentanglement and exploitation of the DVR, even though it carries valuable information. In addition, these methods have been observed to be sensitive to augmentation strategies. To address these limitations, we propose a novel POCL framework named Distortion-Disentangled Contrastive Learning (DDCL) and a Distortion-Disentangled Loss (DDL). Our approach is the first to explicitly disentangle and exploit the DVR inside the model and feature stream, improving overall representation utilization efficiency, robustness, and representation ability. Experiments demonstrate the superiority of our framework over Barlow Twins and SimSiam in terms of convergence, representation quality, and robustness on several benchmark datasets.
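To make the idea concrete, here is a minimal PyTorch-style sketch of a positive-pair-only loss with an explicit distortion-variant term. The feature split point, the cosine-similarity terms, and the equal weighting are illustrative assumptions for exposition; the paper's actual DDL formulation and architecture may differ.

```python
import torch
import torch.nn.functional as F

def ddcl_style_loss(z1, z2, split=128):
    """Hypothetical sketch: split each representation into a
    distortion-invariant part (DIR) and a distortion-variant part (DVR),
    align the DIRs of the two augmented views, and penalize agreement
    between their DVRs. Not the paper's exact loss."""
    dir1, dvr1 = z1[:, :split], z1[:, split:]
    dir2, dvr2 = z2[:, :split], z2[:, split:]

    # Invariance term: positive-pair DIRs should agree across distortions,
    # as in positive-pair-only methods such as SimSiam and Barlow Twins.
    invariance = -F.cosine_similarity(dir1, dir2, dim=-1).mean()

    # Disentanglement term: the DVR should capture what differs between
    # the two distorted views, so DVR agreement is penalized, not rewarded.
    disentangle = F.cosine_similarity(dvr1, dvr2, dim=-1).mean()

    return invariance + disentangle

if __name__ == "__main__":
    # Two batches of projected features from two augmented views.
    z1, z2 = torch.randn(32, 256), torch.randn(32, 256)
    print(ddcl_style_loss(z1, z2).item())
```

The key design point this sketch illustrates is that the variant subspace is given its own explicit objective instead of being implicitly discarded by the invariance loss alone.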

Related research

09/17/2020
AAG: Self-Supervised Representation Learning by Auxiliary Augmentation with GNT-Xent Loss
Self-supervised representation learning is an emerging research topic fo...

11/24/2022
Pose-disentangled Contrastive Learning for Self-supervised Facial Representation
Self-supervised facial representation has recently attracted increasing ...

09/16/2022
Adversarial Cross-View Disentangled Graph Contrastive Learning
Graph contrastive learning (GCL) is prevalent to tackle the supervision ...

07/04/2022
Positive-Negative Equal Contrastive Loss for Semantic Segmentation
The contextual information is critical for various computer vision tasks...

10/17/2022
Unifying Graph Contrastive Learning with Flexible Contextual Scopes
Graph contrastive learning (GCL) has recently emerged as an effective le...

08/01/2015
Towards Distortion-Predictable Embedding of Neural Networks
Current research in Computer Vision has shown that Convolutional Neural ...

10/04/2022
Contrastive Learning Can Find An Optimal Basis For Approximately View-Invariant Functions
Contrastive learning is a powerful framework for learning self-supervise...
