Partitioning Image Representation in Contrastive Learning

03/20/2022
by Hyunsub Lee, et al.

In contrastive learning on images, the anchor and positive samples are forced to have representations that are as close as possible. However, forcing the two samples to share the same representation can be misleading, because data augmentation makes the two samples different. In this paper, we introduce a new representation, the partitioned representation, which can learn both the common and the unique features of the anchor and positive samples in contrastive learning. The partitioned representation consists of two parts: a content part and a style part. The content part represents features common to the class, while the style part represents features unique to each sample, which can capture the effect of the data augmentation applied to it. We obtain the partitioned representation simply by decomposing the contrastive loss into two terms, one on each of the two separate parts. To evaluate the representation, we adopt two frameworks: a Variational AutoEncoder (VAE) to show the separability of content and style, and Bootstrap Your Own Latent (BYOL) to confirm generalization ability in classification. Based on the experiments, we show that our approach separates the two types of information in the VAE framework and outperforms conventional BYOL on linear separability and a few-shot learning task as downstream tasks.
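To make the loss decomposition concrete, the sketch below shows one plausible reading in a BYOL-like setup: each embedding is split into a content half and a style half, the content halves of the two augmented views are pulled together with the usual negative-cosine-similarity term, and a second term acts on the style halves. The split point (content_dim), the weighting (style_weight), and the particular form of the style term are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def partitioned_loss(z_anchor, z_positive, content_dim, style_weight=1.0):
    """Decompose a contrastive loss into a content term and a style term.

    z_anchor, z_positive: (batch, dim) embeddings of two augmented views.
    content_dim: size of the content part; the remainder is the style part.
    """
    # Partition each embedding into a content part and a style part.
    c_a, s_a = z_anchor[:, :content_dim], z_anchor[:, content_dim:]
    c_p, s_p = z_positive[:, :content_dim], z_positive[:, content_dim:]

    # Content term: pull the content parts of the two views together,
    # as in the standard BYOL negative-cosine-similarity objective.
    content_loss = 2 - 2 * F.cosine_similarity(c_a, c_p, dim=-1).mean()

    # Style term (assumed form): discourage the style parts from matching,
    # so each view can encode its own augmentation-specific information.
    style_loss = F.cosine_similarity(s_a, s_p, dim=-1).mean()

    return content_loss + style_weight * style_loss
```

Under this reading, only the content subspace is forced to be augmentation-invariant, while the style subspace remains free to represent what the augmentation changed.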
