CIPER: Combining Invariant and Equivariant Representations Using Contrastive and Predictive Learning

02/05/2023
by Xia Xu, et al.

Self-supervised representation learning (SSRL) methods have shown great success in computer vision. In recent studies, augmentation-based contrastive learning methods have been proposed for learning representations that are invariant or equivariant to pre-defined data augmentation operations. However, invariant or equivariant features favor only specific downstream tasks, depending on the augmentations chosen, and may perform poorly when a downstream task requires the counterpart of those features (e.g., when the task is to recognize hand-written digits but the model has learned to be invariant to in-plane image rotations, rendering it incapable of distinguishing "9" from "6"). This work introduces Contrastive Invariant and Predictive Equivariant Representation learning (CIPER). CIPER combines invariant and equivariant learning objectives using one shared encoder with two different output heads. One head is a projection head with a state-of-the-art contrastive objective that encourages invariance to augmentations; the other is a prediction head that estimates the augmentation parameters, capturing equivariant features. Both heads are discarded after training, and only the encoder is used for downstream tasks. We evaluate our method on static image tasks and time-augmented image datasets. Our results show that CIPER outperforms a baseline contrastive method on various tasks, especially when the downstream task requires encoding augmentation-related information.
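To make the two-head design concrete, below is a minimal PyTorch sketch of the idea: a shared encoder, a projection head trained with a SimCLR-style NT-Xent contrastive loss (invariance), and a prediction head regressing the augmentation parameters (equivariance). All names, dimensions, the choice of relative-parameter regression, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a CIPER-style two-head model; names and details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CIPERModel(nn.Module):
    def __init__(self, encoder, feat_dim=512, proj_dim=128, n_aug_params=3):
        super().__init__()
        self.encoder = encoder  # shared backbone, kept for downstream tasks
        # Projection head -> contrastive objective -> invariant features
        self.proj_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim))
        # Prediction head -> augmentation-parameter regression -> equivariant features
        self.pred_head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, n_aug_params))

    def forward(self, x1, x2):
        h1, h2 = self.encoder(x1), self.encoder(x2)
        z1 = F.normalize(self.proj_head(h1), dim=1)
        z2 = F.normalize(self.proj_head(h2), dim=1)
        # Predict augmentation parameters relating the two views
        # (whether the paper uses relative or per-view parameters is an assumption).
        p = self.pred_head(torch.cat([h1, h2], dim=1))
        return z1, z2, p

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss over a batch of positive pairs."""
    z = torch.cat([z1, z2], dim=0)                  # (2N, d)
    sim = z @ z.t() / tau                           # cosine similarities / temperature
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))           # exclude self-similarity
    # Positive for row i is row i+n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def ciper_loss(z1, z2, pred_params, true_params, lam=1.0):
    # Invariance (contrastive) term + equivariance (regression) term;
    # lam is a hypothetical weighting hyperparameter.
    return nt_xent(z1, z2) + lam * F.mse_loss(pred_params, true_params)

# Example usage (hypothetical): a ResNet-18 backbone with its classifier removed.
# backbone = torchvision.models.resnet18()
# backbone.fc = nn.Identity()
# model = CIPERModel(backbone, feat_dim=512)
```

After training with such a joint loss, both heads would be dropped and only the encoder kept, matching the procedure described in the abstract.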
