
CIPER: Combining Invariant and Equivariant Representations Using Contrastive and Predictive Learning

by   Xia Xu, et al.
IG Farben Haus

Self-supervised representation learning (SSRL) methods have shown great success in computer vision. In recent studies, augmentation-based contrastive learning methods have been proposed for learning representations that are invariant or equivariant to pre-defined data augmentation operations. However, invariant or equivariant features favor only specific downstream tasks depending on the augmentations chosen. They may result in poor performance when a downstream task requires the counterpart of those features (e.g., when the task is to recognize hand-written digits while the model learns to be invariant to in-plane image rotations rendering it incapable of distinguishing "9" from "6"). This work introduces Contrastive Invariant and Predictive Equivariant Representation learning (CIPER). CIPER comprises both invariant and equivariant learning objectives using one shared encoder and two different output heads on top of the encoder. One output head is a projection head with a state-of-the-art contrastive objective to encourage invariance to augmentations. The other is a prediction head estimating the augmentation parameters, capturing equivariant features. Both heads are discarded after training and only the encoder is used for downstream tasks. We evaluate our method on static image tasks and time-augmented image datasets. Our results show that CIPER outperforms a baseline contrastive method on various tasks, especially when the downstream task requires the encoding of augmentation-related information.
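The two-head design described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy stand-in (the paper does not publish this code): a shared encoder feeds a projection head trained with an NT-Xent-style contrastive objective (invariance) and a prediction head trained to regress the augmentation parameters (equivariance); the two losses are summed. All function and variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Shared encoder (a single nonlinear layer as a toy stand-in)."""
    return np.tanh(x @ W)

def l2_normalize(z, eps=1e-9):
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)

def contrastive_invariance_loss(z1, z2, temperature=0.1):
    """NT-Xent-style loss over a batch of positive pairs (z1[i], z2[i])."""
    z1, z2 = l2_normalize(z1), l2_normalize(z2)
    logits = z1 @ z2.T / temperature            # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives lie on the diagonal

def prediction_equivariance_loss(h1, h2, params, V):
    """Regress the augmentation parameters from the pair of embeddings (MSE)."""
    pred = np.concatenate([h1, h2], axis=1) @ V
    return np.mean((pred - params) ** 2)

# Toy batch: two augmented "views" of each image plus the parameters
# (e.g. rotation angle, crop scale) relating them.
B, D, H = 8, 16, 4
x1, x2 = rng.normal(size=(B, D)), rng.normal(size=(B, D))
aug_params = rng.normal(size=(B, 2))
W = rng.normal(size=(D, H)) * 0.1       # shared encoder weights
P = rng.normal(size=(H, H)) * 0.1       # projection head (invariance branch)
V = rng.normal(size=(2 * H, 2)) * 0.1   # prediction head (equivariance branch)

h1, h2 = encoder(x1, W), encoder(x2, W)
loss = (contrastive_invariance_loss(h1 @ P, h2 @ P)
        + prediction_equivariance_loss(h1, h2, aug_params, V))
print(float(loss))
```

After training, both heads would be discarded and only `encoder` kept for downstream tasks, as the abstract describes.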



