Improving Transformation Invariance in Contrastive Representation Learning

10/19/2020
by Adam Foster, et al.

We propose methods to strengthen the invariance properties of representations obtained by contrastive learning. While existing approaches implicitly induce a degree of invariance as representations are learned, we look to more directly enforce invariance in the encoding process. To this end, we first introduce a training objective for contrastive learning that uses a novel regularizer to control how the representation changes under transformation. We show that representations trained with this objective perform better on downstream tasks and are more robust to the introduction of nuisance transformations at test time. Second, we propose a change to how test-time representations are generated: a feature averaging approach that combines encodings from multiple transformations of the original input, which we find leads to across-the-board performance gains. Finally, we introduce the novel Spirograph dataset to explore our ideas in the context of a differentiable generative process with multiple downstream tasks, showing that our techniques for learning invariance are highly beneficial.
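The abstract names two concrete mechanisms: a contrastive objective with an invariance regularizer, and test-time feature averaging over transformed views. The following is a minimal PyTorch sketch of both ideas, not the authors' implementation: the names encoder, transform, and lambda_reg are hypothetical, the contrastive term is a generic NT-Xent loss, and the simple squared-distance penalty merely stands in for the paper's actual regularizer.

```python
# Illustrative sketch only; all names and the form of the regularizer are
# assumptions, not the method from the paper.
import torch
import torch.nn.functional as F

def contrastive_loss_with_invariance(encoder, batch, transform,
                                     temperature=0.5, lambda_reg=1.0):
    """NT-Xent-style contrastive loss plus a simple invariance penalty."""
    x1 = transform(batch)                 # first random transformation
    x2 = transform(batch)                 # second random transformation
    z1 = F.normalize(encoder(x1), dim=1)  # (N, d) unit-norm embeddings
    z2 = F.normalize(encoder(x2), dim=1)

    # Standard contrastive (NT-Xent) term over the 2N views.
    z = torch.cat([z1, z2], dim=0)                        # (2N, d)
    sim = z @ z.t() / temperature                         # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                 # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    contrastive = F.cross_entropy(sim, targets)

    # Invariance regularizer: directly penalize how far the representation
    # moves under transformation (a stand-in for the paper's regularizer).
    invariance = (z1 - z2).pow(2).sum(dim=1).mean()

    return contrastive + lambda_reg * invariance

def averaged_representation(encoder, x, transform, num_samples=8):
    """Test-time feature averaging: encode several random transformations
    of the input and average the resulting representations."""
    with torch.no_grad():
        feats = torch.stack([encoder(transform(x))
                             for _ in range(num_samples)])
    return feats.mean(dim=0)
```

In this sketch, lambda_reg trades off the contrastive term against invariance, and num_samples controls how many transformed views are averaged at test time; both are illustrative knobs, not values from the paper.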


Related research

02/24/2023 - Amortised Invariance Learning for Contrastive Self-Supervision
Contrastive self-supervised learning methods famously produce high quali...

08/22/2017 - Representation Learning by Learning to Count
We introduce a novel method for representation learning that uses an art...

09/29/2022 - Towards General-Purpose Representation Learning of Polygonal Geometries
Neural network representation learning for spatial data is a common need...

07/17/2022 - HyperInvariances: Amortizing Invariance Learning
Providing invariances in a given learning task conveys a key inductive b...

07/27/2022 - Optimizing transformations for contrastive learning in a differentiable framework
Current contrastive learning methods use random transformations sampled ...

06/28/2023 - DUET: 2D Structured and Approximately Equivariant Representations
Multiview Self-Supervised Learning (MSSL) is based on learning invarianc...

02/19/2022 - Transformation Coding: Simple Objectives for Equivariant Representations
We present a simple non-generative approach to deep representation learn...
