Viewmaker Networks: Learning Views for Unsupervised Representation Learning

10/14/2020
by   Alex Tamkin, et al.

Many recent methods for unsupervised representation learning involve training models to be invariant to different "views," or transformed versions of an input. However, designing these views requires considerable human expertise and experimentation, hindering widespread adoption of unsupervised representation learning methods across domains and modalities. To address this, we propose viewmaker networks: generative models that learn to produce input-dependent views for contrastive learning. We train this network jointly with an encoder network to produce adversarial ℓ_p perturbations for an input, which yields challenging yet useful views without extensive human tuning. Our learned views, when applied to CIFAR-10, enable comparable transfer accuracy to the well-studied augmentations used for the SimCLR model. Our views significantly outperform baseline augmentations in the speech (+9%) and wearable sensor (+17%) domains, and can be combined with handcrafted views to improve robustness to common image corruptions. Our method demonstrates that learned views are a promising way to reduce the amount of expertise and effort needed for unsupervised learning, potentially extending its benefits to a much wider set of domains.
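The core mechanism described above can be illustrated with a minimal sketch: a raw perturbation is rescaled so its ℓ_p norm matches a fixed distortion budget, then added to the input to form a "view." This is a simplified, hedged illustration, not the paper's implementation: in the actual method the perturbation comes from a trained generator network and the budget and norm choice are hyperparameters; here the perturbation is random and the `budget=0.05` value is purely illustrative.

```python
import numpy as np

def make_view(x, raw_delta, budget=0.05, p=1):
    """Sketch of the viewmaker idea: scale a perturbation onto a
    fixed l_p budget and apply it to the input to produce a view.

    In the paper, raw_delta is produced by a generative network
    trained adversarially against the encoder; here it is supplied
    by the caller (e.g. random noise) for illustration.
    """
    norm = np.sum(np.abs(raw_delta) ** p) ** (1.0 / p)
    delta = budget * raw_delta / (norm + 1e-12)  # project onto the l_p budget
    return x + delta

rng = np.random.default_rng(0)
x = rng.standard_normal(32)  # stand-in for an input (image pixels, audio frames, ...)

# Two independent perturbations of the same input give two views;
# a contrastive objective would then pull their embeddings together,
# while the viewmaker is trained to make that objective harder.
view1 = make_view(x, rng.standard_normal(32))
view2 = make_view(x, rng.standard_normal(32))
```

Normalizing to a fixed budget (rather than merely clipping) keeps every view at the same distortion strength, which is what lets the adversarial viewmaker produce "challenging yet useful" views without destroying the input's content.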


