There and back again: Cycle consistency across sets for isolating factors of variation

03/04/2021
by Kieran A. Murphy, et al.

Representation learning hinges on the task of unraveling the set of underlying explanatory factors of variation in data. In this work, we operate in the setting where limited information about the data is available in the form of groupings, or set membership, within which the underlying factors of variation are restricted to a subset. Our goal is to learn representations that isolate the factors of variation common across the groupings. Our key insight is the use of cycle consistency across sets (CCS) between the learned embeddings of images belonging to different sets. In contrast to other methods utilizing set supervision, CCS can be applied with significantly fewer constraints on the factors of variation, across a remarkably broad range of settings, and while utilizing set membership for only a fraction of the training data. By curating datasets from Shapes3D, we quantify the effectiveness of CCS through the mutual information between the learned representations and the known generative factors. In addition, we demonstrate the applicability of CCS to the tasks of digit style isolation and synthetic-to-real object pose transfer, and compare to generative approaches utilizing the same supervision.
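To make the key idea concrete, the sketch below illustrates one way a cycle-consistency-across-sets loss can be written: each embedding from set A is mapped to a soft nearest neighbor among the embeddings of set B and then back to set A, and the cycle is penalized if it does not return to its starting element. The similarity measure, temperature, soft nearest-neighbor formulation, and all names here are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F


def ccs_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Illustrative cycle-consistency loss between two sets of embeddings.

    emb_a: (N, D) embeddings of images from set A.
    emb_b: (M, D) embeddings of images from set B.
    """
    # Negative squared Euclidean distance as a similarity between embeddings.
    sim_ab = -torch.cdist(emb_a, emb_b) ** 2                # (N, M)

    # Soft nearest neighbor of each A-embedding within set B.
    weights_ab = F.softmax(sim_ab / temperature, dim=1)     # (N, M)
    soft_nn_b = weights_ab @ emb_b                          # (N, D)

    # Map each soft nearest neighbor back toward set A.
    sim_ba = -torch.cdist(soft_nn_b, emb_a) ** 2            # (N, N)

    # The cycle should land back on the element it started from, so the
    # target for row i is index i; cross-entropy enforces this.
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    return F.cross_entropy(sim_ba / temperature, targets)


if __name__ == "__main__":
    # Random embeddings stand in for an encoder's output on two sets.
    enc_a = torch.randn(32, 16, requires_grad=True)
    enc_b = torch.randn(48, 16)
    loss = ccs_loss(enc_a, enc_b)
    loss.backward()
    print(float(loss))
```

Because the cycle only needs the two sets of embeddings, a loss of this form can be applied to whichever fraction of the training data carries set labels, consistent with the partial set supervision described in the abstract.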

