
There and back again: Cycle consistency across sets for isolating factors of variation

by   Kieran A. Murphy, et al.

Representation learning hinges on the task of unraveling the set of underlying explanatory factors of variation in data. In this work, we operate in the setting where limited information is known about the data in the form of groupings, or set membership, where the underlying factors of variation are restricted to a subset. Our goal is to learn representations which isolate the factors of variation that are common across the groupings. Our key insight is the use of cycle consistency across sets (CCS) between the learned embeddings of images belonging to different sets. In contrast to other methods utilizing set supervision, CCS can be applied with significantly fewer constraints on the factors of variation, across a remarkably broad range of settings, and using set membership for only a fraction of the training data. By curating datasets from Shapes3D, we quantify the effectiveness of CCS through the mutual information between the learned representations and the known generative factors. In addition, we demonstrate the applicability of CCS to the tasks of digit style isolation and synthetic-to-real object pose transfer, and compare to generative approaches utilizing the same supervision.
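To make the "there and back again" idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) of a cycle-consistency loss between two sets of embeddings: each embedding in set A hops to a soft nearest neighbor in set B, then back to set A, and the loss penalizes failing to return to the starting point. All function names and the soft nearest-neighbor formulation are assumptions for illustration.

```python
import numpy as np

def soft_nn(query, keys, temperature=0.1):
    # Softmax-weighted nearest neighbor of `query` among rows of `keys`.
    # Lower temperature concentrates the weights on the closest key.
    d = -np.sum((keys - query) ** 2, axis=1) / temperature
    w = np.exp(d - d.max())
    w /= w.sum()
    return w @ keys

def cycle_consistency_loss(set_a, set_b, temperature=0.1):
    # For each embedding in set A: hop to its soft nearest neighbor in
    # set B ("there"), then back into set A ("back again"), and penalize
    # the squared distance between the start and the returned point.
    # A small loss indicates the two sets align along shared factors.
    loss = 0.0
    for a in set_a:
        b_hat = soft_nn(a, set_b, temperature)   # there
        a_hat = soft_nn(b_hat, set_a, temperature)  # and back again
        loss += np.sum((a - a_hat) ** 2)
    return loss / len(set_a)
```

In a training setting the embeddings would come from an encoder and the loss would be minimized by gradient descent; here the numpy version only illustrates the cycle structure of the objective.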
