
Do Generative Models Know Disentanglement? Contrastive Learning is All You Need

02/21/2021
by Xuanchi Ren, et al.

Disentangled generative models are typically trained with an extra regularization term that encourages the traversal of each latent factor to make a distinct and independent change, at the cost of generation quality. Yet when traversing the latent space of generative models trained without any disentanglement term, the generated samples still show semantically meaningful changes, raising the question: do generative models know disentanglement? We propose an unsupervised and model-agnostic method, Disentanglement via Contrast (DisCo), which operates in the Variation Space. DisCo consists of (i) a Navigator that provides traversal directions in the latent space, and (ii) a Δ-Contrastor composed of two shared-weight Encoders, which encode the image pairs generated along these directions into disentangled representations, and a difference operator that maps the encoded representations into the Variation Space. We further propose two key techniques for DisCo: an entropy-based domination loss that makes the encoded representations more disentangled, and a hard-negative flipping strategy that handles directions sharing the same semantic meaning. By optimizing the Navigator to discover disentangled directions in the latent space and the Encoders to extract disentangled representations from images via Contrastive Learning, DisCo achieves state-of-the-art disentanglement given pretrained non-disentangled generative models, including GANs, VAEs, and Flows. Project page: https://github.com/xrenaa/DisCo.
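To make the setup concrete, here is a minimal, simplified PyTorch sketch of the core DisCo objective. The toy generator, encoder, direction count, and step size are illustrative assumptions rather than the authors' implementation, and the entropy-based domination loss and hard-negative flipping described above are omitted for brevity.

```python
# Simplified sketch of the DisCo idea (assumed names and shapes; not the official code).
# A frozen generator G maps latents to images. A learnable "Navigator" holds K candidate
# directions in the latent space. An image pair generated along one direction is encoded
# by a shared-weight encoder E; the difference of the two encodings lives in the
# "Variation Space", where a contrastive loss pulls together variations from the same
# direction and pushes apart variations from different directions.

import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, NUM_DIRS, FEAT_DIM = 128, 32, 64

# Stand-in networks; in practice G is a pretrained GAN/VAE/Flow and stays frozen.
G = nn.Sequential(nn.Linear(LATENT_DIM, 3 * 32 * 32), nn.Tanh())
for p in G.parameters():
    p.requires_grad_(False)
E = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, FEAT_DIM))   # shared-weight encoder

navigator = nn.Parameter(torch.randn(NUM_DIRS, LATENT_DIM))         # traversal directions


def variation(z, dir_idx, step=3.0):
    """Encode an image pair along one latent direction and return the encoding difference."""
    d = F.normalize(navigator[dir_idx], dim=-1)          # (B, LATENT_DIM)
    x0 = G(z).view(-1, 3, 32, 32)
    x1 = G(z + step * d).view(-1, 3, 32, 32)
    return E(x1) - E(x0)                                  # a point in the Variation Space


def disco_step(batch_size=16, temperature=0.1):
    z = torch.randn(batch_size, LATENT_DIM)
    dir_idx = torch.randint(0, NUM_DIRS, (batch_size,))
    v = F.normalize(variation(z, dir_idx), dim=-1)

    # InfoNCE-style contrastive loss: variations sharing a direction index are positives.
    sim = v @ v.t() / temperature                          # (B, B) similarities
    same_dir = dir_idx.unsqueeze(0) == dir_idx.unsqueeze(1)
    not_self = ~torch.eye(batch_size, dtype=torch.bool)
    pos = (same_dir & not_self).float()

    log_prob = sim - torch.logsumexp(sim.masked_fill(~not_self, float("-inf")),
                                     dim=1, keepdim=True)
    per_sample = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    has_pos = pos.sum(1) > 0                               # skip samples with no positive
    return per_sample[has_pos].mean()
```

In training, an optimizer would update the Navigator and the Encoder while the generator remains frozen, e.g. `torch.optim.Adam([navigator, *E.parameters()])`, repeatedly calling `disco_step()` and backpropagating the returned loss.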


Related Research

07/14/2022 · Comparing the latent space of generative models
Different encodings of datapoints in the latent space of latent-vector g...

04/05/2022 · Lost in Latent Space: Disentangled Models and the Challenge of Combinatorial Generalisation
Recent research has shown that generative models with highly disentangle...

02/11/2021 · Disentangled Representations from Non-Disentangled Models
Constructing disentangled representations is known to be a difficult tas...

08/14/2021 · Unsupervised Disentanglement without Autoencoding: Pitfalls and Future Directions
Disentangled visual representations have largely been studied with gener...

12/04/2018 · A Spectral Regularizer for Unsupervised Disentanglement
Generative models that learn to associate variations in the output along...

07/25/2020 · Learning Disentangled Representations with Latent Variation Predictability
Latent traversal is a popular approach to visualize the disentangled lat...

02/13/2022 · Unsupervised Disentanglement with Tensor Product Representations on the Torus
The current methods for learning representations with auto-encoders almo...

Code Repositories

DisCo

Code for "Do Generative Models Know Disentanglement? Contrastive Learning is All You Need"


Safety-Aware-Motion-Prediction

[ICCV2021] Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving
