
Do Generative Models Know Disentanglement? Contrastive Learning is All You Need

by Xuanchi Ren, et al.

Disentangled generative models are typically trained with an extra regularization term that encourages the traversal of each latent factor to induce a distinct, independent change, at the cost of generation quality. Yet when traversing the latent space of generative models trained without such a disentanglement term, the generated samples still show semantically meaningful changes, raising the question: do generative models already know disentanglement? We propose an unsupervised, model-agnostic method: Disentanglement via Contrast (DisCo) in the Variation Space. DisCo consists of (i) a Navigator that provides traversal directions in the latent space, and (ii) a Δ-Contrastor, composed of two shared-weight Encoders that encode image pairs sampled along these directions into representations, and a difference operator that maps the encoded representations into the Variation Space. We further propose two key techniques for DisCo: an entropy-based domination loss that makes the encoded representations more disentangled, and a strategy of flipping hard negatives to handle directions that share the same semantic meaning. By optimizing the Navigator to discover disentangled directions in the latent space and the Encoders to extract disentangled representations from images via Contrastive Learning, DisCo achieves state-of-the-art disentanglement given pretrained non-disentangled generative models, including GANs, VAEs, and Flows. Project page at
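The core idea of the Δ-Contrastor can be sketched in a toy linear setting: encode an image pair generated along a latent direction, take the difference of the two representations, and treat same-direction variations as positives for contrastive learning. Everything below (the stand-in generator, encoder, dimensions, and function names) is a hypothetical illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-ins for the pretrained generator G and the
# shared-weight encoder E; all dimensions are illustrative only.
latent_dim, image_dim, rep_dim, n_dirs = 8, 32, 4, 3

W_g = rng.normal(size=(image_dim, latent_dim))      # frozen "generator"
W_e = rng.normal(size=(rep_dim, image_dim))         # shared-weight "encoder"
directions = rng.normal(size=(n_dirs, latent_dim))  # Navigator's candidate directions

def G(z):
    return W_g @ z

def E(x):
    return W_e @ x

def variation(z, d, eps=0.5):
    """Delta-Contrastor: encode an image pair along direction d and map
    the pair into the Variation Space via a difference operator."""
    return E(G(z + eps * d)) - E(G(z))

# Contrastive setup: variations along the same direction are positives,
# variations along different directions are negatives (InfoNCE-style).
z1, z2 = rng.normal(size=latent_dim), rng.normal(size=latent_dim)
v_pos_a = variation(z1, directions[0])
v_pos_b = variation(z2, directions[0])
v_neg = variation(z1, directions[1])

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In this linear toy model the variation depends only on the direction,
# not on the base latent z, so same-direction variations align perfectly.
print(round(cos(v_pos_a, v_pos_b), 3))  # → 1.0
```

With a real nonlinear generator the same-direction variations are only approximately aligned, which is exactly what the contrastive objective pulls together while pushing different-direction variations apart.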




Related Research

Comparing the latent space of generative models

Lost in Latent Space: Disentangled Models and the Challenge of Combinatorial Generalisation

Disentangled Representations from Non-Disentangled Models

Unsupervised Disentanglement without Autoencoding: Pitfalls and Future Directions

A Spectral Regularizer for Unsupervised Disentanglement

Learning Disentangled Representations with Latent Variation Predictability

Unsupervised Disentanglement with Tensor Product Representations on the Torus

Code Repositories


Code for "Do Generative Models Know Disentanglement? Contrastive Learning is All You Need"
