Do Generative Models Know Disentanglement? Contrastive Learning is All You Need

02/21/2021, by Xuanchi Ren, et al.

Disentangled generative models are typically trained with an extra regularization term, which encourages the traversal of each latent factor to make a distinct and independent change, at the cost of generation quality. Yet when traversing the latent space of generative models trained without any disentanglement term, the generated samples still show semantically meaningful change, raising the question: do generative models already know disentanglement? We propose an unsupervised and model-agnostic method, Disentanglement via Contrast (DisCo), in the Variation Space. DisCo consists of (i) a Navigator providing traversal directions in the latent space, and (ii) a Δ-Contrastor composed of two weight-sharing Encoders, which encode the pair of images generated along each direction, and a difference operator that maps the pair of encoded representations into the Variation Space. We further propose two key techniques for DisCo: an entropy-based domination loss that makes the encoded representations more disentangled, and a strategy of flipping hard negatives to handle directions that share the same semantic meaning. By jointly optimizing the Navigator to discover disentangled directions in the latent space and the Encoders to extract disentangled representations from images via Contrastive Learning, DisCo achieves state-of-the-art disentanglement on pretrained non-disentangled generative models, including GANs, VAEs, and Flows. Project page at https://github.com/xrenaa/DisCo.
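The core contrastive step can be illustrated on the difference vectors themselves: variations produced by traversing the same latent direction are treated as positives, and variations from different directions as negatives. Below is a minimal NumPy sketch of such an NT-Xent-style objective over the Variation Space. The function names (`variation`, `disco_contrastive_loss`) and the exact loss form are illustrative assumptions; the paper's actual objective, domination loss, and hard-negative flipping are not reproduced here.

```python
import numpy as np

def variation(enc_a, enc_b):
    """Difference operator: map a pair of encoded images to the Variation Space.

    Hypothetical name; stands in for DisCo's difference operator applied to
    the outputs of the two weight-sharing Encoders.
    """
    return enc_b - enc_a

def disco_contrastive_loss(variations, direction_ids, temperature=0.1):
    """NT-Xent-style contrastive loss over variation vectors (a sketch).

    variations:    (N, D) array of difference vectors in the Variation Space.
    direction_ids: (N,) index of the latent direction that produced each vector.
    Vectors from the same direction are positives; all others are negatives.
    """
    v = np.asarray(variations, dtype=float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)      # cosine similarity
    sim = v @ v.T / temperature
    np.fill_diagonal(sim, -np.inf)                        # exclude self-pairs
    ids = np.asarray(direction_ids)
    n = len(v)
    total = 0.0
    for i in range(n):
        m = sim[i].max()
        log_denom = m + np.log(np.exp(sim[i] - m).sum())  # stable logsumexp
        pos = (ids == ids[i]) & (np.arange(n) != i)
        total += -(sim[i][pos] - log_denom).mean()        # mean -log p(positive)
    return total / n
```

In this sketch, tight clusters of variations per direction yield a low loss, while mixing vectors from different directions under one label drives the loss up, which is what pressures the Navigator toward distinct, independent directions.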


Related research:

- Comparing the latent space of generative models (07/14/2022)
- Lost in Latent Space: Disentangled Models and the Challenge of Combinatorial Generalisation (04/05/2022)
- Disentangled Representations from Non-Disentangled Models (02/11/2021)
- Unsupervised Disentanglement without Autoencoding: Pitfalls and Future Directions (08/14/2021)
- A Spectral Regularizer for Unsupervised Disentanglement (12/04/2018)
- Learning Disentangled Representations with Latent Variation Predictability (07/25/2020)
- GLOWin: A Flow-based Invertible Generative Framework for Learning Disentangled Feature Representations in Medical Images (03/19/2021)
