Lost in Latent Space: Disentangled Models and the Challenge of Combinatorial Generalisation

04/05/2022
by Milton L. Montero, et al.

Recent research has shown that generative models with highly disentangled representations fail to generalise to unseen combinations of generative-factor values. These findings contradict earlier work that reported improved out-of-training-distribution performance relative to entangled representations. Moreover, it is unclear whether the reported failures arise because (a) encoders fail to map novel combinations to the proper regions of the latent space, or (b) novel combinations are mapped correctly but the decoder or downstream process cannot render the correct output for them. We investigate these alternatives by testing several models across a range of datasets and training settings. We find that (i) when models fail, their encoders also fail to map unseen combinations to the correct regions of the latent space, and (ii) when models succeed, it is either because the test conditions do not exclude enough examples or because the excluded generative factors determine independent parts of the output image. Based on these results, we argue that to generalise properly, models must not only capture the factors of variation but also learn how to invert the generative process that produced the data.
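To make the encoder-side diagnostic concrete, the sketch below shows one way such a test can be set up. It is a minimal illustration under stated assumptions, not the paper's code: it enumerates a small dSprites-style grid of generative factors, excludes one region of factor combinations from training, and uses a linear probe fit on the seen combinations to check whether the latents of unseen combinations land where the factors predict. The `encode` function here is a hypothetical stand-in; in a real experiment it would be a trained model's encoder applied to images rendered from the factors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Toy generative factors: shape (categorical) and x-position (continuous),
# enumerated on a grid, as in dSprites-style datasets.
shapes = np.arange(3)                      # 0 = square, 1 = ellipse, 2 = heart
positions = np.linspace(0.0, 1.0, 32)
factors = np.array([(s, x) for s in shapes for x in positions])

# Combinatorial hold-out: every combination where the square appears on the
# right half of the canvas is excluded from training.
held_out = (factors[:, 0] == 0) & (factors[:, 1] > 0.5)
train_f, test_f = factors[~held_out], factors[held_out]

# Hypothetical stand-in for a trained encoder. In a real experiment this
# would map images rendered from the factors to latent vectors.
W = np.array([[1.0, -0.3, 0.5],
              [0.2,  0.8, -0.5]])          # fixed 2-factor -> 3-latent map

def encode(f):
    return f @ W + 0.01 * rng.normal(size=(len(f), W.shape[1]))

z_train, z_test = encode(train_f), encode(test_f)

# Linear probe: predict the generative factors from the latents, fit only
# on combinations seen during training.
probe = LinearRegression().fit(z_train, train_f)

# Diagnostic (i): if the encoder handles novel combinations, the probe
# should recover the held-out factors from their latents equally well.
print("train R2:   ", r2_score(train_f, probe.predict(z_train)))
print("held-out R2:", r2_score(test_f, probe.predict(z_test)))
```

With this linear stand-in encoder both scores stay near 1 by construction; the paper's finding is that for real disentangled models the held-out score collapses alongside output quality, which is what implicates the encoder rather than the decoder alone.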


Related research

02/21/2021 · Do Generative Models Know Disentanglement? Contrastive Learning is All You Need
Disentangled generative models are typically trained with an extra regul...

07/14/2022 · Comparing the latent space of generative models
Different encodings of datapoints in the latent space of latent-vector g...

04/27/2018 · Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders
Generative models that learn disentangled representations for different ...

05/02/2023 · Learning Disentangled Semantic Spaces of Explanations via Invertible Neural Networks
Disentangling sentence representations over continuous spaces can be a c...

04/16/2020 · Classification Representations Can be Reused for Downstream Generations
Contrary to the convention of using supervision for class-conditioned ge...

02/13/2022 · Unsupervised Disentanglement with Tensor Product Representations on the Torus
The current methods for learning representations with auto-encoders almo...

11/01/2017 · Multi-View Data Generation Without View Supervision
The development of high-dimensional generative models has recently gaine...
