
Unsupervised Disentanglement with Tensor Product Representations on the Torus

by Michael Rotman, et al.

Current methods for learning representations with auto-encoders almost exclusively employ vectors as the latent representations. In this work, we propose to employ a tensor product structure for this purpose, so that the obtained representations are naturally disentangled. In contrast to conventional variational methods, which target normally distributed features, the latent space in our representation is distributed uniformly over a set of unit circles. We argue that the torus structure of the latent space captures the generative factors effectively. We employ recent tools for measuring unsupervised disentanglement, and in an extensive set of experiments demonstrate the advantage of our method in terms of disentanglement, completeness, and informativeness. The code for our proposed method is available at
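To illustrate the idea in the abstract, the sketch below builds a latent point as a tensor (outer) product of unit-circle embeddings: each latent angle is mapped to a point on S^1, the angles are drawn uniformly (matching the uniform prior on the torus), and the circle embeddings are combined with an outer product. This is a minimal sketch of the general construction, not the authors' implementation; the function names and the number of factors are illustrative assumptions.

```python
import numpy as np

def circle_embed(theta):
    # Map an angle to a point on the unit circle S^1.
    return np.array([np.cos(theta), np.sin(theta)])

def torus_tensor_representation(thetas):
    # Embed each angle on its own unit circle, then combine the
    # circle embeddings via an outer (tensor) product. The result
    # has one length-2 axis per generative factor.
    rep = circle_embed(thetas[0])
    for t in thetas[1:]:
        rep = np.tensordot(rep, circle_embed(t), axes=0)
    return rep

# Sample latent angles uniformly, matching a uniform prior on the torus.
rng = np.random.default_rng(0)
thetas = rng.uniform(0.0, 2.0 * np.pi, size=3)
z = torus_tensor_representation(thetas)
print(z.shape)  # (2, 2, 2)
```

Because each factor is a unit vector, the tensor product has unit Frobenius norm, and each generative factor occupies its own tensor axis, which is why this structure is naturally disentangled.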


Do Generative Models Know Disentanglement? Contrastive Learning is All You Need

Disentangled generative models are typically trained with an extra regul...

Set-Structured Latent Representations

Unstructured data often has latent component structure, such as the obje...

A Spectral Regularizer for Unsupervised Disentanglement

Generative models that learn to associate variations in the output along...

Chart Auto-Encoders for Manifold Structured Data

Auto-encoding and generative models have made tremendous successes in im...

Metrics for Exposing the Biases of Content-Style Disentanglement

Recent state-of-the-art semi- and un-supervised solutions for challengin...

Multifactor Sequential Disentanglement via Structured Koopman Autoencoders

Disentangling complex data to its latent factors of variation is a funda...

Lost in Latent Space: Disentangled Models and the Challenge of Combinatorial Generalisation

Recent research has shown that generative models with highly disentangle...